(PhysOrg.com) — With the thousands of languages in the world today, it's hard to imagine just one of them being spoken by all of the humans on Earth. And while there is no way to prove that this was the case some fifty thousand years ago, when the human race apparently shifted into behavior patterns more consistent with modern behavior than what had come before, many believe it to be so. It was during this period that early humans began to use more sophisticated tools, to paint, and to create engravings and sculpture. Many historians have attributed this "sudden" leap to the development of language.

And if that was the case, then it is likely that all the people of that time spoke the same language, given how few of them there still were. Now, well-known physicist Murray Gell-Mann and anthropologist Merritt Ruhlen argue that most languages descended from a common ancestor, one that likely arose much later as the result of a possible population bottleneck. In their paper published in the Proceedings of the National Academy of Sciences, they describe how they believe that, rather than following the more modern construct of subject-verb-object (SVO), the ancient base language instead used subject-object-verb (SOV), as is the case with so-called dead languages such as Latin.

Gell-Mann, currently a distinguished fellow with the Santa Fe Institute in New Mexico, received the Nobel Prize in Physics in the late sixties for his work on the theory of elementary particles. In addition to his numerous achievements in physics, Gell-Mann has apparently always had an interest in linguistics as well. Now in his eighties, he has embarked on what some may deem a controversial project: developing a linguistic tree going all the way back to the first human language. Thus far, he and his partner Ruhlen have come up with some 2,200 nodes comprising eight distinct branches and twenty-two sub- or sub-sub-branches. For each branch or sub-branch, the two describe its most modern state and then work backwards to show how it might have developed from an older form. Using this method to go all the way back in time to the single earlier language, the two propose it must have been of the subject-object-verb variety.

It should be noted that thus far the work is still just a hypothesis, and not all historians, or linguistics experts for that matter, agree on its validity.

Evolution of word order. Image: (c) PNAS, doi:10.1073/pnas.1113716108

More information: The origin and evolution of word order, PNAS, published online before print October 10, 2011, doi:10.1073/pnas.1113716108

Abstract: Recent work in comparative linguistics suggests that all, or almost all, attested human languages may derive from a single earlier language. If that is so, then this language—like nearly all extant languages—most likely had a basic ordering of the subject (S), verb (V), and object (O) in a declarative sentence of the type "the man (S) killed (V) the bear (O)." When one compares the distribution of the existing structural types with the putative phylogenetic tree of human languages, four conclusions may be drawn. (i) The word order in the ancestral language was SOV. (ii) Except for cases of diffusion, the direction of syntactic change, when it occurs, has been for the most part SOV > SVO and, beyond that, SVO > VSO/VOS, with a subsequent reversion to SVO occurring occasionally. Reversion to SOV occurs only through diffusion. (iii) Diffusion, although important, is not the dominant process in the evolution of word order. (iv) The two extremely rare word orders (OVS and OSV) derive directly from SOV.

© 2011 PhysOrg.com

Citation: Noted physicist teams with anthropologist to create ancient linguistic tree (2011, October 12), retrieved 18 August 2019 from https://phys.org/news/2011-10-physicist-teams-anthropologist-ancient-linguistic.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
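The direction-of-change rules summarized in the abstract above can be sketched as a small transition map. This encoding is a hypothetical illustration, not part of the paper; it omits diffusion, which the authors say is the only route back to SOV.

```python
# Hypothetical encoding of the word-order changes summarized in the
# Gell-Mann & Ruhlen abstract (diffusion, which can restore SOV, is omitted).
# Each key maps a word order to the orders it can change into.
TRANSITIONS = {
    "SOV": {"SVO", "OVS", "OSV"},  # ancestral order; the rare orders derive directly from it
    "SVO": {"VSO", "VOS"},         # SOV > SVO, then SVO > VSO/VOS
    "VSO": {"SVO"},                # occasional reversion to SVO
    "VOS": {"SVO"},
}

def reachable(start):
    """All word orders reachable from `start` via the proposed changes."""
    seen, stack = set(), [start]
    while stack:
        order = stack.pop()
        for nxt in TRANSITIONS.get(order, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

Under this map every attested basic order is reachable from SOV, but nothing leads back to SOV, matching conclusion (ii).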
(PhysOrg.com) — First it was grapes, then cockroaches, and now snails have become the latest organism to generate electricity through an implanted biofuel cell. The process works similarly in all three cases: the electricity comes from a metabolic process involving the transfer of electrons from sugar (such as glucose) to oxygen. In the case of the snail, two electrodes from a biofuel cell are implanted into holes in the snail's shell, with the anode performing glucose oxidation and the cathode performing oxygen reduction. When electrons flow between the electrodes, they produce an electric current.

A snail with implanted electrodes connected with crocodile clips to external circuitry. Image credit: L. Halamkova, et al. ©2012 American Chemical Society

But whereas the grapes and cockroaches could generate electricity for just days or weeks, Evgeny Katz, a professor of chemistry at Clarkson University in Potsdam, New York, and colleagues have shown that the snail can generate electricity for many months at a time. And in spite of the electrodes in their shells, the snails live long, healthy lives. "The animals are quite fit – they eat, drink and crawl," Katz told Nature News. "We take care to keep them alive and happy."

Although a snail's tissues and organs are bathed in blood, or haemolymph, it takes time to regenerate its glucose levels, which means snails don't generate very large amounts of power. For the first few minutes, the researchers could extract 7.45 microwatts, but this power decreased to just 0.16 microwatts during long-term, continuous extraction. The main cause of this decay is the local depletion of glucose at the electrode surface. Still, the snail's eating and resting could sufficiently regenerate its overall glucose levels, allowing it to "recharge" and produce sustained electrical power.

These snails – as well as other potentially electrified creatures such as worms and insects – could be useful for powering low-power devices such as sensors and wireless transmitters. The US Department of Defense is funding cyborg research in the hope of creating bugs that can gather information about their environment while crawling around. Researchers are also investigating medical applications, in which a patient's implantable biofuel cell could use his or her own blood glucose to power medical devices such as pacemakers. In the future, the researchers at Clarkson University plan to electrify lobsters in the same way as the snails, in the hope that the larger animals' metabolism could provide more power.

More information: Lenka Halámková, et al. "Implanted Biofuel Cell Operating in a Living Snail." Journal of the American Chemical Society. DOI: 10.1021/ja211714w

Abstract: Implantable biofuel cells have been suggested as sustainable micropower sources operating in living organisms, but such bioelectronic systems are still exotic and very challenging to design. Very few examples of abiotic and enzyme-based biofuel cells operating in animals in vivo have been reported. Implantation of biocatalytic electrodes and extraction of electrical power from small living creatures is even more difficult and has not been achieved to date. Here we report on the first implanted biofuel cell continuously operating in a snail and producing electrical power over a long period of time using physiologically produced glucose as a fuel. The "electrified" snail, being a biotechnological living "device", was able to regenerate glucose consumed by biocatalytic electrodes, upon appropriate feeding and relaxing, and then produce a new "portion" of electrical energy. The snail with the implanted biofuel cell will be able to operate in a natural environment, producing sustainable electrical micropower for activating various bioelectronic devices.

via: Nature News. Journal information: Journal of the American Chemical Society

© 2011 PhysOrg.com

Citation: Cyborg snail produces electricity (2012, March 15), retrieved 18 August 2019 from https://phys.org/news/2012-03-cyborg-snail-electricity.html
(Phys.org)—One of the first observations suggesting the existence of an invisible dark matter came in 1933, when astronomer Fritz Zwicky noticed that galaxy clusters were more energetic than they should be according to the mass of the visible stars in them, and he proposed dark matter to explain the discrepancy. Later observations of galaxies (by Rubin and Ford, among others) showed that the edges of galaxies were rotating as fast as their insides, even though acceleration is supposed to decrease with radius.

While dark matter is still the most popular explanation for this and other problems, many alternative explanations have been proposed. Most recently, Michael McCulloch of Plymouth University in the UK, who specializes in geomatics (the mathematics of positioning in space), has proposed that a new model that modifies a galaxy's inertial mass may account for the faster-than-expected rotation at a galaxy's outer edges, even though this model violates Einstein's famous equivalence principle. McCulloch's paper on the model of modified inertial mass is published in Astrophysics and Space Science, and is also posted at arXiv.org.

Two kinds of mass

In general, there are two ways to calculate the mass of any object. One way involves comparing the force of gravity on an object of unknown mass to the force of gravity on an object whose mass is known. This method, which the bathroom scale is based on, gives an object's gravitational mass. The second method, which gives inertial mass, involves applying a known force to an object of unknown mass, measuring the resulting acceleration, and calculating the mass using Newton's second law (m = F/a). In 1907, Einstein proposed that gravitational mass and inertial mass are always equal; this is known as the equivalence principle and serves as a fundamental concept of general relativity.

Although tests of the equivalence principle have verified that Einstein is correct to many decimal places of accuracy, some scientists have been willing to violate the equivalence principle in attempts to explain the galactic rotation problem without invoking dark matter. One such explanation came in 1983, when physicist Mordehai Milgrom proposed a theory called Modified Newtonian Dynamics (MoND), which can either slightly modify the gravitational constant or slightly modify Newton's second (inertial) law at very small gravitational accelerations. According to MoND, the velocity of stars in a circular orbit far from the center of a galaxy is constant and does not depend on the distance from the center. However, for MoND to work, an adjustable parameter must be set.

In 2007, McCulloch proposed a model to explain the flatness of galactic rotation that is similar to the second (inertial) version of MoND, in that it also proposes modifications of an object's inertial mass at small accelerations, deviating from Newton's second law. Unlike MoND, this new model does not need an adjustable parameter. However, both models violate the equivalence principle when masses have very small accelerations – and at the edges of galaxies, the gravitational acceleration is extremely small compared to that on Earth.

"The accelerations we are familiar with on Earth are around 9.8 m/s²," McCulloch told Phys.org. "At the edges of galaxies, the acceleration is only on the order of 10⁻¹⁰ m/s². At this tiny acceleration it would take you 317 years to get from rest to a speed of 1 m/s, or from 0 to 60 miles per hour in 8,500 years! Or, as Milgrom once wrote, the lifetime of the universe to get near to the speed of light."

Mass of accelerating objects

In the new study, McCulloch expands on his model, called Modification of Inertia resulting from a Hubble-scale Casimir effect (MiHsC), or quantized inertia. This model proposes that accurately calculating an object's inertial mass involves accounting for the emission of photons, or Unruh radiation, that occurs as a result of the object's acceleration with respect to surrounding matter. The existence of Unruh radiation is a subject of some dispute, since it is unclear whether it has been observed. In the MiHsC model, a Hubble-scale Casimir effect, which can be thought of as a vacuum energy arising from virtual particles, imposes restrictions on the Unruh radiation wavelengths. As an object's acceleration decreases, Unruh wavelengths lengthen to the Hubble scale, and more of them are disallowed. Because this radiation is assumed in MiHsC to contribute to inertial mass, a decrease in acceleration leads to fewer Unruh waves and a gradual decrease in the object's inertial mass. With a smaller inertial mass, a star within a galaxy can be accelerated into a bound orbit more easily by the same gravitational force.

"There are two kinds of mass: gravitational mass (GM, measured by the gravitational force produced by the galaxy) and inertial mass (IM, measured by the ease of response of a star to a force)," McCulloch said. "These are usually assumed to be equal. The point is that you can either (1) increase the GM of the galaxy to hold its stars in with more force (dark matter), or (2) decrease the IM of the stars so that they can be bent more easily into a bound orbit even by the small existing gravitational force from the visible mass. MiHsC/quantized inertia does the latter."

By assuming that a galaxy's inertia is due to Unruh radiation that is subject to a Hubble-scale Casimir effect, McCulloch derived a relation between the velocity and visible mass of a galaxy or galaxy cluster (a Tully-Fisher relation). Using only the mass from baryonic (visible) matter, he could use the relation to predict the rotational velocity of dwarf galaxies, spiral galaxies, and galaxy clusters. Although the predictions overestimate the observed velocities by one-third to one-half, they are still within the error bars. (Uncertainty arises from uncertainty in the Hubble constant and in the ratio of stellar mass to light, affecting mass estimates based on observation.)

"MiHsC predicts that, as an object's acceleration decreases, the Unruh waves it sees become large compared to the Hubble scale, so they become impossible to detect, and so a greater proportion of them are disallowed," McCulloch explained. "This kind of thinking, 'If you can't directly observe it, then forget it,' may seem strange, but it has a distinguished history. It was discussed by Berkeley and Mach, and it was used by Einstein to discredit Newton's concept of absolute space and formulate special relativity. Back to MiHsC: at this low acceleration, then, stars cannot see the Unruh waves, start to lose their inertial mass very quickly, and this makes it easier for an existing external force to accelerate them again, so their acceleration increases, they see more Unruh waves, gain inertia, and decelerate. A balance is achieved around a minimum acceleration, which is predicted to be close to the recently observed cosmic acceleration, and MiHsC predicts galaxy rotation within the uncertainty without any adjustable parameters."

Although MiHsC and MoND are somewhat similar, as mentioned above, with both predicting the observed velocities within error bars, MiHsC uses no adjustable parameters, while MoND requires an unexplained adjustable acceleration parameter to fit the data.

Testing predictions

Whether or not MiHsC turns out to be true remains to be seen. As noted above, the model violates Einstein's equivalence principle. Although the equivalence principle has been well tested, this particular violation of it could not have been seen in those tests.

"At the normal accelerations that we see on Earth (9.8 m/s²), the disagreement between MiHsC and equivalence is tiny; it only becomes important at accelerations as small as 10⁻¹⁰ m/s²," McCulloch said. "Torsion balance experiments have tested the equivalence principle down to accelerations of 10⁻¹⁵ m/s², but they cannot show the effects of MiHsC. This is because these experiments are more accurate versions of Galileo's experiment in which he dropped two objects of different mass off a tower. If the equivalence principle is right, the heavier object should be attracted downwards (gravitationally) more to the Earth (due to its greater gravitational mass, GM), but should also find it equally harder to accelerate towards the Earth due to its greater inertial mass (IM), so the two objects should fall together. The anomalous acceleration predicted by MiHsC due to the difference between GM and IM is independent of the mass of the objects, so the two objects would still drop together, although both would drop slightly more quickly than expected. So MiHsC cannot be detected in these kinds of experiments."

Also, MiHsC makes a testable prediction: accelerations at a galaxy's edge should remain above a certain value, offsetting the traditional decrease in acceleration with radius. McCulloch hopes that future observations will provide support for the MiHsC model. "I am trying to devise an unambiguous test," he said. "The problem with astronomical data is that often there can be more than one explanation of an observation, so it is hard to prove things conclusively. The best proof would be a lab experiment where one can control the conditions and isolate causes. A possible experiment would be to cool an object to, say, 5 K while weighing it. Tests with spacecraft may also be possible. I am trying to get funding to attempt something like this."

A comparison of the observed rotation speeds in km/s (black dots) with the predictions of MoND (dotted) and MiHsC (dashed) for galaxies and galaxy clusters of increasing baryonic mass (in solar masses). Credit: M.E. McCulloch

More information: M.E. McCulloch. "Testing Quantised Inertia on Galactic Scales." Astrophysics and Space Science. DOI: 10.1007/s10509-012-1197-0. Also at arXiv:1207.7007v1 [physics.gen-ph]. McCulloch's blog: http://physicsfromtheedge.blogspot.co.uk/

Journal information: Astrophysics and Space Science

© 2012 Phys.org

Citation: Dark matter effect might be explained by modified way to calculate inertial mass (2012, September 18), retrieved 18 August 2019 from https://phys.org/news/2012-09-dark-effect-inertial-mass.html
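Two of the figures McCulloch quotes above are easy to check in a few lines. The constants below are rough assumed round values, not numbers from his paper: the Hubble constant and the "2c²/Θ" form of the minimum acceleration (with Θ the Hubble diameter) are stated here as assumptions for illustration.

```python
# Rough numerical checks of figures quoted above. Constants are assumed
# round values, not taken from the paper.
c = 2.998e8               # speed of light, m/s
H0 = 2.3e-18              # Hubble constant, 1/s (~71 km/s/Mpc), assumed value
SECONDS_PER_YEAR = 3.156e7

def years_to_reach(v_mps, a_mps2):
    """Years to reach speed v (m/s) from rest at constant acceleration a (m/s^2)."""
    return v_mps / a_mps2 / SECONDS_PER_YEAR

# At a galactic-edge acceleration of ~1e-10 m/s^2:
a_edge = 1e-10
t_1mps = years_to_reach(1.0, a_edge)     # ~317 years to reach 1 m/s
t_60mph = years_to_reach(26.82, a_edge)  # ~8,500 years to reach 60 mph (26.82 m/s)

# Assumed form of MiHsC's minimum acceleration: a_min ~ 2*c^2 / Theta, with
# Theta the Hubble diameter (2c/H0). Note this reduces algebraically to c*H0,
# the observed cosmic-acceleration scale (~7e-10 m/s^2).
Theta = 2 * c / H0
a_min = 2 * c**2 / Theta
```

The first two numbers reproduce the "317 years" and "8,500 years" figures in the quote; the last shows why the predicted minimum acceleration lands near the observed cosmic acceleration.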
(Phys.org)—Traditional earthquake simulators have generally functioned by pressing two pieces of rock together at high pressure until they reach a breaking point, resulting in something similar to the sticking and sliding of real earthquakes. The problem with this approach, of course, is that because of the difference in scale, it's not clear whether such a model can accurately represent real events. For this reason, researchers from the University of Oklahoma and the U.S. Geological Survey tried a new approach, pressing a piece of granite against a rotating flywheel. As described in a paper published in the journal Science, the researchers discovered that the resulting energy densities compared closely with results from measurements of real earthquakes.

The apparatus with builder Joel Young. Credit: Science, DOI: 10.1126/science.1221195

Real earthquakes generally occur when the pressure between geological plates pressing against one another reaches a "breaking point." Because each plate interferes with the other's movement, one invariably slides beneath the other in a slipping-and-catching process that causes the ground above to undulate unpredictably. Creating models with the same effects allows researchers to better understand the whole process, and allows urban planners to build better, more earthquake-proof structures. Unfortunately, most models haven't been able to reproduce the huge energy release found in real earthquakes.

In this new effort, the research team attempted to deliver a massive burst of energy to a sample of rock, rather than the slow crushing energy generally applied. To make this happen, they attached a clutch to a 500-pound flywheel that, at a set point, grabbed a large disk-shaped piece of granite and pressed it against the flywheel. The pressing caused energy to be quickly transferred to the granite, making it spin. This configuration more faithfully represents the events of real earthquakes, the researchers suggest, because it allows for the sudden transfer of energy into rock that happens when the breaking point is reached.

In reviewing their results, the team found similarities between the characteristics of the granite disks and rock samples taken from earthquake fault areas, suggesting that their experiment closely emulated what happens to rock in earthquake zones. They also found that by varying the flywheel speed they could create earthquake simulations for magnitudes ranging from four to eight.

More information: Rapid Acceleration Leads to Rapid Weakening in Earthquake-Like Laboratory Experiments, Science, 5 October 2012: Vol. 338, no. 6103, pp. 101-105. DOI: 10.1126/science.1221195

Abstract: After nucleation, a large earthquake propagates as an expanding rupture front along a fault. This front activates countless fault patches that slip by consuming energy stored in Earth's crust. We simulated the slip of a fault patch by rapidly loading an experimental fault with energy stored in a spinning flywheel. The spontaneous evolution of strength, acceleration, and velocity indicates that our experiments are proxies of fault-patch behavior during earthquakes of moment magnitude (Mw) = 4 to 8. We show that seismically determined earthquake parameters (e.g., displacement, velocity, magnitude, or fracture energy) can be used to estimate the intensity of the energy release during an earthquake. Our experiments further indicate that high acceleration imposed by the earthquake's rupture front quickens dynamic weakening by intense wear of the fault zone.

Journal information: Science

© 2012 Phys.org

Citation: Researchers build better earthquake simulator (2012, October 5), retrieved 18 August 2019 from https://phys.org/news/2012-10-earthquake-simulator.html
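For a sense of the scales involved, the energy stored in a spinning flywheel is E = ½Iω², and a standard seismological relation, log₁₀E = 1.5·Mw + 4.8 (E in joules), links radiated energy to moment magnitude. The flywheel dimensions and speed below are assumed for illustration only; the paper's experiments are scaled proxies of single fault patches, not whole earthquakes.

```python
import math

# Illustrative sketch (values assumed, not from the paper): energy stored in a
# solid-disk flywheel, E = 0.5 * I * w^2 with I = 0.5 * m * r^2, and the
# standard relation log10(E) = 1.5*Mw + 4.8 between radiated energy (J) and
# moment magnitude.
def flywheel_energy(mass_kg, radius_m, rpm):
    """Kinetic energy (J) of a solid-disk flywheel spinning at `rpm`."""
    inertia = 0.5 * mass_kg * radius_m**2          # moment of inertia, kg*m^2
    omega = rpm * 2 * math.pi / 60.0               # angular speed, rad/s
    return 0.5 * inertia * omega**2

def magnitude_from_energy(energy_joules):
    """Moment magnitude from radiated energy via log10(E) = 1.5*Mw + 4.8."""
    return (math.log10(energy_joules) - 4.8) / 1.5

# A ~500 lb (227 kg) flywheel of assumed 0.5 m radius spun at 3000 rpm
# stores about 1.4 MJ, which is then dumped into the granite sample at once.
E = flywheel_energy(227, 0.5, 3000)
```

Varying the rpm changes the stored energy quadratically, which is consistent with the article's note that flywheel speed controlled the simulated magnitude.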
(Phys.org)—Imagine how jarring the experience can be: blinding light that becomes a visual impairment to the point where the driver cannot manage to drive correctly. In a study titled "Statistical Assessment of the Glare Issue – Human and Natural Elements," Eun-Ha Choi and Santokh Singh wrote that, whether it comes from headlamps or sunlight, glare affects driving performance. "The challenge for vehicle manufacturers and regulators is to provide the driver with a reasonable level of protection from glare. Empirical research is necessary in order to address this issue," they said.

Credit: Eun-Ha Choi and Santokh Singh, "Statistical Assessment of the Glare Issue – Human and Natural Elements"

Recent research shows that scientists continue to look for ways to combat glare, and a vibrating steering wheel prototype might prevent the accidents that can easily result. Eelke Folmer, an associate professor in the Department of Computer Science and Engineering at the University of Nevada, Reno, and Burkay Sucu are the researchers behind a vibrating wheel designed to steer drivers back onto a safe path when glare prevents them from safely steering their own vehicles. Aware that temporary blindness from unexpected light, such as winter glare or any other kind, can lead to accidents, they wanted a solution that could guide drivers to proceed safely through tactile cues. They tested their wheel on 12 volunteers in a simulator.

How it works: GPS and lane-keeping cameras map the road ahead. When the sensors identify the driver as drifting from the lane, a vibro-tactile system buzzes. The vibrations are tuned to 275 hertz, a frequency to which human skin is sensitive. For example, if a driver drifted left, the left side of the wheel would vibrate, instructing the driver to steer right; once the driver steered right, the vibration would stop.

Using touch to correct a driver's lane position is nothing new, however. Last year, researchers at Carnegie Mellon University and AT&T Labs also showed a vibrating steering wheel concept for providing directions while keeping a driver safe on the road. Also last year, Ford showed off its 2013 Fusion at the Detroit Auto Show, and among its features was a steering-wheel vibration to warn the driver if the car drifted too close to lane markings. While haptic steering wheels are nothing new, the Folmer-Sucu prototype, according to the two researchers, bears distinctions. "Existing haptic automotive interfaces typically indicate when and in which direction to steer, but they don't convey how much to steer, as a driver typically determines this using visual feedback," they stated. Their haptic interface involves an intelligent vehicle-position system that indicates when, in which direction, and how far to steer, supporting steering without any visual feedback. "Our interface may improve driving safety when a driver is temporarily blinded, for example, due to glare or fog."

Their paper, "Haptic Interface for Non-Visual Steering," has been accepted for the International Conference on Intelligent User Interfaces, scheduled for March 19 to 22 in Santa Monica, California. According to the paper, Folmer and Sucu performed three user studies: "The first study tries to understand driving using visual feedback, the second study evaluates two different haptic encoding mechanisms with no visual feedback present, and a third study evaluates the supplement effect of haptic feedback when used with visual feedback."

More information: Research paper (PDF): www.fcsm.gov/05papers/Choi_Singh_IVA.pdf (via New Scientist)

© 2013 Phys.org

Citation: Vibrating steering wheel may rescue driver from blinding glare (2013, January 21), retrieved 18 August 2019 from https://phys.org/news/2013-01-vibrating-wheel-driver-glare.html
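The direction-and-magnitude encoding described above can be sketched in a few lines. Everything here is a hypothetical illustration of the idea, not the researchers' implementation: the function name, the drift convention, and the 0.5 m saturation threshold are all invented; only the 275 Hz figure and the "vibrate the side you're drifting toward, scale by how far to steer" behavior come from the article.

```python
# Hypothetical sketch of a haptic lane-keeping cue like the one described above:
# vibrate the side of the wheel the car is drifting toward (at ~275 Hz, a
# frequency human skin is sensitive to), with intensity scaled by how far the
# driver needs to steer. Thresholds and names are invented for illustration.
VIBRATION_HZ = 275

def haptic_cue(lane_offset_m, max_offset_m=0.5):
    """Map lateral drift (negative = left, positive = right) to a cue.

    Returns (side_to_vibrate, frequency_hz, intensity in 0..1). The driver
    steers away from the vibrating side until the cue stops.
    """
    if lane_offset_m == 0:
        return (None, 0, 0.0)
    side = "left" if lane_offset_m < 0 else "right"
    intensity = min(abs(lane_offset_m) / max_offset_m, 1.0)
    return (side, VIBRATION_HZ, intensity)
```

Scaling the intensity (rather than just buzzing on/off) is what would convey "how much to steer," the distinction the researchers claim over earlier haptic wheels.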
(Phys.org)—Artifacts from the Middle Stone Age, which lasted from about 200,000 to 50,000 years ago, provide us with the earliest glimpses of modern human art and culture. Previously, scientists thought an increase in population drove the behavioral innovations that led to the creation of these artifacts and, eventually, the expansion out of Africa. However, by examining mollusk shells from Stone Age sites, Richard Klein of Stanford University and Teresa Steele of the University of California, Davis, have determined that a significant population increase did not occur until the Late Stone Age, after the migration out of Africa had already begun. Their research appears in the Proceedings of the National Academy of Sciences.

Cymbula oculus (Born, 1778). Credit: H. Zell / Wikipedia

Archeologists have found precursors of modern human artwork and jewelry, including fragments of ochre with abstract incisions and shells with perforations, in Middle Stone Age sites. The humans who made them, between 85,000 and 65,000 years ago, must have had modern cognitive abilities and exhibited modern behaviors. During the Late Stone Age, these abilities and behaviors allowed humans to create objects that are recognizable as art and spurred the migration to Eurasia.

Population growth has been a popular explanation for the innovations of the Middle Stone Age. As population increases, the chance that someone will come up with an innovative idea also increases. At the same time, the probability that an idea will be lost decreases.

To test the hypothesis that a large increase in population drove Middle Stone Age innovation, Klein and Steele measured the shells of slow-growing mollusks found in Middle and Late Stone Age middens on the southern and western coasts of South Africa. They reasoned that selection pressure, caused by an increase in human population, would decrease median shell size: frequent foraging by large numbers of humans would have prevented many shellfish from reaching their full size. The researchers found that the median size of Middle Stone Age shells was larger than that of Late Stone Age shells. This showed that selection pressure, and therefore human population, was greater in the Late Stone Age than in the Middle Stone Age. In addition, shellfish from smaller species were more common in Late Stone Age sites than in Middle Stone Age sites; as selection pressure increased with population size, humans would be less likely to overlook smaller shellfish as a source of food.

Klein and Steele argue that because the population increase occurred only after the migration out of Africa, which took place between 60,000 and 50,000 years ago, had already begun, there must be another explanation for the cultural advancements of the Middle Stone Age. This could be a change in the human genome or pressure caused by climate change.

More information: Archaeological shellfish size and later human evolution in Africa, PNAS, published online before print June 17, 2013, doi: 10.1073/pnas.1304750110

Abstract: Approximately 50 ka, one or more subgroups of modern humans expanded from Africa to populate the rest of the world. Significant behavioral change accompanied this expansion, and archaeologists commonly seek its roots in the African Middle Stone Age (MSA; ∼200 to ∼50 ka). Easily recognizable art objects and "jewelry" become common only in sites that postdate the MSA in Africa and Eurasia, but some MSA sites contain possible precursors, especially including abstractly incised fragments of ocher and perforated shells interpreted as beads. These proposed art objects have convinced most specialists that MSA people were behaviorally (cognitively) modern, and many argue that population growth explains the appearance of art in the MSA and its post-MSA florescence. The average size of rocky intertidal gastropod species in MSA and later coastal middens allows a test of this idea, because smaller size implies more intense collection, and more intense collection is most readily attributed to growth in the number of human collectors. Here we demonstrate that economically important Cape turban shells and limpets from MSA layers along the south and west coasts of South Africa are consistently and significantly larger than turban shells and limpets in succeeding Later Stone Age (LSA) layers that formed under equivalent environmental conditions. We conclude that whatever cognitive capacity precocious MSA artifacts imply, it was not associated with human population growth. MSA populations remained consistently small by LSA standards, and a substantial increase in population size is obvious only near the MSA/LSA transition, when it is dramatically reflected in the Out-of-Africa expansion.

Journal information: Proceedings of the National Academy of Sciences

© 2013 Phys.org

Citation: Shellfish show population growth did not send humans out of Africa (2013, June 18), retrieved 18 August 2019 from https://phys.org/news/2013-06-shellfish-population-growth-humans-africa.html
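The logic of the shell-size test above is simple enough to state as a comparison of medians. The measurements below are invented for illustration; only the direction of the expected difference comes from the study.

```python
from statistics import median

# Toy illustration of the shell-size test (all measurements invented): heavier
# foraging by a larger population should drive median shell size down, so
# Middle Stone Age (MSA) middens should hold larger shells than Later Stone
# Age (LSA) middens if the MSA population was smaller.
msa_shells_mm = [62, 58, 71, 65, 60, 68]   # hypothetical MSA shell lengths
lsa_shells_mm = [41, 47, 39, 52, 44, 46]   # hypothetical LSA shell lengths

# A larger MSA median is consistent with a smaller MSA population.
msa_larger = median(msa_shells_mm) > median(lsa_shells_mm)
```

In the actual study this comparison was made for Cape turban shells and limpets from layers formed under equivalent environmental conditions, so that climate rather than foraging pressure could be ruled out.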
The black-hole conundrum grew out of work done by John Wheeler in the 1960s and then by Stephen Hawking and colleagues, first in the early 1970s and again in 1976. It centers on the question of what happens to the information carried by particles that are pulled into a black hole once the black hole shrinks away to nothing. It was Hawking who first postulated that, contrary to prior belief, black holes do emit something, now called Hawking radiation. But, as he and colleagues noted in the later paper, such radiation would be completely random, which suggests that once the black hole was gone, the information carried by the infalling particles would be lost forever. This runs contrary to quantum theory, which requires that information be conserved, and thus there came to exist a conundrum. Moving forward 40 years, Hawking and colleagues believe they have solved the conundrum: the earlier work, they suggest, did not take into account the possibility of empty space carrying information. More specifically, they propose that soft particles are at work. These particles, they note, can exist in a zero-energy state, and because of that, particles falling into a black hole could leave their information behind. Most in the field have followed them to this point; it is the next step that causes concern. Hawking and his colleagues go on to suggest a mechanism for transferring that information, which they call black-hole "soft hair," a term they coined for calculations showing that data could be encoded in quantum descriptions of the event horizon, where it would be stored rather than lost. Some in the field have expressed frustration with the soft-hair idea, in part because Hawking and his team have yet to explain how the information would actually be transferred to the Hawking radiation.
This suggests that more work will have to be done before the idea is accepted by the majority of scientists in the field. A black hole devouring a star. Credit: NASA © 2016 Phys.org More information: Soft Hair on Black Holes, arXiv:1601.00921 [hep-th] arxiv.org/abs/1601.00921 Abstract: It has recently been shown that BMS supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e. zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This paper gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units. (Phys.org)—It has been nearly a month since Stephen Hawking, Malcolm Perry and Andrew Strominger uploaded a paper to the arXiv preprint server that described a possible solution to the black-hole conundrum: they showed a way that information that had been pulled into a black hole could be retained via soft particles. Now that others in the field have had time to react to the paper, there appears to be a split; some agree with the findings while others suggest that there is still a vital piece of the puzzle to be explained.
Citation: Physicists split on ideas expressed in Hawking’s latest black hole paper (2016, January 28) retrieved 18 August 2019 from https://phys.org/news/2016-01-physicists-ideas-hawking-latest-black.html Journal information: arXiv
Map of South Africa. The Barberton greenstone belt shown in red. Credit: Wikipedia Citation: Study suggests ocean was cooler than others have suggested during time life began on Earth (2016, February 29) retrieved 18 August 2019 from https://phys.org/news/2016-02-ocean-cooler-life-began-earth.html Journal information: Science Advances © 2016 Phys.org Prior studies have been done on some particular rocks found in South Africa, important because they have been dated to approximately 3.5 billion years ago, roughly the same period in which scientists believe life first got its start on planet Earth. Those earlier studies showed that the rocks once resided at the bottom of the ocean, and that the ocean at that time was approximately 55 to 85 degrees Celsius. In this new study, the research pair suggest that estimate was in error because it did not take into account the possibility that the rocks were near hydrothermal vents, where the water is always warmer than the rest of the ocean. To get a better idea of the true early water temperature, the researchers studied other nearby rocks of the same age that had formed from ocean sediments, which means they could not have resided near a vent. Those rocks, the team reports, contained gypsum, which in modern times grows only in cold deep-sea water. They also noted that tiny grains of iron present in the rock at the time of its formation revealed that it had formed at low latitudes, very near the equator. Taken together, the researchers claim, the evidence suggests that both the oceans and the atmosphere were likely similar to conditions today, and thus that our planet may have resided in the Goldilocks zone (neither too hot nor too cold to support life) for the entire duration of the existence of life on Earth.
They further suggest that their findings will lay to rest the common assumption that life could only have arisen on Earth during a time when the oceans were much warmer than today. A pair of researchers, one with Nelson Mandela Metropolitan University in South Africa and the other with the University of Bergen in Norway, has conducted a study of rocks in South Africa and concluded that the ocean was not as warm as other studies had suggested during the period when life is believed to have first appeared on Earth. In their paper published in the journal Science Advances, Maarten de Wit and Harald Furnes describe their results and why they now believe that our planet may have existed in the Goldilocks zone for the entire time that life has existed on it. More information: M. J. de Wit et al, 3.5-Ga hydrothermal fields and diamictites in the Barberton Greenstone Belt—Paleoarchean crust in cold environments, Science Advances (2016). DOI: 10.1126/sciadv.1500368
(Phys.org)—A team of researchers from the U.K. and the U.S. has found that England’s famed white cliffs have been eroding away 10 times faster over the past century and a half than they did in the prior 7,000 years. In their paper published in Proceedings of the National Academy of Sciences, the researchers describe how they were able to measure the rate of erosion over such a long time frame and what their findings might mean for the future of the iconic sea front. Beachy Head. Credit: University of Glasgow More information: Martin D. Hurst et al. Recent acceleration in coastal cliff retreat rates on the south coast of Great Britain, Proceedings of the National Academy of Sciences (2016). DOI: 10.1073/pnas.1613044113 Abstract: Rising sea levels and increased storminess are expected to accelerate the erosion of soft-cliff coastlines, threatening coastal infrastructure and livelihoods. To develop predictive models of future coastal change we need fundamentally to know how rapidly coasts have been eroding in the past, and to understand the driving mechanisms of coastal change. Direct observations of cliff retreat rarely extend beyond 150 y, during which humans have significantly modified the coastal system. Cliff retreat rates are unknown in prior centuries and millennia. In this study, we derived retreat rates of chalk cliffs on the south coast of Great Britain over millennial time scales by coupling high-precision cosmogenic radionuclide geochronology and rigorous numerical modeling. Measured 10Be concentrations on rocky coastal platforms were compared with simulations of coastal evolution using a Monte Carlo approach to determine the most likely history of cliff retreat. The 10Be concentrations are consistent with retreat rates of chalk cliffs that were relatively slow (2–6 cm⋅y−1) until a few hundred years ago.
Historical observations reveal that retreat rates have subsequently accelerated by an order of magnitude (22–32 cm⋅y−1). We suggest that acceleration is the result of thinning of cliff-front beaches, exacerbated by regional storminess and anthropogenic modification of the coast. The white coloring of the cliffs is due to the material of which they are composed: mostly chalk, which is particularly vulnerable to erosion. Prior research has suggested the cliffs first formed approximately 90 million years ago and have been eroding ever since. In the past, however, the erosion was slowed by wide beaches that reflected much of the incoming wave energy back out to sea; now, the researchers note, there is hardly any beach left at all. To learn more about the pace of erosion on the cliffs, the researchers used a technique that measures the changes that occur in rock as it is exposed to cosmic rays. More specifically, they measured the isotope beryllium-10, which forms and builds up in the chalk platforms at the foot of the cliffs as the rock is struck by cosmic rays, so its concentration records how long the rock has been exposed. Their study suggested that until about 150 years ago, the pace of erosion had been approximately two to six centimeters a year, going back approximately 7,000 years. But starting approximately 150 years ago, things picked up and the cliffs began to erode at approximately 22 to 32 centimeters a year. The researchers theorize the change likely came about due to the construction of sea walls and groynes, which Britons have been putting in place since Victorian times, and stronger storms pounding the coastline, possibly due to global warming. This, they add, suggests that the chalk front of the cliffs is in danger of disappearing altogether if something is not done to protect it.
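The inference from beryllium-10 to retreat rate can be illustrated with a deliberately simplified toy model (this is not the authors' Monte Carlo simulation): if the cliff retreats at a constant rate, a platform point farther from the cliff has been exposed to cosmic rays for longer, so its 10Be concentration constrains the rate. All numbers below, including the production rate, are hypothetical.

```python
# Toy model: constant retreat rate r means a platform point x metres
# seaward of the cliff has been exposed for t = x / r years, accumulating
# 10Be at a constant production rate P. Decay and platform lowering,
# which the real model handles, are ignored here.
P = 4.0  # atoms/g/yr, hypothetical surface production rate

def predicted_conc(x_m, rate_cm_per_yr):
    exposure_yr = (x_m * 100.0) / rate_cm_per_yr  # metres -> centimetres
    return P * exposure_yr

# Hypothetical "measured" concentrations generated with a true rate of 4 cm/yr
true_rate = 4.0
samples = [(x, predicted_conc(x, true_rate)) for x in (50, 100, 150, 200)]

# Grid search over candidate retreat rates for the best least-squares fit
candidates = [r / 10 for r in range(10, 101)]  # 1.0 .. 10.0 cm/yr
best = min(candidates,
           key=lambda r: sum((predicted_conc(x, r) - c) ** 2
                             for x, c in samples))
print(f"best-fit retreat rate: {best} cm/yr")  # recovers 4.0 cm/yr
```

The published analysis is far richer: it models beach cover, platform erosion, and measurement error, and searches over retreat histories rather than a single constant rate, but the underlying logic of matching predicted to measured 10Be is the same.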
Citation: England’s white cliffs found to be eroding ten times faster over the past 150 years (Update) (2016, November 8) retrieved 18 August 2019 from https://phys.org/news/2016-11-england-white-cliffs-dover-eroding.html © 2016 Phys.org Journal information: Proceedings of the National Academy of Sciences
Psychotic disorders such as schizophrenia can be highly disabling. An episode of psychosis involves experiences that aren’t based in reality. These can include hallucinations and delusions, such as feeling that people are trying to harm you. If researchers could identify when people at risk are verging on psychosis, promising methods to delay or stop the process could be tested. Studies suggest that language patterns may help predict whether someone is likely to experience psychosis. Drs. Neguine Rezaii, Elaine Walker, and Phillip Wolff of Emory University tested whether machine learning could help identify such patterns. They used sophisticated computer programs to analyze patterns of speech from 40 people enrolled in a long-term study of youth who are at risk of developing psychosis. The participants were enrolled because of unusual patterns of thought, perception, and communication. Read the whole story: National Institutes of Health (NIH)
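As a loose illustration of how speech features can feed a classifier (this is not the Emory team's pipeline; the transcripts, cue words, and labels below are all invented), one can turn each transcript into a small vector of word frequencies and assign new samples to the nearest class centroid:

```python
# Hedged sketch: bag-of-words features plus a nearest-centroid classifier.
# Transcripts, cue-word list, and labels are invented for illustration.
from collections import Counter
from math import sqrt

def features(text, vocab):
    """Relative frequency of each vocabulary word in the text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

vocab = ["voices", "sounds", "hear", "they", "whisper"]  # hypothetical cues

train = [
    ("i keep hearing voices and sounds they whisper to me", 1),
    ("the voices hear me they whisper sounds", 1),
    ("i went to class and studied for my exam", 0),
    ("my weekend was quiet i read a book", 0),
]

def centroid(label):
    """Mean feature vector over the training texts with this label."""
    vecs = [features(t, vocab) for t, y in train if y == label]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

c0, c1 = centroid(0), centroid(1)

def predict(text):
    """Return the label of the nearer class centroid."""
    v = features(text, vocab)
    d0 = sqrt(sum((a - b) ** 2 for a, b in zip(v, c0)))
    d1 = sqrt(sum((a - b) ** 2 for a, b in zip(v, c1)))
    return 1 if d1 < d0 else 0

print(predict("sometimes the voices whisper sounds at night"))  # prints 1
```

The actual study derived much subtler measures from transcripts, such as semantic density, rather than relying on a hand-picked word list; the sketch only shows the general feature-then-classify shape of such work.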