Everyone knows yawning is the pinkeye of social cues: powerfully contagious and not that attractive. Yet scientists aren't sure what the point of it is. Is yawning a form of communication that evolved to send some message to our companions? Or is the basis of yawning physiological, and its social contagiousness unrelated? A new paper suggests that yawning--even when triggered by seeing another person yawn--is meant to cool down overheated brains.
We're not the only species that feels compelled to yawn when we see others doing it. Other primates, and possibly dogs, have been observed catching a case of the yawns. But Princeton researcher Andrew Gallup thinks the root cause of yawning is in the body, not the mind. After all, we yawn when we're alone, not just when we're with other people.
Previously, Gallup worked on a study that involved sticking tiny thermometers into the brains of rats and waiting for them to yawn. The researchers observed that yawning and stretching came after a rapid temperature rise in the frontal cortex. After the yawn and the stretch, rats' brain temperatures dropped back to normal. The authors speculated that yawning cools the blood off (by taking in a large amount of air from outside the body) and increases blood flow, thereby bringing cooler blood to the brain.
If yawning's function is to cool the brain, Gallup reasoned, then people should yawn less often when they're in a hot environment. If the air outside you is the same temperature as your body, it won't make you less hot.
To test that theory, researchers went out into the field--namely, the sidewalks of Tucson, Arizona--in both the winter and the summer. They recruited subjects walking down the street (80 people in each season) and asked them to look at pictures of people yawning. Then the subjects answered questions about whether they yawned while looking at the pictures, how much sleep they'd gotten the night before, and how long they'd been outside.
The researchers found that the main variable affecting whether people yawned was the season. It's worth noting that "winter" in Tucson was a balmy 22 degrees Celsius (71 degrees Fahrenheit), while summer was right around body temperature. In the summer, 24% of subjects reported yawning while they looked at the pictures. In the winter, that number went up to 45%.
Additionally, the longer people had been outside in the summer heat, the less likely they were to yawn. But in the winter, the opposite was true: People were more likely to yawn after spending more time outside. Gallup speculates that because the testing took place in direct sunlight, subjects' bodies were heating up, even though the air around them remained cooler. So a yawn became more refreshing to the brain the longer subjects stood outside in the winter, but only got less refreshing as they sweltered in the summer.
The study used contagious yawning rather than spontaneous yawning, presumably because it's easier to hand subjects pictures of yawning people than to aggressively bore them. Gallup notes that contagious and spontaneous yawning are physically identical ("a stretching of the jaw and a deep inhalation of air," if you were wondering), so one can stand in for the other. Still, it would be informative to study people in a more controlled setting--in a lab rather than on the street, and preferably not aware that they're part of a yawning study.
A lab experiment would also allow researchers to directly observe whether their subjects yawned, rather than just asking them. In the field, researchers walked away while subjects were looking at the pictures, since people who know they're being watched are less likely to yawn. But self-reported results might not be accurate. The paper points out that "four participants in the winter condition did not report yawning during the experiment but yawned while handing in the survey to the experimenter."
Still, it seems there's a real connection between brain temperature and yawning. It will take more research (and more helplessly yawning subjects) to elucidate exactly what the connection is. Even if brain temperatures always rise right before a yawn and fall afterward, cooling the brain might not be the point of the yawn--another factor could be causing the impulse to yawn, and the temperature changes could be a side effect. Studying subjects in a truly cold environment, and showing that they are once again less likely to yawn (because outside air would cool their brains too much), would provide another piece of evidence that temperature triggers the yawn in the first place.
None of this tells us why yawning is so catching, though. Personally, I think I yawned at least a thousand times while reading and writing about this paper. Maybe I should have taken some advice from an older study by Andrew Gallup, which found that you can inhibit yawning by breathing through your nose or putting something chilly on your forehead.
Photo: Wikipedia/National Media Museum
Gallup, A.C., & Eldakar, O.T. (2011). Contagious yawning and seasonal climate variation. Frontiers in Evolutionary Neuroscience.
Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember. Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.
But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA, there is good evidence that more people have started taking antidepressants over the past 15 years.
In the UK, however, it's a bit trickier. Antidepressant prescriptions have certainly risen. However, a large 2009 study revealed that, between 1993 and 2005, there was no significant rise in the number of people starting antidepressants for depression. Rather, the rise in prescriptions was caused by patients getting more prescriptions each: the same number of users were using more antidepressants.
Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different set of data. Pauline Lockhart and Bruce Guthrie looked at pharmacy records of drugs actually dispensed, not just prescribed, and their data cover only a specific region, Tayside in Scotland. The 2009 study was nationwide.
So what happened? The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient. However, unlike the old study, this one found an increase in the number of people who used antidepressants each year: from 8% of the population in 1995 to 13% in 2007 - an extremely high figure, higher even than in the USA. In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold.
The increase was seen across men and women of all ages and social classes. There's no good evidence of an increase in mental illness in Britain over this period, by the way.
But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't among people with depression. The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - anxiety, insomnia, pain, premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.
Lockhart, P. and Guthrie, B. (2011). Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice.
Lots of press has been given in the past week to two late 7th to early 9th century burials found at the site of Kilteasheen in Ireland. According to the news reports and the documentary (which won't air in the U.S. until 2012, but which you can see on YouTube... for now), archaeologists excavating at the site from 2005-2009 uncovered over 130 graves. Two of them - both males - were buried with stones in their mouths, and one of the men also had a large stone on top of his torso. Aside from a 2008 report of a 4,000-year-old burial, these two early 8th century Irish burials seem to be the oldest evidence of what may be the practice of preventing "revenants" (zombies, vampires, and other undead people) from returning to the land of the living.
8th century male burial from Kilteasheen, Ireland,
with large stone on and under torso
(screencap from documentary)
Both Dorothy King (PhDiva) and Michelle Ziegler (Contagions) have already blogged about this. Dorothy points out some of the other evidence for "vampire" burials in Europe, such as the 10th-11th century cemetery of Celakovice near Prague that held a dozen people who were buried oddly (as with rocks in their mouths) and the so-called Vampire of Venice, a 60-year-old woman from a 1576 plague cemetery in Italy, who was buried with a large rock in her mouth (Nuzzolese and Borrini 2010).
The tradition of weighting down or otherwise defiling corpses (as with nails through the temple and stakes through the heart) seems to be a long one in Europe, born out of a fear of the dead that was related to the rise of Christianity, the lack of understanding of germ theory, and the increase in epidemic diseases.
There were no vampires in ancient Rome, for example. The Romans actually had ongoing relationships with the dead, running pipes from the ground surface to the grave below in order to offer them food and drink, and celebrating them at least once a year in the Parentalia. The Judeo-Christian idea that the dead should go into the ground and stay there means that apparent deviations from this expectation - as when hair and nails seemed to keep growing after death - probably caused a lot of general freaking out. But the simple introduction of monotheism may also have caused cultural stress, particularly in 7th century England, when kings were converting to Christianity and people were no longer sure what to believe.
Michelle points out that Ireland suffered through two major epidemics of bubonic plague, in 664 and 683, followed by a massive famine in 700. Based on the C14 dates reported in the documentary, it's possible these two burials date as early as the late 7th century. Rocks in or on the body of the deceased may have been meant to pin the person into the grave to prevent that person from rising or coming back, or may have been placed there because the mouth was where the soul escaped from. But rocks may also have been important in the mitigation of disease. Many of the archaeological examples of skeletons with mouth-rocks are assumed to have come from plague cemeteries. Some of the symptoms of bubonic plague are delirium, heavy breathing, and continuous blood-vomiting. People knew that plague could spread but didn't understand how, so blocking a person's mouth may have been an attempt to prevent the spread of the disease. The sight of a terminally ill person coughing up blood could even have been the catalyst for the invention of vampires, as a cultural explanation for disease before the advent of germ theory.
The Kilteasheen burials are likely too late to be plague-related, but even a small urban center could have had endemic tuberculosis, which causes some similar symptoms, like bloody sputum. I don't think a disease-based explanation can be completely ruled out for these burials.
8th century burial of a male, Kilteasheen, Ireland,
with stone in the mouth
(credit: Chris Read, found at MSNBC.com)
In sum, we can't be certain of the meaning of these Irish burials, but the long tradition of incapacitating the dead to prevent them from becoming revenants coupled with historical records of disease epidemics suggests the people who buried these men likely had a good reason for wanting them to stay dead.
Excavators at Kilteasheen estimate that there are around 3,000 burials at the site, so I suspect we'll be hearing more about this cemetery in the years to come. It will be interesting in particular to see if other burials in the cemetery were given the same mouth-rock treatment and whether the practice dates only to the 8th century or continues to later periods of the cemetery's use.
Watch the documentary, Mysteries of the Vampire Skeletons, on YouTube:
Further Reading and References:
McLeod, J. 2010. Vampires, a Bite-Sized History. Pier 9. [Google Books]
Nuzzolese E, & Borrini M. 2010. Forensic approach to an archaeological casework of "vampire" skeletal remains in Venice: odontological and anthropological prospectus. Journal of Forensic Sciences, 55 (6), 1634-7. PMID: 20707834.
Rickels, L. 1999. The Vampire Lectures. University of Minnesota Press. [Google Books]
Tsaliki, A. 2001. Vampires beyond legend - a bioarchaeological approach. In Proceedings of the XIII European Meeting of the Paleopathology Association, ed. M. La Verghetta and L. Capasso, pp. 295-300. [Read here]
Tsaliki, A. 2008.
A week and a half ago, Kibii and colleagues (2011) published reconstructions and re-analyses of two hips belonging to the 1.98 million-year-old Australopithecus sediba. As with many fossil discoveries, these additions to the fossil record raise more questions than they answer. Unless the question was, "did A. sediba have a pelvis?" It did. Here's a good summary from the paper itself:
Thus, Au. sediba is australopith-like in having a long superior pubic ramus and an anteriorly positioned and indistinctly developed iliac pillar...[and] Homo-like in having vertically oriented and sigmoid shaped iliac blades, more robust ilia, and a narrow tuberoacetabular sulcus...and the pubic body is upwardly rotated as in Homo. (p. 1410, emphases mine)
So far as I can tell, the main way the hips are 'advanced' toward a more human-like condition is that the iliac blades are more upright and sweep forward more than in earlier known hominid hips. See figure 2 from the paper (more sweet pics of the fossils are available here). NB that in both A. sediba hips much of the upper portions of the iliac blades are missing (reconstructed in white; this region is missing in lots of fossils), so it's possible they were more flaring, like the australopith in the center photo.
The authors' bottom-line, take-home point is that the A. sediba pelvis has features traditionally associated with large-brained Homo - but belonged to a small-brained species (based solely on the ~430 cc MH1 endocast). They argue that this means that many of these unique pelvic features did not evolve in the context of birthing large-brained babies, as has often been thought. They state that these features are thus "most parsimoniously attributed to altered biomechanical demands on the pelvis in locomotion," and suggest that this hypothetical locomotion was mostly bipedalism but with a good degree of climbing. Maybe, maybe not. This interpretation is consistent with the analysis of the A. sediba foot/ankle (Zipfel et al. 2011).
The weird mix of ancient (australopith-like) and newer (Homo-like) pelvic features in A. sediba really raises the question of how australopithecines moved around. More intriguing is that the A. sediba pelvis has different Homo-like features than the ~1 million-year-old Busidima pelvis (Simpson et al. 2008), which has been attributed to Homo erectus (largely in aspects of the iliac blades). This raises the question of whether A. sediba is really pertinent to the origins of the genus Homo, and whether the Busidima pelvis belongs to Homo erectus or a late-surviving robust australopith (e.g. boisei; Ruff 2010).
Also interesting is that the subpubic angle (the upside-down "V" created by the pubic bones just above the red labels in the figure) is pretty low in MH2. This is curious because modern human males and females differ in how large this angle is - females tend to have a large angle, which contributes to an enlarged birth canal, whereas males have a low angle like MH2's. But MH2 is considered female based on skeletal and dental size. This raises the additional questions of whether human-like sexual dimorphism had not yet evolved in hominids prior to 1.9 million years ago, and whether the sex of MH2 was accurately assessed.
Finally, though the authors did a great job comparing this pelvis with those from other hominids, I think a major, more comprehensive comparative review of hominid pelves is in order. How does the older A. afarensis hip from Woranso (Haile-Selassie et al. 2010) inform australopithecine pelvic evolution? What about the possibly-contemporary-maybe-later hip from the nearby site of Drimolen (Gommery et al. 2002)? Given the subadult status of the MH1 individual, it would be interesting to compare with the WT 15000 Homo erectus fossils, or the A. africanus subadults from Makapansgat, to examine the evolution of pelvic growth.
Lots of interesting questions arise from these fascinating new fossils. "The more you know," right?
References:
Gommery, D. (2002) Description d'un bassin fragmentaire de Paranthropus robustus du site Plio-Pléistocène de Drimolen (Afrique du Sud) [A fragmentary pelvis of Paranthropus robustus from the Plio-Pleistocene site of Drimolen (Republic of South Africa)]. Geobios, 35(2), 265-281. DOI: 10.1016/S0016-6995(02)00022-0
Haile-Selassie Y, Latimer BM, Alene M, Deino AL, Gibert L, Melillo SM, Saylor BZ, Scott GR, & Lovejoy CO. (2010) An early Australopithecus afarensis postcranium from Woranso-Mille, Ethiopia. Proceedings of the National Academy of Sciences of the United States of America, 107(27), 12121-6. PMID: 20566837
Kibii, J., Churchill, S., Schmid, P., Carlson, K., Reed, N., de Ruiter, D., & Berger, L. (2011) A Partial Pelvis of Australopithecus sediba. Science, 333(6048), 1407-1411. DOI: 10.1126/science.1202521
Ruff, C. (2010) Body size and body shape in early hominins – implications of the Gona Pelvis. Journal of Human Evolution, 58(2), 166-178. DOI: 10.1016/j.jhevol.2009.10.003
Simpson, S., Quade, J., Levin, N., Butler, R., Dupont-Nivet, G., Everett, M., & Semaw, S. (2008) A Female Homo erectus Pelvis from Gona, Ethiopia. Science, 322(5904), 1089-1092. DOI: 10.1126/science.1163592
Why does nature allow us to lie to ourselves? Humans are consistently and bafflingly overconfident. We consider ourselves more skilled, more in control, and less vulnerable to danger than we really are. You might expect evolution to have weeded out the brawl-starters and the X-Gamers from the gene pool and left our species with a firmer grasp of our own abilities. Yet our arrogance persists.
In a new paper published in Nature, two political scientists say they've figured out the reason. There's no mystery, they say; it's simple math.
The researchers created an evolutionary model in which individuals compete for resources. Every individual has an inherent capability, or strength, that simply represents how likely he or she is to win in a conflict. If an individual seizes a resource, the individual gains fitness. If two individuals try to claim the same resource, they will both pay a cost for fighting, but the stronger individual will win and get the resource.
Of course, if everyone knew exactly how likely they were to win in a fight, there would be no point in fighting. The weaker individual would always hand over the lunch money or drop out of the race, and everyone would go peacefully on their way. But in the model, as in life, there is uncertainty. Individuals decide whether a resource is worth fighting for based on their perception of their opponents' strength, as well as their perception of their own strength. Both are subject to error. Some individuals in the model are consistently overconfident, overestimating their capability, while others are underconfident, and a few are actually correct.
Using their model, the researchers ran many thousands of computer simulations that showed populations evolving over time. They found that their numerically motivated populations, beginning with individuals of various confidence levels, eventually reached a balance. What that balance was, though, depended on their circumstances.
When the ratio of benefits to costs was high--that is, when resources were very valuable and conflict was not too costly--the entire population became overconfident. As long as there was any degree of uncertainty in how individuals perceived each other's strength, it was beneficial for everyone to overvalue themselves.
At intermediate cost/benefit ratios, where costs and benefits were more evenly matched, the computerized populations reached a stable mix of overconfident and underconfident individuals. Neither strategy won out; both types of people persisted. In general, the more uncertainty was built into the model, the more extreme individuals' overconfidence or underconfidence became.
When the cost of conflict was high compared with the benefit of gaining a resource, the entire population became underconfident. Without having much to gain from conflict, individuals opted to avoid it.
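The dynamics the model produces can be sketched in a few dozen lines. The toy simulation below is not the authors' actual implementation--the pairing scheme, parameter values, and inheritance rule are all illustrative assumptions--but it follows the same logic: an individual claims a resource when its (biased) self-assessment beats its noisy estimate of an opponent, fitness accrues accordingly, and the confidence bias is heritable.

```python
import random

def evolve_confidence(benefit, cost, noise=1.0, pop=200, gens=300, seed=0):
    """Toy version of a resource-conflict model in the spirit of
    Johnson & Fowler (2011). Each individual has a true capability
    (drawn fresh each generation) and a heritable confidence bias
    added to its self-assessment. Returns the population's mean bias
    after selection. All parameter values are illustrative."""
    rng = random.Random(seed)
    biases = [rng.uniform(-2, 2) for _ in range(pop)]
    for _ in range(gens):
        caps = [rng.gauss(0, 1) for _ in range(pop)]
        fitness = [1.0] * pop  # baseline so fitness stays positive
        order = list(range(pop))
        rng.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):
            # Each side claims if its biased self-estimate beats its
            # noisy estimate of the opponent's strength.
            claim_i = caps[i] + biases[i] > caps[j] + rng.gauss(0, noise)
            claim_j = caps[j] + biases[j] > caps[i] + rng.gauss(0, noise)
            if claim_i and claim_j:
                # Both claim: fight. Both pay the cost; the truly
                # stronger individual wins the resource.
                fitness[i] -= cost
                fitness[j] -= cost
                winner = i if caps[i] > caps[j] else j
                fitness[winner] += benefit
            elif claim_i:
                fitness[i] += benefit  # unopposed claim
            elif claim_j:
                fitness[j] += benefit
        # Reproduce proportionally to fitness, with small mutation
        # on the inherited confidence bias.
        weights = [max(f, 0.01) for f in fitness]
        parents = rng.choices(range(pop), weights=weights, k=pop)
        biases = [biases[p] + rng.gauss(0, 0.05) for p in parents]
    return sum(biases) / pop
```

Running this with a high benefit-to-cost ratio (e.g. `evolve_confidence(benefit=5, cost=1)`) drives the mean bias positive--overconfidence evolves--while an expensive-conflict setting drives it lower, matching the paper's qualitative result.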
The authors speculate that humans' tendency toward overconfidence may have evolved because of a high benefit-to-cost ratio in our past. If the resources available to us were valuable enough, and the price of conflict was low enough, our ancestors would have been predicted to evolve a bias toward overconfidence.
Additionally, our level of confidence doesn't need to wait for evolution to change; we can learn from each other and spread attitudes rapidly through a region or culture. The researchers call out bankers and traders, sports teams, armies, and entire nations for learned overconfidence.
Though our species' arrogance may have been evolutionarily helpful, the authors say, the stakes are higher today. We're not throwing stones and spears at each other; we have large-scale conflicts and large-scale weapons. In areas where we feel especially uncertain, we may be even more prone to grandiosity, like the overconfident individuals in the model who gained more confidence when they had less information to go on. When it comes to negotiating with foreign leaders, anticipating natural disasters, or taking a stand on climate change, brains that have evolved for self-confidence could get us in over our heads.
Johnson, D., & Fowler, J. (2011). The evolution of overconfidence. Nature, 477(7364), 317-320. DOI: 10.1038/nature10384
Just a short time ago, I had a paper at the European Association of Archaeologists meeting in Oslo. I unfortunately couldn't attend the conference, so Rob Tykot presented it. The paper was fun to write, though, and lays out the bioarchaeological evidence (albeit sparse at the moment) for women who immigrated to Imperial Rome. Following is the complete presentation. Comments are always welcome!
Foreign women in Imperial Rome: the isotopic evidence
K. Killgrove, Vanderbilt University
R. Tykot, University of South Florida
J. Montgomery, Durham University
A significant amount of classical scholarship over the years has been dedicated to understanding the demographic make-up of the population of Imperial Rome. Without a proper census, however, classical demographers lack several key pieces of information necessary for reconstructing the number of citizens, slaves, and foreigners at Rome (Noy 2000:16).
Tombstones provide the most solid evidence of immigrants who died in Rome. Here we have an example of the inscription on a tombstone of a soldier, noting he was from Noricum (Corpus Inscriptionum Latinarum vi 3225, translated in Noy 2011). For the most part, though, the epigraphic habit was largely the province of the wealthy, educated elite, leaving us with little information about the lower classes. Demographic estimates of foreigners at Rome range from 5% to 35%, suggesting that as many as one out of every three people in Rome arrived there from elsewhere. Below is the inscription from a large tomb that a group of freed slaves built in Rome (Année Epigraphique 1972, 14, translated in Noy 2011). They all appear to have belonged to the same household (as they share a name and the designation C.L., “freed slave of Gaius”) yet came to Rome from various places: Greece, Asia Minor, and north Africa. The practice of commemorating one’s homeland is rare, though, and it is unclear how many slaves and free immigrants came from Italy or from further afield in the Empire (Morley 1996, p. 39). Finally, the epigraphical record of immigrants to Rome is gender-biased, as the vast majority of inscriptions that mention immigrants are those of males (Noy 2000, p. 60). Part of this bias is attributable to the commemoration of soldiers, but males outnumber females three to one even in civilian immigrant inscriptions (Noy 2000, p. 61, Table 2).
In order to learn more about female immigrants to the Imperial capital, we undertook a bioarchaeological study of human skeletal remains from Rome. Through a combination of isotope analyses, palaeopathology, and burial style, we identified previously unknown female immigrants in the archaeological record of Rome and were able to reconstruct key aspects of their life histories.
Our skeletal material comes from the cemetery of Casal Bertone, which was located just 2 km from the center of Rome and was in use from the 2nd-3rd centuries AD (Musco et al. 2008). The majority of the graves were located within a simple necropolis, which included unmarked pit burials as well as burials a cappuccina. An above-ground mausoleum that slightly postdates the necropolis was found as well, and it may have held people of higher social status. Out of the 138 burials, we chose a stratified sample of 30 adults to subject to strontium, oxygen, carbon, and nitrogen isotope analyses – 19 males and 11 females.
This graph shows the strontium and oxygen isotope results for the first molars of adults from Casal Bertone. The approximate isotope range of Rome is represented by a box comprising the upper and lower bounds of expected Sr and O values. No other Sr studies in the Italian peninsula have been done on human skeletal remains, so the local range was estimated conservatively using geochemical modeling that took into account the fact that Rome was supplied by aqueducts that drew water from sources with distinctly different geology than is found in the volcanic Alban Hills (Killgrove 2010a, 2010b). By combining Sr with an O range from previously published human skeletal data (Prowse et al. 2007), however, it is easier to see nonlocals. Here, females T82A and T39 are fairly clearly immigrants to Rome, as their O values fall outside the local range (one below it, one above it) and both have rather low Sr. T42, on the other hand, is a borderline case, since measurement error could put her within the local O range for Rome. Clearly, though, isotope analysis of human skeletal remains is a viable way to identify female immigrants in the bioarchaeological record of Rome, particularly those who were not commemorated as such on tombstones.
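The local-versus-nonlocal decision described here is essentially a bounding-box test in Sr-O space, which can be sketched in a few lines. The bounds and sample values below are invented placeholders for illustration only; they are not the study's actual ranges or measurements.

```python
# Hypothetical local range for Rome: 87Sr/86Sr and delta-18O (per mil)
# bounds. These numbers are illustrative placeholders, not the
# published values.
LOCAL_SR = (0.7086, 0.7103)
LOCAL_O = (24.0, 27.5)

def is_local(sr, o18, sr_range=LOCAL_SR, o_range=LOCAL_O):
    """An individual is consistent with a local origin only if both
    isotope values fall inside the estimated local box."""
    return (sr_range[0] <= sr <= sr_range[1]
            and o_range[0] <= o18 <= o_range[1])

# Invented example measurements (name: (Sr, O)), echoing the kinds of
# cases discussed in the text.
samples = {
    "T39":  (0.7082, 28.1),  # low Sr, high O -> probable immigrant
    "T82A": (0.7081, 23.2),  # low Sr, low O -> probable immigrant
    "T42":  (0.7088, 27.6),  # Sr local, O just outside -> borderline
}

nonlocals = [name for name, (sr, o) in samples.items()
             if not is_local(sr, o)]
```

In practice the real classification also has to propagate measurement error (which is what makes T42 borderline), so a hard-edged box like this is only a first pass.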
Three of the 11 females we tested (27%) were probably immigrants to Rome. Out of the 19 males studied, 6 were immigrants (32%). More interesting, though, is the sex ratio in the immigrant population. Whereas the sex ratio among tombstones that commemorate immigrants at Rome is 78% male versus 22% female, the ratio of immigrants identified through skeletal evidence is 67% male versus 33% female (6 of 9 versus 3 of 9). This is, granted, a small sample, but it suggests that the bias towards male immigrants may in the future be rectified by studying skeletal data.
Epigraphy does occasionally tell us a little about the lives of female immigrants. The tombstone of freedwoman Valeria Lycisca, for example, specifically notes that she came to Rome at age 12 (Corpus Inscriptionum Latinarum vi 28228). Isotope analysis of the skeleton can give us similar information, in that it can help narrow the window of time in which a person immigrated. Two of the Casal Bertone female immigrants – T39 and T82A – also had third molars that could be subjected to Sr isotope analysis. Both produced M3 Sr values that were very close to their M1 values. The difference between T39’s first and third molars is .00016, and the difference between T82A’s first and third molars is .00017. Their M3 values still place them towards the low end of the calculated Sr range of Rome. People in the low end were probably immigrants from an area with younger geology (such as the southern, volcanic areas of Italy); however, it is possible people in this range were locals who consumed a significant amount of Roman aqueduct water (roughly 90% of all water consumed) throughout childhood. Oxygen isotope analysis on the M3s has not yet been done. Based on the small differences between these women’s M1s and M3s, it is likely that both immigrated to Rome after the development of their M3s was complete. Therefore, T39, a woman of about 15-17 years at the time of her death, likely died shortly after arrival in Rome.
Killgrove, K. (2010). Identifying immigrants to Imperial Rome using strontium isotope analysis. Journal of Roman Archaeology.
Prowse, T.L., Schwarcz, H.P., Garnsey, P., Knyf, M., Macchiarelli, R., & Bondioli, L. (2007). Isotopic evidence for age-related immigration to imperial Rome. American Journal of Physical Anthropology, 132(4), 510-519. PMID: 17205550
A group of researchers from the IPHES (Institut Català de Paleoecologia Humana i Evolució Social) reports on the discovery of a handheld wooden implement from Mousterian deposits at Abric Romaní, Spain. The tool was found in Level P, which dates to about 56,000 years BP, and its morphology suggests that it might have been a small spade/shovel or perhaps a poker, given its association with a hearth.
CARBONELL, E., & CASTRO-CUREL, Z. (1992) Palaeolithic wooden artefacts from the Abric Romani (Capellades, Barcelona, Spain). Journal of Archaeological Science, 19(6), 707-719. DOI: 10.1016/0305-4403(92)90040-A
Castro-Curel, Z., & Carbonell, E. (1995) Wood Pseudomorphs From Level I at Abric Romani, Barcelona, Spain. Journal of Field Archaeology, 22(3), 376-384. DOI: 10.1179/009346995791974206
Vallverdú, J., Vaquero, M., Cáceres, I., Allué, E., Rosell, J., Saladié, P., Chacón, G., Ollé, A., Canals, A., Sala, R.... (2010) Sleeping Activity Area within the Site Structure of Archaic Human Groups. Current Anthropology, 51(1), 137-145. DOI: 10.1086/649499
It is well known that the modern world religions which trace their origins to the Axial Age are centrally concerned with death. Some might call this concern an obsession. Of these world religions, only Hinduism does not have Axial roots. This is not to say that “Hinduism” (which is neither singular nor unified) was unaffected [...]... Read more »
Blackburn, Stuart H. (1985) Death and Deification: Folk Cults in Hinduism. History of Religions, 24(3), 255-274.
What's the secret to becoming a good father? What would William Cosby do?

I for one have no idea, BUT! a study published today in PNAS early edition finds an association between studly vs. paternal behavior and levels of everyone's favorite hormone, testosterone (T).

Using longitudinal data, researchers (Gettler et al., in press) found that, in general, a young guy with higher levels of circulating T is more likely than a guy with low T to become a father within a few years. MOREOVER! this erstwhile-high-T-and-now-father then experiences a relatively sharper decrease in T than would be expected simply because of aging. PLUS! fathers who interacted with their kids on a daily basis had lower T than fathers who didn't hang around their kids too often.

One neat thing about this study is that it uses longitudinal instead of cross-sectional data. A cross-sectional version of this study would've sampled a bunch of dudes (hopefully somewhat randomly) only once. This can be problematic because it's then hard to interpret the results in light of the many sources of variation between people. This study, on the other hand, sampled a tonne (n = 694) of guys on more than one occasion, so the researchers can tell how individuals' testosterone levels tend to change in paternal vs. free-spirited circumstances.

The last line of the paper is pretty intriguing: "[these results] add to the evidence that human males have an evolved neuroendocrine architecture shaped to facilitate their role as fathers and caregivers as a key component of reproductive success" (Gettler et al., in press: p. 5/6). This is especially interesting in light of the Ardipithecus ramidus-related evidence for the great antiquity of humans' paternal proclivity (Lovejoy 1981, Lovejoy et al. 2009). Just how and why testosterone responds to/mediates this fatherly 'reproductive strategy' is mysterious to me. And of course, linking this hormonal phenomenon with anything as old as Ardi is a challenge I'm certainly not up to.
Still neat, though. My personal circulating T levels are consistently through the roof. So in the event that I become a father, it will be interesting to see if the subsequent, astronomical hormone drop predicted by this study won't cause my entire body to collapse in on itself.

References

Gettler LT et al., in press. Longitudinal evidence that fatherhood decreases testosterone in human males. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1105403108

Lovejoy, C. (1981) The Origin of Man. Science, 211(4480), 341-350. DOI: 10.1126/science.211.4480.341

Lovejoy, CO. (2009) Reexamining human origins in light of Ardipithecus ramidus. Science, 326(5949), 740-8. PMID: 19810200

Photo credit: google (image) "Bill Cosby Fatherhood"... Read more »
Lovejoy CO. (2009) Reexamining human origins in light of Ardipithecus ramidus. Science (New York, N.Y.), 326(5949), 740-8. PMID: 19810200
The actor David Carradine may have led a troubled life, but he experienced no such trouble as Kwai Chang Caine, a Buddhist monk on the move in the old American West. From 1972 to 1975, the Kung Fu series was must-watch television for kids my age, even if we had no idea that Caine was a [...]... Read more »
Keightley, David N. (1978) The Religious Commitment: Shang Theology and the Genesis of Chinese Political Culture. History of Religions, 17(3/4), 211-225.
Inspired by my recent visit to the Gila Cliff Dwellings, I’ve been reading about the Mimbres Mogollon culture of southwestern New Mexico. As I noted earlier, the cliff dwellings themselves aren’t actually Mimbres, instead belonging to the Tularosa Mogollon culture more common to the north, and they postdate the “Classic” Mimbres period (ca. AD 1000 [...]... Read more »
Fewkes, J. (1916) Animal Figures on Prehistoric Pottery from Mimbres Valley, New Mexico. American Anthropologist, 18(4), 535-545. DOI: 10.1525/aa.1916.18.4.02a00080
Gilman, P., Canouts, V., & Bishop, R. (1994) The Production and Distribution of Classic Mimbres Black-on-White Pottery. American Antiquity, 59(4), 695. DOI: 10.2307/282343
Hegmon, M. (2002) Recent Issues in the Archaeology of the Mimbres Region of the North American Southwest. Journal of Archaeological Research, 10(4), 307-357. DOI: 10.1023/A:1020525926010
Hegmon, M., Nelson, M., & Ruth, S. (1998) Abandonment and Reorganization in the Mimbres Region of the American Southwest. American Anthropologist, 100(1), 148-162. DOI: 10.1525/aa.19184.108.40.206
Nelson, M., & Hegmon, M. (2001) Abandonment Is Not as It Seems: An Approach to the Relationship between Site and Regional Abandonment. American Antiquity, 66(2), 213. DOI: 10.2307/2694606
Clinging to rock piles high in the Pyrenees, the plant Borderea pyrenaica has a modest lifestyle: It grows a new shoot every summer, flowers and fruits, then sheds its aboveground growth to survive the winter as a tuber. What's remarkable is how long this life can last. Individual plants have been known to live 300 years or more. Scientists headed up into the mountains to find out whether these plants, in all their years of living, ever actually get old.
"Senescence" is what we usually call aging--getting weaker and closer to death as we get on in years. To us humans, it seems like a fact of life. But some other animals are thought to be "negligibly senescent." Certain fish, turtles, and other sea creatures seem to be perfectly healthy and fertile at 100 or 200 years old; they're no more likely to die at that age than at any other. Some plants, and especially some trees, may have nearly unlimited lifespans.
Scientists--not to mention cosmetics companies--would love to know exactly why humans are stuck with senescence while organisms like the bristlecone pine just get more fabulous with age. Unfortunately, it's difficult for those of us with limited lifespans to study those without. To squeeze some secrets out of Borderea pyrenaica, scientists from Spain and Sweden studied two populations of the plant over the course of five years.
Because Borderea pyrenaica is left with a scar on its tuber when each year's growth dies back, researchers could count the scars to calculate an individual tuber's age. Each year, they counted and measured the leaves on each plant. They also counted the plants' flowers, fruits and seeds. Since the plants come in male and female versions, the researchers would be able to compare aging in both--would the metabolic effort of making fruits and seeds take a toll on female plants' lifespans? At the end of the study, the researchers dug up all the tubers, dried them and weighed them. (Aesop says: Don't be jealous of negligibly senescent organisms. If old age doesn't kill you, science will!)
The researchers were able to calculate the age of almost 750 plants that were up to 260 years old. They found that tubers grew in size each year, reaching their maximum size after 50 or 100 years (depending on the population). As the tubers grew, the shoots that they put out each year got bigger too. After they reached about 60 years old, the plants didn't seem any more likely to die with the passing years. If anything, survivorship seemed to increase in old age. There was no difference between male and female plants.
As they got bigger, both types of plants put out more flowers, giving them greater potential to contribute to the next generation. This meant that the plants' "reproductive value"--an individual's expected fertility from its current age onward--actually increased over their entire lifespan.
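That notion of reproductive value can be made concrete. A minimal sketch with invented survival and fecundity schedules (toy numbers, not the paper's data): reproductive value at a given age sums the expected offspring at each later age, weighted by the probability of surviving that long. When survival stays high and flowering increases with size, as in these plants, the sum rises with age.

```python
def reproductive_value(age, survival, fecundity):
    """Expected future offspring from `age` onward.

    survival[i]  = probability of surviving from year i to i+1
    fecundity[i] = expected offspring produced at age i
    """
    rv, p_alive = 0.0, 1.0
    for i in range(age, len(fecundity)):
        rv += p_alive * fecundity[i]
        if i < len(survival):
            p_alive *= survival[i]
    return rv

# Toy schedules: survival stays high while fecundity rises with age,
# so reproductive value *increases* over the lifespan shown.
survival  = [0.95, 0.96, 0.97, 0.98, 0.98]
fecundity = [0.0, 1.0, 2.0, 4.0, 8.0, 16.0]
print([round(reproductive_value(a, survival, fecundity), 2)
       for a in range(3)])   # -> [26.84, 28.25, 28.39]
```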
It seems unlikely that we'll one day tap into some biological secret that enables us to live forever. But further research into the plants and animals that don't deteriorate with age might help us solve the mysteries of our own mortality. We may not ever become ageless, but we could learn to age with some of the grace of a lobster, or a mountain tuber.
... Read more »
Garcia, M., Dahlgren, J., & Ehrlén, J. (2011) No evidence of senescence in a 300-year-old mountain herb. Journal of Ecology. DOI: 10.1111/j.1365-2745.2011.01871.x
Between teaching, researching, and applying for jobs, I have not had as much time as I'd like to blog. That partly explains the delay in this installment of the Roman bioarchaeology carnival, but the other reason for the delay is that, well, not much has happened in the past two weeks that I'd consider particularly Roman bioarchaeological. I have, therefore, just a few offerings for this carnival...
TB or Not TB
Map of Poundbury Camp. Fig. 1, Lewis 2011.
In the first ever issue of the International Journal of Paleopathology (which is dated March but didn't show up online until fairly recently), Mary Lewis discusses the evidence of tuberculosis in the skeletons of children from the Romano-British camp at Poundbury (Dorset, England). Originally an Iron Age hillfort, in the Roman period (3rd-4th c AD) Poundbury Camp was the main burial site for people living in Durnovaria (modern Dorchester).* Little is known about the environment people lived in at Durnovaria: the conditions in the small urban settlement, the kinds of food consumed, or the prevalence of disease. Previous work by Lewis established that the children buried in this settlement were subjected to poor living conditions and malnutrition, as seen in the high frequencies of cribra orbitalia, porotic hyperostosis, rickets, and scurvy.
New bone formation on the visceral surface of the ribs. Fig. 5, Lewis 2011.
For this study, Lewis investigated a sample of 165 subadults (individuals under the age of 17, the approximate age of biological maturity) for evidence of tuberculosis. While tuberculosis is fairly well known in the palaeopathological literature, only two cases of TB in children have been published from ancient Britain (with an additional 14 possible cases). Ten subadults were found with probable tuberculous lesions, or about 6% of the population studied, although three of these could instead have had brucellosis, which, like TB, is an infectious disease linked to animal domestication.
The presence of TB in children leads Lewis to conclude that the incidence in the adult population was probably higher, as children tend to get TB from adults and also tend to grow up to become adults with TB (if they survive, of course). Whether the percentage of subadults with TB is 6% or 4%, this frequency is much higher than expected for Romano-British Poundbury. The presence of TB in children in this sample suggests that people were living close together, and perhaps close to their animals as well. Lewis concludes by suggesting that TB may well have been endemic in this population.
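For what it's worth, the two frequencies in play here can be reproduced directly: 10 probable cases out of 165 subadults is about 6%, and dropping the three possible brucellosis cases (7/165) gives about 4%. A quick sketch, with a Wilson score interval added to show how wide the uncertainty is at this sample size (the interval is my addition, not part of Lewis's analysis):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

n = 165
for k in (10, 7):   # all probable cases; excluding the 3 possible brucellosis cases
    lo, hi = wilson_ci(k, n)
    print(f"{k}/{n} = {k/n:.1%}  (95% CI {lo:.1%}-{hi:.1%})")
# -> 10/165 = 6.1%, 7/165 = 4.2%, each with a CI a few points wide
```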
Mmm, tasty human. Grouper likee.
Bardo Nat'l Museum, Tunis
If you're a regular reader, you know that one of my research areas is the diet of Imperial Romans. To that end, I've written quite often on this blog about the use and consumption of aquatic resources in the Roman world: Weaning and Freshwater Fish Consumption in Roman Britain and Bioarchaeology of Roman Seafood Consumption. Although not technically Roman bioarchaeology, a press release this week mentioned a Stanford researcher who looked to Roman art to study issues of marine conservation. Based on depictions of dusky groupers in hundreds of Etruscan, Greek, and Roman artworks, researchers have concluded that the species once grew much larger and lived in shallower waters than it does today. Of course, artistic depictions are not always true to life, but the preponderance of depictions of groupers as very large fish leads the researchers to conclude that today's 50- to 60-cm groupers are much smaller than their ancient counterparts. Further, Pliny and Ovid mention fishing for groupers from the shore, a practice that wouldn't work in modern times because groupers now range in much deeper waters. The grouper population seems to be shrinking, and the researchers want to restrict fishing for the species in order to restore the population and prevent extinction.
I find it quite interesting that ancient mosaics have proven useful to conservation biologists. In terms of diet, we need to think about what the aquatic species looked like in the past. If groupers were large, tasty, and easy to catch, Romans may have eaten their fair share. Assumptions about the kinds of aquatic resources consumed based on contemporary fish populations may therefore be wrong.
Roman Funerals in Gaul
Excavation at Épieds-en-Beauce
A brief bit of news notes the discovery of a cemetery dated to AD 30 at Épieds-en-Beauce, in Loiret (north-central France). Within a square enclosure, archaeologists found weapons, jewelry, and pottery, leading them to think the area was religious in nature. But they also found burned ceramics, remnants of funerary meals, nails, and human and animal bone, suggesting it was a cemetery or other funerary area. The abundance of material remains may indicate a high-status burial or burials. The remains are currently being analyzed in the laboratory, so there is no additional information yet.
This discovery could be interesting, but I suspect that lots of little Roman-era burial sites are uncovered in France and other parts of the Empire. Depending on the condition of the bones and teeth and the number of individuals recovered, though, the human remains could form a nice little dataset for understanding life in rural Gaul.
Well, hopefully in another two weeks' time, I'll have some more interesting Roman bioarchaeology news for you!
* See also news from the 1st Roman Bioarch carnival, on a child skeleton found at Durnovaria.
Reference... Read more »
Lewis, M.E. (2011) Tuberculosis in the non-adults from Romano-British Poundbury Camp, Dorset, England. International Journal of Paleopathology, 1(1), 12-23. DOI: 10.1016/j.ijpp.2011.02.002
I think I know why science does not understand the female orgasm. It is because science excels when it breaks free of context, history, human complexities and anthropology, but when a topic requires one to grasp context, history, human complexities and anthropology, then science, especially the hard sciences, can fall short. Also, the nature of the female orgasm is a comparative question, but human sexuality is highly (but not entirely) derived; it is difficult to make a sensible graph or table comparing aspects of sexuality across mammals that usefully includes humans. It is not as impossible as making such a graph or table with "language" (which is entirely unique to humans), but still, it is difficult.
There is another problem as well. Female orgasm is actually a lot like male orgasm, and probably serves the same evolutionary role with one small but important difference. But, that one small but important difference, the ejaculation of seminal fluid by males, blinds researchers to any other function of male orgasms. Seminal fluid is distracting. Male ejaculation and female ovulation are rough homologues, but entirely different in their physiology and timing. Were it the case that female ovulation could only happen together with orgasm ... well, the human world would be a very different place but at least science would not be fumbling around in search of an answer for this enigma.
The reason I bring any of this up is a paper, just published, that makes the claim that the "byproduct" theory of female orgasms is unsupported. So, I'd like to take a moment to explain the byproduct theory, to explain why this paper does not really address it, let alone refute it, and then we'll get back to the question of what female orgasms really are for. The byproduct theory will not survive this discussion.
The byproduct theory originates with the following observations: Read the rest of this post... | Read the comments on this post...... Read more »
Zietsch, B., & Santtila, P. (2011) Genetic analysis of orgasmic function in twins and siblings does not support the by-product theory of female orgasm. Animal Behaviour. DOI: 10.1016/j.anbehav.2011.08.002
Despite their impressive preservation, the Gila Cliff Dwellings have gotten surprisingly little attention in the archaeological literature. This is apparently because they were so thoroughly ransacked by pothunters early on that there wasn’t much left intact for archaeologists to study, and possibly also because the early establishment of Gila Cliff Dwellings National Monument in 1907 [...]... Read more »
Division of labor is a major part of understanding gender and class roles in historic populations. Without texts, archaeologists depend on material and human remains for the answers. The physical stress (or lack thereof) from daily activities can leave markers … Continue reading →... Read more »
Havelkova, P., Villotte, S., Veleminsky, P., Polacek, L., & Dobisikova, M. (2011) Enthesopathies and Activity Patterns in the Early Medieval Great Moravian Population: Evidence of Division of Labor. International Journal of Osteoarchaeology, 487-504.
Several years ago I read Daniel Dennett’s Breaking the Spell: Religion as a Natural Phenomenon (2006). It wasn’t easy. This is not because Dennett’s ideas and arguments are difficult (they aren’t). It is because I don’t care for Dennett’s style. While I can overlook stylistic deficiencies if the substance is solid, in this case I [...]... Read more »
Geertz, A. (2008) How Not to Do the Cognitive Science of Religion Today. Method & Theory in the Study of Religion, 20(1), 7-21. DOI: 10.1163/157006808X260232
‘Tis but thy name that is my enemy; Thou art thyself, though not a Montague. What’s Montague? it is nor hand, nor foot, Nor arm, nor face, nor any other part Belonging to a man. O, be some other name! What’s in a name? that which we call a rose By any other name would [...]
... Read more »
Goldin, C., & Shim, M. (2004) Making a Name: Women's Surnames at Marriage and Beyond. Journal of Economic Perspectives, 18(2), 143-160. DOI: 10.1257/0895330041371268
Noordewier, M., Horen, F., Ruys, K., & Stapel, D. (2010) What's in a Name? 361.708 Euros: The Effects of Marital Name Change. Basic and Applied Social Psychology, 32(1), 17-25. DOI: 10.1080/01973530903539812
From London to the Middle East, riots have shaken political stability. Are the answers to be found in human nature? Police cars were overturned and shops looted as the mob descended on the city’s central square. Rioters tore the police station’s outer door off its hinges and “used it as a battering ram” to break [...]
... Read more »
Marco Lagi, Karla Z. Bertrand, & Yaneer Bar-Yam. (2011) The Food Crises and Political Instability in North Africa and the Middle East. New England Complex Systems Institute. arXiv: 1108.2455v1
In his 1880 Hibbert Lecture on the history of early Christianity, Ernest Renan commented: “I sometimes permit myself to say that, if Christianity had not carried the day, Mithraicism would have become the religion of the world.” While it is doubtful that a Persian-influenced mystery cult that appealed primarily to Roman soldiers, officials, and aristocrats [...]... Read more »
Beck, R. (1998) The Mysteries of Mithras: A New Account of Their Genesis. The Journal of Roman Studies, 88, 115-128. DOI: 10.2307/300807