The present economic crisis has led to more suicides in Europe - but fewer deaths in road traffic accidents. So says a brief report in The Lancet. The authors show that suicide rates in people under the age of 65, which had been falling for several years in Europe, rose in 2008 and again in 2009, in line with unemployment figures. The overall effect was fairly small - 2009 was no worse than 2006 - but it still corresponds to a 5% annual increase in most countries. In Greece, Ireland, and Latvia the rise was about 15%.

That's sad but perhaps not very surprising. What's interesting, though, is that road traffic fatalities fell sharply. In Lithuania they dropped by nearly half, although they were very high to begin with, and in Spain and Ireland they fell by 25%. This presumably reflects the fact that people are simply driving less, and perhaps more slowly: we have less money to spend on fuel, and fewer jobs and errands to drive to.

The authors note that although fewer road deaths is generally a good thing, there's one downside - a shortage of donor organs for transplantation. Road accidents are a prime source of organs because they're one of the few circumstances in which young, healthy people die leaving most of the body intact.... Read more »
Stuckler D, Basu S, Suhrcke M, Coutts A, & McKee M. (2011) Effects of the 2008 recession on health: a first look at European data. Lancet, 378(9786), 124-5. PMID: 21742166
Earlier this month, archaeologists revealed a large mass grave containing the remains of children and llamas. The grave was found on the coast of Peru, near the ancient Chimú capital of Chan Chan. The 800-year-old grave contains the remains … Continue reading →... Read more »
Centurion, Curo, and Klaus. (2010) Bioarchaeology of human sacrifice: violence, identity and the evolution of ritual killing at Cerro Cerrillos, Peru. Antiquity.
It took me some time to decide what I should do with Australopithecus sediba on this blog. In the end I decided to concentrate on the aspects I at least know a little about, one of them being taxonomy.

I had to reconstruct a number of phylogenetic trees in the last few months, and I found some free online tools which enabled me to do this without using any fancy (and expensive) computer programs. The only disadvantage of these resources is that they were originally made for molecular data sets. This made my work a little more complicated, since I had to modify my morphological datasets so that these programs were able to work with them. I won't talk about the exact process right now; instead I want to show you some of the stuff I did with Australopithecus sediba.

First of all, let's have a look at a classic tree which illustrates the phylogenetic relationships within the genus Homo. I took the tree from Strait et al. (1997) for this particular example:

[Tree from Strait et al. (1997)]

There's nothing really special about this tree. Sure, you could discuss whether or not the shown phylogeny represents the true relationships of these fossils, but discussing this always tends to get boring, since you have to look at the characters and discuss the validity of each of them.

To make things a little more interesting, I took the character matrix from Strait et al. and included Australopithecus sediba. The characters for Australopithecus sediba were taken from the initial description of this fossil (Berger et al., 2010). This is the tree you get when you run the modified matrix through an analysis:

[Same character matrix, but with A. sediba]

Sediba ruined everything! What in the first tree looked like a nice and clear set of relationships has now collapsed into something almost completely unresolved.

To be clear, the taxonomic position of Homo habilis and Homo rudolfensis was never entirely settled. In fact, the latter species was established because the initial hypodigm (the total sum of all fossils which describe a species) of Homo habilis was so diverse in its morphology that it was split into two separate species; the "new" species was then called Homo rudolfensis. I won't go into the exact reasons why this was the case, since it would make this post too long, but I will eventually come back to this topic in another post.

Let's go back to Australopithecus sediba for the moment. It's not only that the fossil practically ruins the common picture of the relationships of early Homo; it's also very young. Right now, Australopithecus sediba is dated to about 1.9 million years. This is very young if you keep in mind that there are fossils of Homo habilis and Homo rudolfensis which are much older than 2 million years. There are also possible fossils of Homo ergaster/erectus which are only slightly younger than the sediba fossils. Now add the roughly 1.7-1.8-million-year-old remains from Dmanisi, Georgia, to this mess and you can see how complicated the whole story starts to look.

Fortunately, the tree I showed you at the beginning of this post isn't completely useless, since it shows that Australopithecus sediba falls somewhere within the group formed by Homo ergaster/erectus, Homo rudolfensis, and Homo habilis. So let's have a look at the possible relationships and the possible consequences of each scenario:

[Scenario in which A. sediba shares a last common ancestor with the genus Homo]

In this scenario, Australopithecus sediba would share a last common ancestor with the genus Homo. The only problem which arises from this tree is that you have to discuss what to do with the Homo rudolfensis and habilis fossils which pre-date the emergence of Australopithecus sediba in the fossil record.

All other scenarios basically ruin our contemporary picture of the genus Homo:

[Two of the possible relationships if A. sediba were placed somewhere within the genus Homo]

No matter which scenario we look at, none of them shows the genus Homo as a monophyletic group. This means that either we have to include Australopithecus sediba within the genus Homo - which I'm not very fond of, since it would lead to an even weaker definition of the genus - or we have to exclude Homo habilis and/or Homo rudolfensis from it. The genus Homo would then begin with Homo ergaster/Homo erectus, and everything before that species would sit either inside the genus Australopithecus or in a completely new genus.

Personally, I have no idea what to make of all this. Right now everything seems to contradict itself, and I think we need much more knowledge about this particular period of time. That means, of course, more fossils from this period, but also more research on the already known fossils.

What I think we can safely say right now is that the emergence of the genus Homo didn't happen in a gradualistic fashion where one species slowly evolved into the next. I think what we have here is a series of possibly independent speciation events. This would explain why we have so many species that look similar to one another but overlap in spatial as well as temporal terms, and whose phylogenetic relationships are completely unclear. I have some more thoughts on this matter, and I will write another post where I go into much more detail. For now, all I can say is that, although Australopithecus sediba completely ruins the contemporary phylogeny, it might help us really understand what happened back then.

References: ... Read more »
Berger, L., de Ruiter, D., Churchill, S., Schmid, P., Carlson, K., Dirks, P., & Kibii, J. (2010) Australopithecus sediba: A New Species of Homo-Like Australopith from South Africa. Science, 328(5975), 195-204. DOI: 10.1126/science.1184944
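The matrix-modification step mentioned in the post - making a morphological dataset digestible by tools built for molecular data - can be sketched in a few lines. This is only a guess at one common trick (recoding discrete character states as nucleotide letters and writing pseudo-FASTA); the taxa and states below are invented for illustration, not taken from Strait et al. (1997) or Berger et al. (2010).

```python
# Hypothetical morphological character matrix: taxon -> list of
# discrete character states ('?' marks a missing character).
matrix = {
    "H_sapiens": [0, 1, 1, 2, 0],
    "H_erectus": [0, 1, 0, 2, 1],
    "H_habilis": [1, 0, 0, 1, 1],
    "A_sediba":  [1, 0, "?", 1, 0],
}

# Recode discrete states as nucleotide letters so that software
# written for DNA alignments will accept the matrix.
STATE_TO_CHAR = {0: "A", 1: "C", 2: "G", 3: "T", "?": "N"}

def to_fasta(matrix):
    """Render the matrix as a FASTA string of pseudo-sequences."""
    records = []
    for taxon, states in matrix.items():
        seq = "".join(STATE_TO_CHAR[s] for s in states)
        records.append(f">{taxon}\n{seq}")
    return "\n".join(records)

print(to_fasta(matrix))
```

The resulting pseudo-FASTA can then be fed to online tree builders that expect sequence data; the assumption that the states map cleanly onto four letters (plus 'N' for missing) is mine, and only works for matrices with few states per character.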
Excerpts from the Personal Journal of Krystal D’Costa [i] Tuesday: I fell. Again. This time it was while getting out of the car. I’m not sure how I managed it. I got my foot caught on the door jamb and tumbled forward. I hit my shin—hard—against the door jamb and I think I tweaked my [...]
... Read more »
Brown JE, Chatterjee N, Younger J, & Mackey S. (2011) Towards a physiology-based measure of pain: patterns of human brain activity distinguish painful from non-painful thermal stimulation. PLoS ONE, 6(9). PMID: 21931652
Pia Haudrup Christensen. (1999) "It Hurts": Children's Cultural Learning About Everyday Illness. Stichting Ethnofoor, 12(1), 39-52.
A major international study threatens to overturn what we thought we knew about schizophrenia.

People with schizophrenia are more likely to get better if they live in poor countries: that's been known for about 25 years. In the 1980s, a series of pioneering World Health Organization (WHO) studies looked at the prognosis for people diagnosed with schizophrenia around the world. All of the data showed that people in developed countries were less likely to recover than those from poorer areas.

This paradoxical finding sparked no end of debate. What is it about these countries that makes them a better place to get schizophrenia? Patients in richer countries tend to have access to more and "better" psychiatric care, the latest drugs, and so on. Does this mean that those treatments are useless - or worse, harmful? That's been the interpretation of some people.

But is it true? Not always, says a new study, W-SOHO, out in the British Journal of Psychiatry. The authors compared schizophrenia outcomes in 37 countries, recruiting outpatients who were starting, or changing, antipsychotic medication. They found that in terms of "clinical" remission - i.e. improvement in the delusions, hallucinations, and other symptoms of schizophrenia - people in the developing world did indeed fare better than those from rich countries. Over a 3-year period, 80-85% of patients from East Asia, the Middle East, and Latin America who started off ill showed clinical remission, compared to 60-65% in Europe. That's not new: it confirms what the old WHO data showed.

But the new study also looked at "functional" remission - essentially, being able to participate in society: having good social functioning for a period of 6 months. Good social functioning included those participants who had: (a) a positive occupational/vocational status, i.e. paid or unpaid full- or part-time employment, being an active student in university or housewife; (b) independent living; and (c) active social interactions, i.e. having more than one social contact during the past 4 weeks or having a spouse or partner.

For functional remission, Northern Europe (e.g. the UK, France, Germany) was the best place to get sick, with 35% achieving it. Not a very high figure, but better than elsewhere: it was just 18% in the Middle East and 25% in East Asia, despite these areas having the highest chances of clinical remission. Latin America did pretty well, however, at 29%.

This is a very important finding, if it's true. Is it solid? First off, were Northern European patients just less ill to start with? Not really. They had the highest rates of suicide attempts. They tended to be older, and to have been diagnosed at a later age, which was correlated with worse functional remission. Regression analyses confirmed that region was a predictor of remission after controlling for all the other variables.

However, Northern European patients did tend to have better function at baseline. They were more likely to be employed, living independently, and socially active when they entered the study: 63% were living independently, which is much higher than anywhere else (it was 24% in the Middle East and Latin America), and 23% had a paid job, compared to 17-19% in developing countries. That's not a flaw in the study as such, but it does suggest that the differences, whatever they are, are already in place before people get treated.

One concern I have is that the definition of "functional remission" may be North Europe-centric. "Living independently" is something we aspire to, but in places with a strong tradition of the extended family household, the idea that it would be a bad thing for someone with schizophrenia to be living with their family might seem silly. If it means they'll be cared for and supported, what's wrong with it? And in terms of paid employment, Northern Europe just has a stronger economy than most other places (erm... well, it did back in 2000, when these data were collected), so maybe it's no surprise that people with schizophrenia there were more likely to have paid jobs.

In terms of the study itself, it was extremely large, with over 17,000 patients enrolled. But here's the thing: this study was run by Lilly, the drug company that makes olanzapine, an antipsychotic used in schizophrenia. Three of the authors on the paper are Lilly employees, and the lead author was a consultant for them. The study deliberately sampled lots of people taking olanzapine, presumably in order to find out whether they did better. None of this necessarily means that the data aren't valid, but I'm just not sure I trust Lilly over the WHO.... Read more »
Haro JM, Novick D, Bertsch J, Karagianis J, Dossenbach M, & Jones PB. (2011) Cross-national clinical and functional remission rates: Worldwide Schizophrenia Outpatient Health Outcomes (W-SOHO) study. The British Journal of Psychiatry, 199, 194-201. PMID: 21881098
The fish tapeworm Diphyllobothrium latum has reared its narrow scolex head around the world in freshwater fish-eating communities. This article looks at its history, culinary (mis)adventures and global travels. Included is a breathtaking video of the tapeworm in action.... Read more »
Scholz, T., Garcia, H., Kuchta, R., & Wicht, B. (2009) Update on the Human Broad Tapeworm (Genus Diphyllobothrium), Including Clinical Relevance. Clinical Microbiology Reviews, 22(1), 146-160. DOI: 10.1128/CMR.00033-08
What do stone axes have to do with the LIA?

In his famous paper entitled "The anthropogenic greenhouse era began thousands of years ago", Ruddiman put forward a fascinating idea: "CO2 oscillations of ~10 ppm in the last 1000 years are too large to be explained by external (solar-volcanic) forcing, but they can be explained by outbreaks of bubonic plague that caused historically documented farm abandonment in western Eurasia. Forest regrowth on abandoned farms sequestered enough carbon to account for the observed CO2 decreases. Plague-driven CO2 changes were also a significant causal factor in temperature changes during the Little Ice Age (1300–1900 AD)". There has been a lot of controversy surrounding Ruddiman's paper.

More recently, the idea that plagues caused farmland abandonment and were followed by re-forestation has been applied to the Amazon Basin and the LIA. Several scholars have proposed that the depopulation caused by the diseases that Europeans brought to the Americas after 1492 induced a large-scale re-forestation which, in turn, decreased the amount of atmospheric CO2 and contributed to the LIA [Dull et al., 2010; Faust et al., 2006; Nevle and Bird, 2008]. In order to assess the likelihood of this hypothesis we need to know i) population size in pre-Columbian America and ii) the kind of agriculture pre-Columbians practiced.

Citing Denevan, Nevle and Bird write that "Evidence for the habitation and modification of American landscapes by tens of millions of Pre-Columbian agriculturalists [Denevan, 1992] exists in the widespread distribution of anthropogenic Amazonian Dark Earth soils, raised fields, irrigated terrace zones, roads, aqueducts, and numerous large-scale earthworks distributed throughout Amazonia, the Andes, Central America, and parts of North America".

Many of the papers addressing this topic cite Denevan with regard to pre-Columbian population densities and agriculture. So, what are Denevan's views on the matter? I will focus on Amazonia, as it is the largest forested area in the world and most of the work on pre-Columbian population density and agriculture that is cited to support this hypothesis has been done in Amazonia (for example the works of Denevan himself, Erickson, and Heckenberger).

How many people lived in Amazonia in 1491?

The first estimate was given by Betty Meggers, who said that population density in pre-Columbian Amazonia was 0.3 people km⁻². She made no distinction between floodplains (varzea) and uplands (terra firme), because the varzea's fertility was offset by unexpected and destructive floods, which made the varzea as unsuitable for people as the terra firme.

Denevan then proposed a model in which people settled on the rivers' bluffs: they were able to take advantage of the varzea but avoided the danger of the floods. According to [Denevan, 1992], population density was 14.6 people km⁻² in the varzea and 0.2 people km⁻² in the terra firme forests. It is interesting that Denevan's estimate for terra firme is lower than Meggers' estimate. This is important, as terra firme represents 98% of the Amazonian rain forest.

In 2003, Denevan changed his mind and wrote: "For varzea population density would be 10.4 per square kilometer [...] For terra firme forests it is impossible to estimate an average population density and a total population [...] Estimating average population densities for the savannas with any confidence is impossible." Then he concluded: "...consequently I now reject the habitat-density method I used in the past to estimate a Greater Amazonia population in 1492 of from 5.1 to 6.8 million. I nevertheless still believe that a total of at least 5 to 6 million is reasonable" [Denevan, 2003].

The stone axes

Although Denevan has rejected his own estimate of 0.2 people km⁻² for terra firme, it is still important to highlight how he justified that his estimate was smaller than Meggers'. Denevan argues that pre-Columbians did not practice slash-and-burn agriculture because they did not have metal tools, and cutting the forest with stone axes would have been too much work. Hence, they preferred to live in savannahs, where they developed raised-field agriculture, or on the river bluffs, where Amazonian Dark Earth (ADE) sites are actually found. In Denevan's view, raised fields and ADE developed in order to minimize the need for clearing the forest: pre-Columbians preferred to build raised fields and ADE because such agricultural intensification required less work than cutting the forest with stone axes. The very same archaeological evidence that Nevle and Bird use to infer high rates of pre-Columbian deforestation is used by Denevan to infer that pre-Columbians actually did not cut the forest!

The questions I have should now be clear: 1) could such a small population of 0.2 people km⁻² have significantly modified the Amazon forests? 2) How did they have such an impact if they had to cut the forest with stone axes? 3) Do raised-field agriculture and ADE suggest high levels of deforestation? Or is it the other way round?

I don't want to be misinterpreted here; I am not saying that pre-Columbian population was small or t... Read more »
Ruddiman, W. (2003) The Anthropogenic Greenhouse Era Began Thousands of Years Ago. Climatic Change, 61(3), 261-293. DOI: 10.1023/B:CLIM.0000004577.17928.fa
Looking into subdural empyema - a meningeal infection you don't want - I stumbled upon a study from the roaring 1970s, the glorious Nixon-Ford-Carter years, using computerized axial tomography (hence, "CAT scan") to visualize lesions within the skull (Claveria et al. 1976). Nowadays people refer to various similar scanning techniques simply as "CT" (for computed tomography; this is not the same thing as magnetic resonance imaging, MRI).

It's pretty amazing how medical imaging has advanced in the 35 years since this study. For example, to the right is a CAT scan from Claveria et al. (1976, Fig. 4). These are transverse images ("slices") through the brain case, the top of the images corresponding to the front of the face. You can discern the low-density (darker) brain from the higher-density (lighter) bone - the sphenoid lesser wings and dorsum sellae, and the petrous pyramids of the temporal bones, are especially prominent in the top left image. In the bottom two images you can see a large, round abscess in the middle cranial fossa. Whoa.

What makes this medical imaging technique so great is that it allows a view inside of things without having to dissect them. Of course, the downside is that it relies on radiation, so ethically you can't be so cavalier as to CT scan just any living thing. If I'd been alive in 1976, CAT scanning would've blown my mind. Still, the image quality isn't super great here: there's not good resolution between materials of different densities, hence the grainy images.

But since then, some really smart people have been hard at work coming up with new ways to get better resolution from computerized tomography scans, and the results are pretty amazing. To the left is a slice from a synchrotron CT scan of the MH1 Australopithecus sediba skull (Carlson et al. 2011, supporting online material, Fig. S10). You're basically seeing the fossil face-to-face... if someone had cut off the first few centimeters of the fossil's face.
Just like the movie Face/Off.

Quite a difference from the image above. Here, we can distinguish fossilized bone from the rocky matrix filling in the orbit, brain case, and sinuses. Synchrotron imaging even distinguishes molar tooth enamel from the underlying dentin (see the square). The post-mortem distortion to the (camera right) orbit is clear. It also looks as though the hard palate is thick and filled with trabecular bone, as is characteristic of robust Australopithecus (McCollum 1999). Interesting...

Even more remarkable, the actual histological structure of bone can be imaged with synchrotron imaging. Mature cortical bone is composed of small osteons (or Haversian systems) that house bone cells and transmit blood vessels to help keep bone alive and healthy. Osteons are very tiny - submillimetric. To the right is a 3D reconstruction of an osteon and blood vessels, from synchrotron images (Cooper et al. 2011). The scale bar in the bottom right is 250 micrometers. MICROmeters! Note that the scan can distinguish the Haversian canal (red part in B-C) from vessels (white part in B). Insane!

Not only has image quality improved over the past few decades, but CT scanning is being applied outside the field of medicine for which it was developed; it's becoming quite popular in anthropology. What I'd like to do, personally, with such imaging is see whether it can be used to study bone morphogenesis - whether it can distinguish bone deposition from resorption, and show how these growth fields are distributed across a bone during ontogeny. This could allow the study of the proximate, cellular causes of skeletal form: how form arises through growth and development. If it could be applied to fossils, then we could potentially even see how these growth fields are altered over the course of evolution - how form evolves.

References

Carlson KJ, Stout D, Jashashvili T, de Ruiter DJ, Tafforeau P, Carlson K, & Berger LR (2011). The endocast of MH1, Australopithecus sediba. Science, 333(6048), 1402-7. PMID: 21903804... Read more »
Claveria, L., Boulay, G., & Moseley, I. (1976) Intracranial infections: Investigation by computerized axial tomography. Neuroradiology, 12(2), 59-71. DOI: 10.1007/BF00333121
Cooper, D., Erickson, B., Peele, A., Hannah, K., Thomas, C., & Clement, J. (2011) Visualization of 3D osteon morphology by synchrotron radiation micro-CT. Journal of Anatomy, 219(4), 481-489. DOI: 10.1111/j.1469-7580.2011.01398.x
McCollum, M. (1999) The Robust Australopithecine Face: A Morphogenetic Perspective. Science, 284(5412), 301-305. DOI: 10.1126/science.284.5412.301
“Man is born free, and everywhere he is in chains.”
With this famous sentence, Jean-Jacques Rousseau begins his masterful critique of political power. Less well known is another sentence from The Social Contract (1762): “No State has ever been founded without Religion serving as its base.”
My reading of history is that Rousseau was right. State-formation [...]... Read more »
Briquel, Dominique. (2007) Tages Against Jesus: Etruscan Religion in Late Roman Empire. Etruscan Studies, 10(1), 153-161.
In between two columns that represent the interior of an oikos stands a woman, reaching toward an altar; a wreath is hung on the wall behind her. A scene comparable with Menander's description of a domestic ritual boundary.
Musée du Louvre CA 1857. © Perseus 1992
In a number of brief posts over the past six years, I have discussed elements of the commonly known 'household' worship, supported with
... Read more »
Janett E. Morgan. (2007) Space and the notion of final frontier; Searching for ritual boundaries in the Classical Athenian home. Kernos, 113-129. http://kernos.revues.org/175
John Pedley. (2005) Sanctuaries and the Sacred in the Ancient Greek World. Cambridge University Press. info:/10.2277/052100635X
The notion of binaries or opposites is deeply entrenched in Western culture and thought. Although it seems perfectly natural to perceive and categorize the world in terms of dichotomies (black-white, either-or), what seems natural is actually learned. Our teacher in this regard is Aristotle, who was so impressed by the Pythagorean Table of Opposites that [...]... Read more »
Irwin, L. (1994) Dreams, Theory, and Culture: The Plains Vision Quest Paradigm. American Indian Quarterly, 18(2), 229. DOI: 10.2307/1185248
Everyone knows yawning is the pinkeye of social cues: powerfully contagious and not that attractive. Yet scientists aren't sure what the point of it is. Is yawning a form of communication that evolved to send some message to our companions? Or is the basis of yawning physiological, and its social contagiousness unrelated? A new paper suggests that yawning--even when triggered by seeing another person yawn--is meant to cool down overheated brains.
We're not the only species that feels compelled to yawn when we see others doing it. Other primates, and possibly dogs, have been observed catching a case of the yawns. But Princeton researcher Andrew Gallup thinks the root cause of yawning is in the body, not the mind. After all, we yawn when we're alone, not just when we're with other people.
Previously, Gallup worked on a study that involved sticking tiny thermometers into the brains of rats and waiting for them to yawn. The researchers observed that yawning and stretching came after a rapid temperature rise in the frontal cortex. After the yawn and the stretch, rats' brain temperatures dropped back to normal. The authors speculated that yawning cools the blood off (by taking in a large amount of air from outside the body) and increases blood flow, thereby bringing cooler blood to the brain.
If yawning's function is to cool the brain, Gallup reasoned, then people should yawn less often when they're in a hot environment. If the air outside you is the same temperature as your body, it won't make you less hot.
To test that theory, researchers went out into the field--namely, the sidewalks of Tucson, Arizona--in both the winter and the summer. They recruited subjects walking down the street (80 people in each season) and asked them to look at pictures of people yawning. Then the subjects answered questions about whether they yawned while looking at the pictures, how much sleep they'd gotten the night before, and how long they'd been outside.
The researchers found that the main variable affecting whether people yawned was the season. It's worth noting that "winter" in Tucson was a balmy 22 degrees Celsius (71 degrees Fahrenheit), while summer was right around body temperature. In the summer, 24% of subjects reported yawning while they looked at the pictures. In the winter, that number went up to 45%.
Additionally, the longer people had been outside in the summer heat, the less likely they were to yawn. But in the winter, the opposite was true: People were more likely to yawn after spending more time outside. Gallup speculates that because the testing took place in direct sunlight, subjects' bodies were heating up, even though the air around them remained cooler. So a yawn became more refreshing to the brain the longer subjects stood outside in the winter, but only got less refreshing as they sweltered in the summer.
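As a rough sense-check of whether that seasonal split is bigger than chance would produce, the reported proportions can be run through a hand-rolled Pearson chi-square test. This is my own back-of-the-envelope illustration, not an analysis from the paper; the counts are inferred from the percentages in the text (24% of 80 summer walkers is about 19 yawners; 45% of 80 winter walkers is 36).

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected cell counts under independence of row and column.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

summer_yawn, summer_no = 19, 61   # ~24% of 80 (inferred)
winter_yawn, winter_no = 36, 44   # 45% of 80 (inferred)

chi2 = chi_square_2x2(summer_yawn, summer_no, winter_yawn, winter_no)
# With df = 1, values above 3.84 are significant at p < .05.
print(round(chi2, 2))
```

On these assumed counts the statistic comes out well above the 3.84 cutoff, consistent with the paper treating the seasonal difference as real rather than noise.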
The study used contagious yawning rather than spontaneous yawning, presumably because it's easier to hand subjects pictures of yawning people than to aggressively bore them. Gallup notes that contagious and spontaneous yawning are physically identical ("a stretching of the jaw and a deep inhalation of air," if you were wondering), so one can stand in for the other. Still, it would be informative to study people in a more controlled setting--in a lab rather than on the street, and preferably not aware that they're part of a yawning study.
A lab experiment would also allow researchers to directly observe whether their subjects yawned, rather than just asking them. In the field, researchers walked away while subjects were looking at the pictures, since people who know they're being watched are less likely to yawn. But self-reported results might not be accurate. The paper points out that "four participants in the winter condition did not report yawning during the experiment but yawned while handing in the survey to the experimenter."
Still, it seems there's a real connection between brain temperature and yawning. It will take more research (and more helplessly yawning subjects) to elucidate exactly what the connection is. Even if brain temperatures always rise right before a yawn and fall afterward, cooling the brain might not be the point of the yawn--another factor could be causing the impulse to yawn, and the temperature changes could be a side effect. Studying subjects in a truly cold environment, and showing that they are once again less likely to yawn (because outside air would cool their brains too much), would provide another piece of evidence that temperature triggers the yawn in the first place.
None of this tells us why yawning is so catching, though. Personally, I think I yawned at least a thousand times while reading and writing about this paper. Maybe I should have taken some advice from an older study by Andrew Gallup, which found that you can inhibit yawning by breathing through your nose or putting something chilly on your forehead.
Photo: Wikipedia/National Media Museum
... Read more »
Andrew C. Gallup, & Omar Tonsi Eldakar. (2011) Contagious yawning and seasonal climate variation. Frontiers in Evolutionary Neuroscience.
Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember. Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.

But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA, there is good evidence that more people have started taking antidepressants over the past 15 years. In the UK, however, it's a bit more tricky. Antidepressant prescriptions have certainly risen. However, a large 2009 study revealed that, between 1993 and 2005, there was no significant rise in people starting on antidepressants for depression. Rather, the rise in prescriptions was caused by patients getting more prescriptions each: the same number of users were using more antidepressants.

Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different set of data. Pauline Lockhart and Bruce Guthrie looked at pharmacy records of drugs actually dispensed, not just prescribed, and their data only cover a specific region, Tayside in Scotland; the 2009 study was nationwide.

So what happened? The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient. However, unlike the old study, this one found an increase in the number of people who used antidepressants each year. It went up from 8% of the population in 1995 to 13% in 2007 - an extremely high figure, higher even than in the USA. In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold. The increase was seen across men and women of all ages and social classes. There's no good evidence of an increase in mental illness in Britain over this period, by the way.

But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't about people with depression. The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - like anxiety, insomnia, pain, and premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.... Read more »
Lockhart, P. and Guthrie, B. (2011) Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice. info:/
Lots of press has been given in the past week to two late 7th to early 9th century burials found at the site of Kilteasheen in Ireland. According to the news reports and the documentary (which won't air in the U.S. until 2012, but which you can see on YouTube... for now), archaeologists excavating at the site from 2005-2009 uncovered over 130 graves. Two of them - both males - were buried with stones in their mouths, and one of the men also had a large stone on top of his torso. Aside from a 2008 report of a 4,000-year-old burial, these two early 8th century Irish burials seem to be the oldest evidence of what may be the practice of preventing "revenants" (zombies, vampires, and other undead people) from returning to the land of the living.
8th century male burial from Kilteasheen, Ireland,
with large stone on and under torso
(screencap from documentary)
Both Dorothy King (PhDiva) and Michelle Ziegler (Contagions) have already blogged about this. Dorothy points out some of the other evidence for "vampire" burials in Europe, such as the 10th-11th century cemetery of Celakovice near Prague that held a dozen people who were buried oddly (as with rocks in their mouths) and the so-called Vampire of Venice, a 60-year-old woman from a 1576 plague cemetery in Italy, who was buried with a large rock in her mouth (Nuzzolese and Borrini 2010).
The tradition of weighting down or otherwise defiling corpses (as with nails through the temple and stakes through the heart) seems to be a long one in Europe, born out of a fear of the dead that was related to the rise of Christianity, the lack of understanding of germ theory, and the increase in epidemic diseases.
There weren't, for example, vampires in Rome. The Romans actually had ongoing relationships with the dead, running pipes from the ground to the grave below in order to offer them food and drink and celebrating them at least once a year in the Parentalia. The Judeo-Christian idea that the dead should go into the ground and stay there means that deviations from this practice - as hair and nails seemed to grow after death, for example - probably caused a lot of general freaking out. But the simple introduction of monotheism may also have caused cultural stress, particularly in 7th century England, when kings were converting to Christianity and people were no longer sure what to believe.
Michelle points out that Ireland suffered through two major epidemics of bubonic plague, in 664 and 683, followed by a massive famine in 700. Based on the C14 dates reported in the documentary, it's possible these two burials date as early as the late 7th century. Rocks in or on the body of the deceased may have been meant to pin the person into the grave to prevent that person from rising or coming back, or may have been placed there because the mouth was believed to be the opening through which the soul escaped. But rocks may also have been important in the mitigation of disease. Many of the archaeological examples of skeletons with mouth-rocks are assumed to have come from plague cemeteries. Some of the symptoms of bubonic plague are delirium, heavy breathing, and continuous vomiting of blood. People knew that plague could spread but didn't understand how, so blocking a person's mouth may have been an attempt to prevent the spread of the disease. The sight of a terminally ill person coughing up blood could even have been the catalyst for the invention of vampires, as a cultural explanation for disease before the advent of germ theory.
The Kilteasheen burials are likely too late to be plague-related, but even a small urban center could have had endemic tuberculosis, which causes some similar symptoms, like bloody sputum. I don't think a disease-based explanation can be completely ruled out for these burials.
8th century burial of a male, Kilteasheen, Ireland,
with stone in the mouth
(credit: Chris Read, found at MSNBC.com)
In sum, we can't be certain of the meaning of these Irish burials, but the long tradition of incapacitating the dead to prevent them from becoming revenants, coupled with historical records of disease epidemics, suggests that the people who buried these men likely had a good reason for wanting them to stay dead.
Excavators at Kilteasheen estimate that there are around 3,000 burials at the site, so I suspect we'll be hearing more about this cemetery in the years to come. It will be interesting in particular to see if other burials in the cemetery were given the same mouth-rock treatment and whether the practice dates only to the 8th century or continues to later periods of the cemetery's use.
Watch the documentary, Mysteries of the Vampire Skeletons, on YouTube:
Further Reading and References:
McLeod, J. 2010. Vampires, a Bite-Sized History. Pier 9. [Google Books]
Nuzzolese E, & Borrini M. 2010. Forensic approach to an archaeological casework of "vampire" skeletal remains in Venice: odontological and anthropological prospectus. Journal of Forensic Sciences, 55 (6), 1634-7. PMID: 20707834.
Rickels, L. 1999. The Vampire Lectures. University of Minnesota Press. [Google Books]
Tsaliki, A. 2001. Vampires beyond legend - a bioarchaeological approach. In Proceedings of the XIII European Meeting of the Paleopathology Association, ed. M. La Verghetta and L. Capasso, pp. 295-300. [Read here]
Tsaliki, A. 2008.... Read more »
Nuzzolese E, & Borrini M. (2010) Forensic approach to an archaeological casework of "vampire" skeletal remains in Venice: odontological and anthropological prospectus. Journal of Forensic Sciences, 55(6), 1634-7. PMID: 20707834
A week and a half ago, Kibii and colleagues (2011) published reconstructions and re-analyses of two hips belonging to the 1.98 million-year old Australopithecus sediba. As with many fossil discoveries, these additions to the fossil record raise more questions than they answer. Unless the question was, "did A. sediba have a pelvis?" It did. Here's a good summary from the paper itself:

Thus, Au. sediba is australopith-like in having a long superior pubic ramus and an anteriorly positioned and indistinctly developed iliac pillar...[and] Homo-like in having vertically oriented and sigmoid shaped iliac blades, more robust ilia, and a narrow tuberoacetabular sulcus...and the pubic body is upwardly rotated as in Homo. (p. 1410, emphases mine)

So far as I can tell, the main way the hips are 'advanced' toward a more human-like condition is that the iliac blades are more upright and sweep forward more than in earlier known hominid hips. Here's figure 2 from the paper (more sweet pics of the fossils are available here). NB that in both A. sediba hips much of the upper portions of the iliac blades are missing (reconstructed in white; this region is missing in lots of fossils), so it's possible they were more flaring like the australopith in the center photo.

The authors' bottom-line, take-home point is that the A. sediba pelvis has features traditionally associated with large-brained Homo - but belonged to a small-brained species (based solely on the ~430 cc MH1 endocast). They argue that this means that many of these unique pelvic features did not evolve in the context of birthing large-brained babies, as has often been thought. They state that these features are thus "most parsimoniously attributed to altered biomechanical demands on the pelvis in locomotion," and suggest that this hypothetical locomotion was mostly bipedalism but with a good degree of climbing. Maybe, maybe not. This interpretation is consistent with the analysis of the A. sediba foot/ankle (Zipfel et al. 2011).

The weird mix of ancient (australopith-like) and newer (Homo-like) pelvic features in A. sediba really raises the question of how australopithecines moved around. More intriguing is that the A. sediba pelvis has different Homo-like features than the ~1 million year old Busidima pelvis (Simpson et al. 2008), which has been attributed to Homo erectus (largely in aspects of the iliac blades). This raises the question of whether A. sediba is really pertinent to the origins of the genus Homo, and whether the Busidima pelvis belongs to Homo erectus or a late-surviving robust australopith (e.g. boisei; Ruff 2010).

Also interesting is that the subpubic angle (in the pic above, the upside-down "V" created by the pubic bones just above the red labels) is pretty low in MH2. This is curious because modern human males and females differ in how large this angle is - females tend to have a large angle, which contributes to an enlarged birth canal, whereas males have a low angle like MH2. But MH2 is considered female based on skeletal and dental size. This raises the additional questions of whether human-like sexual dimorphism had yet evolved in hominids by 1.9 million years ago, and whether the sex of MH2 was accurately described.

Finally, though the authors did a great job comparing this pelvis with those from other hominids, I think a major, more comprehensive comparative review of hominid pelves is in order. How does the older A. afarensis hip from Woranso (Haile-Selassie et al. 2010) inform australopithecine pelvic evolution? What about the possibly-contemporary-maybe-later hip from the nearby site of Drimolen (Gommery et al. 2002)? Given the subadult status of the MH1 individual, it would be interesting to compare with the WT 15000 Homo erectus fossils, or A. africanus subadults from Makapansgat, to examine the evolution of pelvic growth.

Lots of interesting questions arise from these fascinating new fossils. "The more you know," right?

References
Gommery, D. (2002). Description d'un bassin fragmentaire de Paranthropus robustus du site Plio-Pléistocène de Drimolen (Afrique du Sud) [A fragmentary pelvis of Paranthropus robustus from the Plio-Pleistocene site of Drimolen (Republic of South Africa)]. Geobios, 35(2), 265-281. DOI: 10.1016/S0016-6995(02)00022-0
Haile-Selassie Y, Latimer BM, Alene M, Deino AL, Gibert L, Melillo SM, Saylor BZ, Scott GR, & Lovejoy CO (2010). An early Australopithecus afarensis postcranium from Woranso-Mille, Ethiopia. Proceedings of the National Academy of Sciences of the United States of America, 107(27), 12121-6. PMID: 20566837
Kibii, J., Churchill, S., Schmid, P., Carlson, K., Reed, N., de Ruiter, D., & Berger, L. (2011). A Partial Pelvis of Australopithecus sediba. Science, 333(6048), 1407-1411. DOI: 10.1126/science.1202521
Ruff, C. (2010). Body size and body shape in early hominins – implications of the Gona Pelvis. Journal of Human Evolution, 58(2), 166-178. DOI: 10.1016/j.jhevol.2009.10.003... Read more »
Gommery, D. (2002) Description d'un bassin fragmentaire de Paranthropus robustus du site Plio-Pléistocène de Drimolen (Afrique du Sud) [A fragmentary pelvis of Paranthropus robustus from the Plio-Pleistocene site of Drimolen (Republic of South Africa)]. Geobios, 35(2), 265-281. DOI: 10.1016/S0016-6995(02)00022-0
Haile-Selassie Y, Latimer BM, Alene M, Deino AL, Gibert L, Melillo SM, Saylor BZ, Scott GR, & Lovejoy CO. (2010) An early Australopithecus afarensis postcranium from Woranso-Mille, Ethiopia. Proceedings of the National Academy of Sciences of the United States of America, 107(27), 12121-6. PMID: 20566837
Kibii, J., Churchill, S., Schmid, P., Carlson, K., Reed, N., de Ruiter, D., & Berger, L. (2011) A Partial Pelvis of Australopithecus sediba. Science, 333(6048), 1407-1411. DOI: 10.1126/science.1202521
Ruff, C. (2010) Body size and body shape in early hominins – implications of the Gona Pelvis. Journal of Human Evolution, 58(2), 166-178. DOI: 10.1016/j.jhevol.2009.10.003
Simpson, S., Quade, J., Levin, N., Butler, R., Dupont-Nivet, G., Everett, M., & Semaw, S. (2008) A Female Homo erectus Pelvis from Gona, Ethiopia. Science, 322(5904), 1089-1092. DOI: 10.1126/science.1163592
Why does nature allow us to lie to ourselves? Humans are consistently and bafflingly overconfident. We consider ourselves more skilled, more in control, and less vulnerable to danger than we really are. You might expect evolution to have weeded out the brawl-starters and the X-Gamers from the gene pool and left our species with a firmer grasp of our own abilities. Yet our arrogance persists.
In a new paper published in Nature, two political scientists say they've figured out the reason. There's no mystery, they say; it's simple math.
The researchers created an evolutionary model in which individuals compete for resources. Every individual has an inherent capability, or strength, that simply represents how likely he or she is to win in a conflict. If an individual seizes a resource, the individual gains fitness. If two individuals try to claim the same resource, they will both pay a cost for fighting, but the stronger individual will win and get the resource.
Of course, if everyone knew exactly how likely they were to win in a fight, there would be no point in fighting. The weaker individual would always hand over the lunch money or drop out of the race, and everyone would go peacefully on their way. But in the model, as in life, there is uncertainty. Individuals decide whether a resource is worth fighting for based on their perception of their opponents' strength, as well as their perception of their own strength. Both are subject to error. Some individuals in the model are consistently overconfident, overestimating their capability, while others are underconfident, and a few are actually correct.
Using their model, the researchers ran many thousands of computer simulations that showed populations evolving over time. They found that their simulated populations, beginning with individuals of various confidence levels, eventually reached a balance. What that balance was, though, depended on their circumstances.
When the ratio of benefits to costs was high--that is, when resources were very valuable and conflict was not too costly--the entire population became overconfident. As long as there was any degree of uncertainty in how individuals perceived each other's strength, it was beneficial for everyone to overvalue themselves.
At intermediate cost/benefit ratios, where neither costs nor benefits strongly outweighed the other, the computerized populations reached a stable mix of overconfident and underconfident individuals. Neither strategy won out; both types of people persisted. In general, the more uncertainty was built into the model, the more extreme individuals' overconfidence or underconfidence became.
When the cost of conflict was high compared with the benefit of gaining a resource, the entire population became underconfident. Without having much to gain from conflict, individuals opted to avoid it.
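The model dynamics described above can be sketched in a toy simulation. This is a simplified illustration, not the authors' actual model: the population size, mutation rate, distributions, and pairing scheme are all invented assumptions, chosen only to show how the benefit-to-cost ratio pushes the evolved confidence bias up or down.

```python
import random

def simulate(benefit, cost, noise=1.0, pop_size=200, generations=300, seed=0):
    """Toy evolutionary model of confidence bias (illustrative assumptions only).

    Each individual carries a heritable bias added to its self-assessment:
    positive = overconfident, negative = underconfident. Pairs contest a
    resource; each claims it if its biased self-estimate exceeds its noisy
    estimate of the opponent. Returns the population's mean bias at the end.
    """
    rng = random.Random(seed)
    biases = [rng.gauss(0.0, 1.0) for _ in range(pop_size)]  # mixed start
    for _ in range(generations):
        fitness = [1.0] * pop_size
        order = list(range(pop_size))
        rng.shuffle(order)
        for i, j in zip(order[0::2], order[1::2]):
            cap_i = rng.gauss(0.0, 1.0)  # true capability in this contest
            cap_j = rng.gauss(0.0, 1.0)
            # Perception of the opponent's strength is subject to error.
            i_claims = cap_i + biases[i] > cap_j + rng.gauss(0.0, noise)
            j_claims = cap_j + biases[j] > cap_i + rng.gauss(0.0, noise)
            if i_claims and j_claims:      # fight: both pay, stronger wins
                fitness[i] -= cost
                fitness[j] -= cost
                if cap_i > cap_j:
                    fitness[i] += benefit
                else:
                    fitness[j] += benefit
            elif i_claims:                 # uncontested claim
                fitness[i] += benefit
            elif j_claims:
                fitness[j] += benefit
        # Fitness-proportional reproduction with small mutations.
        floor = min(fitness)
        weights = [f - floor + 1e-6 for f in fitness]
        parents = rng.choices(range(pop_size), weights=weights, k=pop_size)
        biases = [biases[p] + rng.gauss(0.0, 0.05) for p in parents]
    return sum(biases) / pop_size
```

Under these assumed settings, running the sketch with a high benefit-to-cost ratio (e.g. `simulate(benefit=5, cost=1)`) should yield a higher mean bias than the reverse (`simulate(benefit=1, cost=5)`), mirroring the over- and underconfident regimes described above.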
The authors speculate that humans' tendency toward overconfidence may have evolved because of a high benefit-to-cost ratio in our past. If the resources available to us were valuable enough, and the price of conflict was low enough, our ancestors would have been predicted to evolve a bias toward overconfidence.
Additionally, our level of confidence doesn't need to wait for evolution to change; we can learn from each other and spread attitudes rapidly through a region or culture. The researchers call out bankers and traders, sports teams, armies, and entire nations for learned overconfidence.
Though our species' arrogance may have been evolutionarily helpful, the authors say, the stakes are higher today. We're not throwing stones and spears at each other; we have large-scale conflicts and large-scale weapons. In areas where we feel especially uncertain, we may be even more prone to grandiosity, like the overconfident individuals in the model who gained more confidence when they had less information to go on. When it comes to negotiating with foreign leaders, anticipating natural disasters, or taking a stand on climate change, brains that have evolved for self-confidence could get us in over our heads.
Johnson, D., & Fowler, J. (2011) The evolution of overconfidence. Nature, 477(7364), 317-320. DOI: 10.1038/nature10384... Read more »
Just a short time ago, I had a paper at the European Association of Archaeologists meeting in Oslo. I unfortunately couldn't attend the conference, so Rob Tykot presented it. The paper was fun to write, though, and lays out the bioarchaeological evidence (albeit sparse at the moment) for women who immigrated to Imperial Rome. Following is the complete presentation. Comments are always welcome!
Foreign women in Imperial Rome: the isotopic evidence
K. Killgrove, Vanderbilt University
R. Tykot, University of South Florida
J. Montgomery, Durham University
A significant amount of classical scholarship over the years has been dedicated to understanding the demographic make-up of the population of Imperial Rome. Without a proper census, however, classical demographers lack several key pieces of information necessary for reconstructing the number of citizens, slaves, and foreigners at Rome (Noy 2000:16).
Tombstones provide the most solid evidence of immigrants who died in Rome. Here we have an example of the inscription on a tombstone of a soldier, noting he was from Noricum (Corpus Inscriptionum Latinarum vi 3225, translated in Noy 2011). For the most part, though, the epigraphic habit was largely the province of the wealthy, educated elite, leaving us with little information about the lower classes. Demographic estimates of foreigners at Rome range from 5% to 35%, suggesting that as many as one out of every three people in Rome arrived there from elsewhere. Below is the inscription from a large tomb that a group of freed slaves built in Rome (Année Epigraphique 1972, 14, translated in Noy 2011). They all appear to have belonged to the same household (as they share a name and the designation C.L., “freed slave of Gaius”) yet came to Rome from various places: Greece, Asia Minor, and north Africa. The practice of commemorating one’s homeland is rare, though, and it is unclear how many slaves and free immigrants came from Italy or from further afield in the Empire (Morley 1996, p. 39). Finally, the epigraphical record of immigrants to Rome is gender-biased, as the vast majority of inscriptions that mention immigrants are those of males (Noy 2000, p. 60). Part of this bias is attributable to the commemoration of soldiers, but males outnumber females three to one even in civilian immigrant inscriptions (Noy 2000, p. 61, Table 2).
In order to learn more about female immigrants to the Imperial capital, we undertook a bioarchaeological study of human skeletal remains from Rome. Through a combination of isotope analyses, palaeopathology, and burial style, we identified previously unknown female immigrants in the archaeological record of Rome and were able to reconstruct key aspects of their life histories.
Our skeletal material comes from the cemetery of Casal Bertone, which was located just 2 km from the center of Rome and was in use from the 2nd-3rd centuries AD (Musco et al. 2008). The majority of the graves were located within a simple necropolis, which included unmarked pit burials as well as burials a cappuccina. An above-ground mausoleum that slightly postdates the necropolis was found as well, and it may have held people of higher social status. Out of the 138 burials, we chose a stratified sample of 30 adults to subject to strontium, oxygen, carbon, and nitrogen isotope analyses – 19 males and 11 females.
This graph shows the strontium and oxygen isotope results for the first molars of adults from Casal Bertone. The approximate isotope range of Rome is represented by a box comprising the upper and lower bounds of expected Sr and O values. No other Sr studies in the Italian peninsula have been done on human skeletal remains, so the local range was estimated conservatively using geochemical modeling that took into account the fact that Rome was supplied by aqueducts that drew water from sources with distinctly different geology than is found in the volcanic Alban Hills (Killgrove 2010a, 2010b). By combining Sr with an O range from previously published human skeletal data (Prowse et al. 2007), however, it is easier to see nonlocals. Here, females T82A and T39 are fairly clearly immigrants to Rome because of low/high O isotopes and rather low Sr. T42, on the other hand, is a borderline case since measurement error could put her within the local O range for Rome. Clearly, though, isotope analysis of human skeletal remains is a viable way to identify female immigrants in the bioarchaeological record of Rome, particularly those who were not commemorated as such on tombstones.
Three of the 11 females we tested (27%) were probably immigrants to Rome. Out of the 19 males studied, 6 were immigrants (32%). More interesting, though, is the sex ratio in the immigrant population. Whereas the sex ratio in tombstones that commemorate immigrants at Rome is 78% male versus 22% female, the ratio of immigrants identified through skeletal evidence is 67% male versus 33% female. This is, granted, a small sample, but it suggests that the bias towards male immigrants may in the future be rectified by studying skeletal data.
Epigraphy does occasionally tell us a little about the lives of female immigrants. The tombstone of freedwoman Valeria Lycisca, for example, specifically notes that she came to Rome at age 12 (Corpus Inscriptionum Latinarum vi 28228). Isotope analysis of the skeleton can give us similar information, in that it can help narrow the window of time in which a person immigrated. Two of the Casal Bertone female immigrants – T39 and T82A – also had third molars that could be subjected to Sr isotope analysis. Both produced M3 Sr values that were very close to their M1 values. The difference between T39’s first and third molars is .00016, and the difference between T82A’s first and third molars is .00017. Their M3 values still place them towards the low end of the calculated Sr range of Rome. People in the low end were probably immigrants from an area with younger geology (such as the southern, volcanic areas of Italy); however, it is possible people in this range were locals who consumed a significant amount of Roman aqueduct water (roughly 90% of all water consumed) throughout childhood. Oxygen isotope analysis on the M3s has not yet been done. Based on the small differences between these women’s M1s and M3s, it is likely that both immigrated to Rome after the development of their M3s was complete. Therefore, T39, a woman of about 15-17 years at the time of her death, likely died shortly after arrival in Rome.
... Read more »
K. Killgrove. (2010) Identifying immigrants to Imperial Rome using strontium isotope analysis. Journal of Roman Archaeology. info:/
Prowse TL, Schwarcz HP, Garnsey P, Knyf M, Macchiarelli R, & Bondioli L. (2007) Isotopic evidence for age-related immigration to imperial Rome. American Journal of Physical Anthropology, 132(4), 510-9. PMID: 17205550
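The screening logic in the Casal Bertone presentation above - a box of expected local Sr and O isotope values, with individuals falling outside the box flagged as probable immigrants - can be sketched as a simple range check. The numeric bounds below are invented placeholders for illustration, not the study's published values.

```python
def classify_origin(sr_ratio, d18o,
                    sr_local=(0.7079, 0.7103),  # placeholder local Sr bounds
                    o_local=(24.5, 27.5)):      # placeholder local d18O bounds
    """Flag an individual as 'local' or 'nonlocal' depending on whether the
    tooth-enamel 87Sr/86Sr ratio and the d18O value both fall inside the
    estimated local range box. Bounds are hypothetical, for illustration."""
    in_sr = sr_local[0] <= sr_ratio <= sr_local[1]
    in_o = o_local[0] <= d18o <= o_local[1]
    return "local" if (in_sr and in_o) else "nonlocal"
```

With these made-up bounds, a sample inside the box (e.g. `classify_origin(0.7090, 25.5)`) comes back `"local"`, while one with an out-of-range Sr ratio (e.g. `classify_origin(0.7120, 25.5)`) comes back `"nonlocal"`. Borderline cases like T42, where measurement error straddles the box edge, are exactly where a hard cutoff like this needs human judgment.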
A group of researchers from the IPHES (Institut Català de Paleoecologia Humana i Evolució Social) reports on the discovery of a handheld wooden implement from Mousterian deposits at Abric Romaní, Spain. The tool was found in Level P, which dates to about 56,000 years BP, and its morphology suggests that it might have been a small spade/shovel, or perhaps a poker, given its association with a hearth... Read more »
CARBONELL, E., & CASTRO-CUREL, Z. (1992) Palaeolithic wooden artefacts from the Abric Romani (Capellades, Barcelona, Spain). Journal of Archaeological Science, 19(6), 707-719. DOI: 10.1016/0305-4403(92)90040-A
Castro-Curel, Z., & Carbonell, E. (1995) Wood Pseudomorphs From Level I at Abric Romani, Barcelona, Spain. Journal of Field Archaeology, 22(3), 376-384. DOI: 10.1179/009346995791974206
Vallverdú, J., Vaquero, M., Cáceres, I., Allué, E., Rosell, J., Saladié, P., Chacón, G., Ollé, A., Canals, A., Sala, R.... (2010) Sleeping Activity Area within the Site Structure of Archaic Human Groups. Current Anthropology, 51(1), 137-145. DOI: 10.1086/649499
It is well known that the modern world religions which trace their origins to the Axial Age are centrally concerned with death. Some might call this concern an obsession. Of these world religions, only Hinduism does not have Axial roots. This is not to say that “Hinduism” (which is neither singular nor unified) was unaffected [...]... Read more »
Blackburn, Stuart H. (1985) Death and Deification: Folk Cults in Hinduism. History of Religions, 24(3), 255-274. info:/
What's the secret to becoming a good father? What would William Cosby do?

I for one have no idea BUT! a study published today in PNAS early edition finds an association between studly vs. paternal behavior, and levels of everyone's favorite hormone, testosterone (T).

Using longitudinal data, researchers (Gettler et al. in press) found that, in general, a young guy with higher levels of circulating T is more likely than a guy with low T to become a father within a few years. MOREOVER! this erstwhile-high-T-and-now-father then experiences a relatively sharper decrease in T than would be expected simply because of aging. PLUS! fathers who interacted with their kids on a daily basis had lower T than fathers who didn't hang around their kids too often.

One thing neat about this study is that it uses longitudinal instead of cross-sectional data. A cross-sectional version of this study would've sampled a bunch of dudes (hopefully somewhat randomly) only once. This can be problematic because it's then hard to interpret the results in light of the many sources of variation between people. This study, on the other hand, sampled a tonne (n = 694) of guys on more than one occasion, so the researchers can tell how individuals' testosterone levels tend to change in paternal vs. free-spirited circumstances.

The last line of the paper is pretty intriguing: "[these results] add to the evidence that human males have an evolved neuroendocrine architecture shaped to facilitate their role as fathers and caregivers as a key component of reproductive success" (Gettler et al. in press: p. 5/6). This is especially interesting in light of the Ardipithecus ramidus-related evidence for a great antiquity of humans' paternal proclivity (Lovejoy 1981, Lovejoy 2009). Just how and why testosterone responds to/mediates this fatherly 'reproductive strategy' is mysterious to me. And of course, linking this hormonal phenomenon with anything as old as Ardi is a challenge I'm certainly not up to. Still neat, though.

My personal circulating T levels are consistently through the roof. So in the event that I become a father, it will be interesting to see if the subsequent, astronomical hormone drop, predicted by this study, won't cause my entire body to collapse in on itself.

References
Gettler LT et al. in press. Longitudinal evidence that fatherhood decreases testosterone in human males. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1105403108
Lovejoy, C. (1981). The Origin of Man. Science, 211(4480), 341-350. DOI: 10.1126/science.211.4480.341
Lovejoy CO (2009). Reexamining human origins in light of Ardipithecus ramidus. Science (New York, N.Y.), 326(5949), 740-8. PMID: 19810200
Photo credit: google (image) "Bill Cosby Fatherhood"... Read more »
Lovejoy CO. (2009) Reexamining human origins in light of Ardipithecus ramidus. Science (New York, N.Y.), 326(5949), 740-8. PMID: 19810200
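The longitudinal-versus-cross-sectional point in the post above can be illustrated with a toy dataset. Everything here is invented for illustration: the sample size, baseline levels, and effect sizes are assumptions, not the study's numbers. The sketch just shows the quantity a longitudinal design gives you that a single-timepoint survey cannot: each man's own change over time.

```python
import random

def make_panel(n=500, seed=1):
    """Hypothetical longitudinal panel: each man's testosterone is sampled
    twice, years apart. Higher-T men are more likely to become fathers, and
    fathers incur an extra T drop beyond aging. All numbers are invented."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        t1 = rng.gauss(500.0, 80.0)                    # baseline T (invented units)
        became_father = rng.random() < (t1 - 300.0) / 400.0
        drop = rng.gauss(30.0, 10.0)                   # aging-related decline
        if became_father:
            drop += rng.gauss(80.0, 20.0)              # extra fatherhood decline
        rows.append((t1, t1 - drop, became_father))
    return rows

def mean_change(rows, father):
    """Average within-person change (T2 - T1) for one group: the statistic
    only repeated measurements of the same individuals can provide."""
    deltas = [t2 - t1 for t1, t2, f in rows if f == father]
    return sum(deltas) / len(deltas)
```

In this toy panel, fathers show a larger within-person decline than non-fathers. A cross-sectional design, seeing each man only once, would have to untangle "fatherhood lowers T" from "men who became fathers started out different" using between-person comparisons alone; the within-person deltas sidestep that problem.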