Post List

Anthropology posts


  • October 4, 2011
  • 09:45 PM

The Millet-Eaters of the Roman Empire

by Kristina Killgrove in Powered By Osteons

Just a few days ago, only the second isotope study of millet consumption in the Roman Empire was published, by Pollard and colleagues in the American Journal of Physical Anthropology.  In a small Romano-British cemetery in Kent (late 3rd-early 4th century AD), a salvage archaeology project uncovered a dozen burials that were simple in nature: only coffin nails and hobnails from boots were found in most graves.  Among these simple farmers, though, was an individual with a surprisingly high carbon isotope value, so Pollard and colleagues undertook a dietary (C/N) and migration (Sr/O) study of the individuals.

The anomalous, partially complete skeleton was that of a male over the age of 45, buried wearing hobnail boots. The individual's nitrogen isotope ratio was a bit high (11.2 permil), indicating aquatic resource consumption, but was not higher than average for Roman Britain.  His carbon isotope ratio from collagen, however, came in at -15.2 permil, in stark contrast to the average of the other individuals of -19.8 permil (see below).  This difference may not seem dramatic until you factor in the standard deviation - variation within the d13C ratios of the others from the site was only 0.3!  This person was therefore eating a whole bunch of C4 resources - millet, sorghum, or animals foddered on those grains.
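
To see just how extreme that collagen value is, here is a back-of-the-envelope calculation using the numbers quoted above (framing it as a z-score is my addition, not the paper's analysis):

```python
# How anomalous is the millet-eater's collagen d13C? Values from the post:
site_mean = -19.8   # mean d13C (permil) of the other Kent individuals
site_sd = 0.3       # standard deviation of their d13C ratios
anomaly = -15.2     # SK12671's collagen d13C

# Number of standard deviations SK12671 sits above the site mean
z = (anomaly - site_mean) / site_sd
print(f"{z:.1f} standard deviations above the site mean")
```

With a within-site spread of only 0.3 permil, the individual sits roughly fifteen standard deviations away from his neighbors - far outside any plausible C3-only diet.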

Figure 3 from Pollard et al. showing the anomalous individual (SK12671)
compared with other Romano-British sites and the two anomalous individuals
published in Muldner et al. 2011.

Evidence of C4 plant consumption is surprisingly absent from the archaeological record of the Roman world, even though authors like Pliny note that millet and beans were frequently eaten together by people in rural Italy.  As far as I know, only one bioarchaeological study has been done on skeletons from Italy looking at C4 resource use (Tafuri et al. 2009).  Researchers found evidence of millet consumption in the elevated d13C ratios of people from northern Italy in the Bronze Age compared with people in southern Italy.  Another Romano-British cemetery yielded two individuals with a mixed C3-C4 diet, where carbon isotope values ranged from -16.8 permil to -15.8 permil.  So this new person from Kent provides the highest d13C ratio obtained so far from bone collagen in the Roman period.  Below is a graph of the Bronze Age millet-eaters and the Romano-British people from Pollard and colleagues' study:

Figure 4 from Pollard et al. 2011 comparing Romano-British
samples with Bronze Age north Italian samples

Curiously, Pollard and colleagues didn't look at carbon values from bone apatite, but they did look at the carbon isotope ratio of the dental apatite, which in this individual was -7.2 permil, also significantly higher than the values from the others, which range from -13.8 to -11.5 permil.  This likely means that his C4 resource use was in the form of direct consumption of millet rather than from consuming protein from animals that were foddered on millet.
Finally, they investigated the individual's strontium and oxygen isotope ratios to see if he perhaps immigrated to Britain from an area with more evidence of millet production and consumption, like Italy.  This is where the paper gets interesting - the strontium ratio is .708826 and the oxygen (from carbonate) is 26.1 permil.  These values are within the range of expectation for someone from southern Britain, so the authors could not rule out a local origin for the man.  However, my dissertation work (Killgrove 2010) showed that those values are equally likely to occur in or near Rome - my local strontium range for Rome is .7079-.7102, and the local oxygen range (drawn from Prowse et al. 2007) is 24.9-27.1 permil.  Pollard and colleagues suggest that this man may have come from northern Italy, where growing millet was common, but I am not convinced because his strontium isotope ratio of .7088 is far too low for the older geology of northern Italy, unless he was located near the east coast (and then his oxygen ratio should be lower).  Rome itself is around .7090, and .7088 - if we assume a western Italian origin - is more like Naples.  Granted, it is extraordinarily difficult to pinpoint homeland, and part of this article addresses the problems with identifying immigrants through just Sr and O isotope analyses.  As I have started to write up my Sr/O study for publication, it's something I'm keeping in mind.  Interestingly, the authors suggest that the inclusion of hobnailed boots in this man's burial may signify that he was "walking back" from Britain to his true homeland.
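
The "cannot rule out a local origin" logic above is just a pair of range checks; a minimal sketch using the values quoted in the post (remembering that a value inside a local range can only fail to exclude an origin, never prove it):

```python
# Check whether SK12671's Sr/O values fall inside the "local Rome" ranges
# quoted in the post (Sr range from Killgrove 2010; O range drawn from
# Prowse et al. 2007). Consistency does not prove origin - it only means
# the region cannot be excluded.

sr, o18 = 0.708826, 26.1          # SK12671 (Pollard et al. 2011)
rome_sr = (0.7079, 0.7102)        # local 87Sr/86Sr range for Rome
rome_o = (24.9, 27.1)             # local d18O range (permil, carbonate)

def in_range(value, lo_hi):
    lo, hi = lo_hi
    return lo <= value <= hi

consistent = in_range(sr, rome_sr) and in_range(o18, rome_o)
print(consistent)  # Rome cannot be ruled out as a homeland
```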
But the publication of this article - in AJPA no less - makes me excited because I'm sending off my C/N isotope article tomorrow to the Journal of Archaeological Science.  And in that article, I have a section on individual ET20, a male in his 30s from the site of Castellaccio Europarco, in the Roman suburbs.  ET20 has an astoundingly high d13C ratio: -12.5 permil.  This is on par with the isotope ratio of millet itself, and carbon ratios this high tend only to be found in populations that ate maize (corn).  However, the d13C ratio from ET20's bone apatite is only -8.6 permil, which is not dramatically higher than the rest of the population, suggesting that this individual was consuming his C4 resources in the form of animals who were foddered on millet.  His d15N ratio is 8.3 permil, which is a bit lower than expected from the population, so perhaps he was eating beans along with his millet or millet-fed animals like Pliny suggests.  I did do Sr/O on this individual, and they came back at .709631 and 25.3 permil, respectively.  Both of these are within my admittedly broad "local" range of Rome, but no one else among the locals has such a high d13C ratio.  I suggest in my dissertation (Killgrove 2010) that he may have come from northern Italy - the strontium ratio is higher than expected from Rome, indicating a childhood spent on slightly older geology.  I also found with ET20 that his d13C ratio from enamel apatite was -4.0 permil - so he changed his diet between the time he was born and the time he died at Rome.  Here's a quick graph from the forthcoming paper showing just how far to the right (C4 use) ET20 is in comparison with others from Castellaccio and Casal Bertone (compare with the graphs above, where no one reaches the high carbon value that ET20 does):

From Killgrove & Tykot, n.d.

At any rate, the person that Pollard and colleagues found - along with the two people found by Muldner et al. (2011) - shows that we have a lot left to learn about C4 resource use in the Roman Empire.  Millet may have been considered a substandard grain by many authors, the kind of food that rural or poor people eat, but there is growing evidence that many people were consuming at least a mixed C3-C4 diet and that several people were eating quite a bit of millet or animals foddered on the grain.  Isotopes are letting us tease out differences in diet at the levels of the individual and the population - especially in Rome, as I've blogged about before here.  Although the overall diet mostly tracks with historical and artistic records from the Roman world, the diversity in the lower-class diet is surprising and intriguing, and I think it will eventually be able to tell us more about things like status.  Watch this space for more on the diet of my Romans as I work through the process of submitting and revising my C/N isotope article this week!


Killgrove, K. (2010).  Migration and mobility in Imperial Rome.  PhD dissertation, University of North Carolina at Cha... Read more »

Muldner, G, Chenery, C, & Eckardt, H. (2011) The "headless Romans": multi-isotope investigations of an unusual burial ground from Roman Britain. Journal of Archaeological Science, 280-290.

Pollard AM, Ditchfield P, McCullagh JS, Allen TG, Gibson M, Boston C, Clough S, Marquez-Grant N, & Nicholson RA. (2011) "These boots were made for walking": The isotopic analysis of a C(4) Roman inhumation from Gravesend, Kent, UK. American Journal of Physical Anthropology. PMID: 21959970  

Prowse TL, Schwarcz HP, Garnsey P, Knyf M, Macchiarelli R, & Bondioli L. (2007) Isotopic evidence for age-related immigration to imperial Rome. American Journal of Physical Anthropology, 132(4), 510-9. PMID: 17205550  

  • October 4, 2011
  • 11:28 AM

The Nanny and Aural Sexual Selection

by Samuel Arbesman in

Sexual selection, like many evolutionary concepts, was first anticipated by Charles Darwin and has since been elaborated in great detail. It is a powerful concept, explaining everything from the unwieldy nature of the peacock to the changing curves of Playboy centerfolds over the years. But this is all selection at the visual level. Just as [...]... Read more »

Apicella, C., & Feinberg, D. (2009) Voice pitch alters mate-choice-relevant perception in hunter–gatherers. Proceedings of the Royal Society B: Biological Sciences, 276(1659), 1077-1082. DOI: 10.1098/rspb.2008.1542  

  • October 3, 2011
  • 08:00 AM

Tracing the Trickle-down in Roman Recycling

by Krystal D'Costa in Anthropology in Practice

Citizens of the Ancient World seem to have made a solid go at “going green.” Ongoing research by Harriet Foster and Caroline Jackson (2010) revealed hints of color deriving from previously blown glass in colorless glass, indicating that Romans often reused glass, adding batches of broken vessels into the raw material from which they fashioned [...]

... Read more »

Foster, Harriet and Caroline Jackson. (2010) The Composition of Late Romano-British Colourless Vessel Glass: Glass Production and Consumption. Journal of Archaeological Science, 3068-3080.

Stern, E. (1999) Roman Glassblowing in a Cultural Context. American Journal of Archaeology, 103(3), 441. DOI: 10.2307/506970  

  • October 1, 2011
  • 03:53 PM

Entoptics or Doodles: Children of the Cave

by Cris Campbell in Genealogy of Religion

There was a time when Paleolithic cave paintings were construed primarily through the lens of “art,” an interpretive stance which assumes that at least some Paleolithic peoples were “artists” who painted for pleasure. Because this lens is so subjective (and creative), all manner of interpretations were offered. Whether prosaic or fanciful, this approach raised troubling [...]... Read more »

Lewis-Williams, David, & Dowson, T.A. (1988) The Signs of All Times: Entoptic Phenomena in Upper Palaeolithic Art. Current Anthropology, 29(2), 201-245.

  • October 1, 2011
  • 07:51 AM

The Recession and Death

by Neuroskeptic in Neuroskeptic

The present economic crisis has led to more suicides in Europe - but fewer deaths in road traffic accidents. So says a brief report in The Lancet.

The authors show that suicide rates in people under the age of 65, which had been falling for several years in Europe, rose in 2008 and again in 2009, in line with unemployment figures. The overall effect was fairly small - 2009 was no worse than 2006. It still corresponds to a 5% annual increase in most countries. In Greece, Ireland, and Latvia the rise was about 15%. That's sad but not perhaps very surprising.

What's interesting, though, is that road traffic fatalities fell sharply. In Lithuania they dropped by nearly half, although they were very high to begin with, and in Spain and Ireland they fell by 25%. This presumably reflects the fact that people are just driving less, and perhaps slower. We've got less money to spend on fuel, and fewer jobs and things to need to drive to.

The authors note that although fewer road deaths is generally a good thing, there's one downside - a shortage of donor organs for transplantation. Road accidents are a prime source of organs because they're one of the few times that young, healthy people die leaving most of the body intact.... Read more »

Stuckler D, Basu S, Suhrcke M, Coutts A, & McKee M. (2011) Effects of the 2008 recession on health: a first look at European data. Lancet, 378(9786), 124-5. PMID: 21742166  

  • September 29, 2011
  • 08:19 AM

Mass Grave of Children in Peru

by Katy Meyers Emery in Bones Don't Lie

Earlier this month, archaeologists revealed a large mass grave containing the remains of children and llamas. The grave was found on the coast of Peru, near the ancient Chimú capital of Chan Chan. The 800 year old grave contains the remains … Continue reading →... Read more »

Centurion, Curo, and Klaus. (2010) Bioarchaeology of human sacrifice: violence, identity and the evolution of ritual killing at Cerro Cerrillos, Peru. Antiquity.

  • September 28, 2011
  • 09:42 AM

Where to put Australopithecus sediba?

by Eric in APE

It took me some time to decide what I should do with Australopithecus sediba on this blog. In the end I decided to concentrate on the aspects I at least know a little bit about, one of them being taxonomy.

I had to reconstruct a bunch of phylogenetic trees in the last few months, and I found some free online tools which enabled me to do this without using any fancy (and expensive) computer programs. The only disadvantage of these resources is that they were originally made for molecular data sets. This made my work a little more complicated, since I had to modify my morphological data sets so that these programs could work with them. I won't talk about the exact process right now; instead I want to show you some of the stuff I did with Australopithecus sediba.

First of all, let's have a look at a classic tree which illustrates the phylogenetic relationships within the genus Homo. I took the tree from Strait et al. (1997) for this particular example:

Strait et al. (1997)

There's nothing really special about this tree. Sure, you could discuss whether or not the shown phylogeny represents the true relationships of these fossils, but discussing this always tends to get boring, since you have to look at the characters and discuss the validity of each of them.

To make things a little more interesting, I took the character matrix from Strait et al. and included Australopithecus sediba. The characters for Australopithecus sediba were taken from the initial description of this fossil (Berger et al., 2010). This is the tree you get when you run the modified matrix through an analysis:

Same character matrix, but with A. sediba.

Sediba ruined everything! What in the first tree looked like a nice and clear relationship has now collapsed into something completely unresolved.

To be clear, the taxonomic position of Homo habilis and Homo rudolfensis was never very settled. In fact, the latter species was established because the initial hypodigm (the total sum of all fossils which describe a species) of Homo habilis was so diverse in its morphology that it was split up into two separate species. The "new" species was then called Homo rudolfensis. I won't lay out the exact reasons why this was the case, since it would make this post too long, but I will eventually come back to this topic in another post.

Let's go back to Australopithecus sediba for the moment. It's not only that the fossil practically ruins the common taxonomic picture of the relationships within early Homo; it's also very young. Right now, Australopithecus sediba is dated to about 1.9 million years. This is very young if you keep in mind that there are fossils of Homo habilis and Homo rudolfensis which are much older than 2 million years. There are also possible fossils of Homo ergaster/erectus which are only slightly younger than the sediba fossils. Now add the roughly 1.7-1.8 million year old remains from Dmanisi, Georgia, to this mess and you can see how complicated the whole story starts to look.

Fortunately, the tree I showed you at the beginning of this post isn't completely useless, since it shows that Australopithecus sediba falls somewhere within the relationship of Homo ergaster/erectus, Homo rudolfensis and Homo habilis. So let's have a look at the possible relationships and the possible consequences of each scenario:

Scenario in which A. sediba shares a LCA with the genus Homo

In this scenario, Australopithecus sediba would share a last common ancestor with the genus Homo. The only problem which arises from this tree is that you have to discuss what to do with the Homo rudolfensis and Homo habilis fossils which pre-date the emergence of Australopithecus sediba in the fossil record. All other scenarios basically ruin our contemporary picture of the genus Homo:

Two of the possible relationships if A. sediba were placed somewhere within the genus Homo

No matter which scenario we look at, none of them shows the genus Homo as a monophyletic group. This means that either we have to include Australopithecus sediba within the genus Homo, which I'm not very fond of since it would lead to an even weaker definition of the genus, or we have to exclude Homo habilis and/or Homo rudolfensis from it. The genus Homo would then begin with Homo ergaster/Homo erectus, and everything before that species would fall either inside the genus Australopithecus or in a completely new genus.

Personally, I have no idea what to make of all this. Right now everything seems to contradict itself, and I think we need much more knowledge about this particular period of time. That means, of course, more fossils from this period, but also more research on the already known fossils.

What I think we can safely say right now is that the emergence of the genus Homo didn't happen in a gradualistic fashion where one species slowly evolved into the next. I think what we have here is a series of possibly independent speciation events. This would explain why we have so many species that look similar to one another but overlap in spatial as well as temporal aspects and whose phylogenetic relationships are completely unclear. I have some more thoughts on this matter and will write another post where I go into much more detail. For now, all I can say is that, although Australopithecus sediba completely ruins the contemporary phylogeny, it might help us to really understand what happened back then.

References: ... Read more »
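
Parsimony analyses of the kind run on character matrices like Strait et al.'s score candidate trees by the minimum number of character-state changes they require. A minimal sketch of that counting step (the Fitch small-parsimony algorithm) for a single character on a fixed tree - the topology and 0/1 states below are a toy illustration, not Strait et al.'s actual data:

```python
# Fitch small-parsimony: minimum number of state changes one character
# requires on a fixed rooted binary tree. The taxa, topology, and 0/1
# states below are illustrative only, NOT Strait et al.'s matrix.

def fitch_count(tree, states):
    """tree: nested 2-tuples of taxon names; states: taxon -> character state."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):          # leaf: its observed state set
            return {states[node]}
        left, right = (post(child) for child in node)
        common = left & right
        if common:                         # intersection non-empty: no change
            return common
        changes += 1                       # disjoint sets: one change here
        return left | right

    post(tree)
    return changes

# Hypothetical topology: ((habilis, rudolfensis), (ergaster, sediba))
tree = (("habilis", "rudolfensis"), ("ergaster", "sediba"))
states = {"habilis": 0, "rudolfensis": 0, "ergaster": 1, "sediba": 1}
print(fitch_count(tree, states))  # -> 1: one change separates the two clades
```

A tree-search program repeats this count over every character and every candidate topology, keeping the topologies with the lowest total - which is why one added taxon with a mosaic of states can collapse a previously clean result.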

Berger, L., de Ruiter, D., Churchill, S., Schmid, P., Carlson, K., Dirks, P., & Kibii, J. (2010) Australopithecus sediba: A New Species of Homo-Like Australopith from South Africa. Science, 328(5975), 195-204. DOI: 10.1126/science.1184944  

Strait, D., Grine, F., & Moniz, M. (1997) A reappraisal of early hominid phylogeny. Journal of Human Evolution, 32(1), 17-82. DOI: 10.1006/jhev.1996.0097  

  • September 27, 2011
  • 03:34 PM

The Ways We Talk About Pain

by Krystal D'Costa in Anthropology in Practice

Excerpts from the Personal Journal of Krystal D’Costa [i] Tuesday: I fell. Again. This time it was while getting out of the car. I’m not sure how I managed it. I got my foot caught on the door jamb and tumbled forward. I hit my shin—hard—against the door jamb and I think I tweaked my [...]

... Read more »

Pia Haudrup Christensen. (1999) "It Hurts": Children's Cultural Learning About Everyday Illness. Stichting Ethnofoor, 12(1), 39-52. info:/

  • September 27, 2011
  • 04:01 AM

Schizophrenia And The Developing World Revisited

by Neuroskeptic in Neuroskeptic

A major international study threatens to overturn what we thought we knew about schizophrenia.

People with schizophrenia are more likely to get better if they live in poor countries: that's been known for about 25 years. In the 1980s, a series of pioneering World Health Organization (WHO) studies looked at the prognosis for people diagnosed with schizophrenia around the world. All of the data showed that people in developed countries were less likely to recover than those from poorer areas.

This paradoxical finding sparked no end of debate. What is it about these countries that makes them a better place to get schizophrenia? Patients in richer countries tend to have access to more and "better" psychiatric care, the latest drugs, and so on. Does this mean that those treatments are useless - worse, harmful? That's been the interpretation of some people.

But is it true? Not always, says a new study, W-SOHO. It's out in the British Journal of Psychiatry. The authors compared schizophrenia outcomes in 37 countries. They recruited outpatients who were starting, or changing, antipsychotic medication. They found that in terms of "clinical" remission - i.e. improvement in the delusions, hallucinations, and other symptoms of schizophrenia - people in the developing world did indeed fare better than those from rich countries. Over a 3 year period, 80-85% of patients from East Asia, the Middle East, and Latin America who started off ill showed clinical remission, compared to 60-65% in Europe. That's not new: it confirms what the old WHO data showed.

But the new study also looked at "functional" remission - essentially, being able to participate in society: having good social functioning for a period of 6 months. Good social functioning included those participants who had: (a) a positive occupational/vocational status, i.e. paid or unpaid full- or part-time employment, being an active student in university or housewife; (b) independent living; and (c) active social interactions, i.e. having more than one social contact during the past 4 weeks or having a spouse or partner.

For functional remission, Northern Europe (e.g. the UK, France, Germany) was the best place to get sick, with 35% achieving it. Not a very high figure, but better than elsewhere: it was just 18% in the Middle East and 25% in East Asia, despite these areas having the highest chances of clinical remission. Latin America did pretty well, however, at 29%.

This is a very important finding if it's true. Is it solid? First off, were Northern European patients just less ill to start with? Not really. They had the highest rates of suicide attempts. They tended to be older, and to have been diagnosed at a later age, which was correlated with worse functional remission. Regression analyses confirmed that region was a predictor of remission controlling for all the other variables.

However, Northern European patients did tend to have better function at baseline. They were more likely to be employed, living independently, and socially active when they entered the study. 63% were living independently, which is much higher than anywhere else: it was 24% in the Middle East and Latin America. 23% had a paid job, compared to 17-19% in developing countries. That's not a flaw in the study as such, but it does suggest that the differences, whatever they are, are already in place before people get treated.

One concern I have is that the definition of "functional remission" may be North Europe-centric. "Living independently" is something we aspire to, but in other places, with a strong tradition of the extended family household, the idea that it would be a bad thing for someone with schizophrenia to be living with their family might seem silly. If that means they'll be cared for and supported, what's wrong with it? And in terms of paid employment, Northern Europe just has a stronger economy than most other places (erm... well, it did back in 2000 when these data were collected), so maybe it's no surprise that people with schizophrenia were more likely to have paid jobs.

In terms of the study itself, it was extremely large, with over 17,000 patients enrolled. But here's the thing: this study was run by Lilly, the drug company who make olanzapine, an antipsychotic used in schizophrenia. Three of the authors on the paper are Lilly employees, and the lead author was a consultant for them. The study deliberately sampled lots of people taking olanzapine, presumably in order to find out whether they did better. None of this necessarily means that the data aren't valid, but I'm just not sure I trust Lilly over the WHO.... Read more »
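
The three-part "good social functioning" definition quoted above is essentially a boolean conjunction; a toy sketch of that check (the field names are invented for illustration, and the study's additional six-month duration requirement is omitted):

```python
# W-SOHO-style "good social functioning" check, per the criteria quoted
# above: (a) occupational/vocational status, (b) independent living,
# (c) active social interaction. Field names are my own invention; the
# actual study also requires these to hold for 6 months (omitted here).

def socially_functioning(patient):
    occupied = patient["employed_or_student_or_homemaker"]   # criterion (a)
    independent = patient["lives_independently"]             # criterion (b)
    social = (patient["contacts_past_4_weeks"] > 1           # criterion (c)
              or patient["has_partner"])
    return occupied and independent and social

p = {"employed_or_student_or_homemaker": True,
     "lives_independently": True,
     "contacts_past_4_weeks": 1,
     "has_partner": True}
print(socially_functioning(p))  # -> True: the partner satisfies criterion (c)
```

Writing it out this way makes the North Europe-centric worry concrete: criterion (b) hard-codes one cultural model of adult life, so whole populations can fail the check for reasons unrelated to illness.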

Haro JM, Novick D, Bertsch J, Karagianis J, Dossenbach M, & Jones PB. (2011) Cross-national clinical and functional remission rates: Worldwide Schizophrenia Outpatient Health Outcomes (W-SOHO) study. The British journal of psychiatry : the journal of mental science, 194-201. PMID: 21881098  

  • September 26, 2011
  • 05:00 PM

Revenge of the Fishball: The Magnificent Fish Tapeworm

by Rebecca Kreston in BODY HORRORS

The fish tapeworm Diphyllobothrium latum has reared its narrow scolex head around the world in freshwater fish-eating communities. This article looks at its history, culinary (mis)adventures and global travels. Included is a breathtaking video of the tapeworm in action.... Read more »

  • September 26, 2011
  • 04:55 AM

Stone axes and the Little Ice Age (LIA)

by Umberto in Up and Down in Moxos

What do stone axes have to do with the LIA? In his famous paper entitled "The anthropogenic greenhouse era began thousands of years ago," Ruddiman [2003] put forward a fascinating idea: "CO2 oscillations of ∼10 ppm in the last 1000 years are too large to be explained by external (solar-volcanic) forcing, but they can be explained by outbreaks of bubonic plague that caused historically documented farm abandonment in western Eurasia. Forest regrowth on abandoned farms sequestered enough carbon to account for the observed CO2 decreases. Plague-driven CO2 changes were also a significant causal factor in temperature changes during the Little Ice Age (1300–1900 AD)." There has been a lot of controversy surrounding Ruddiman's paper.

More recently, the idea that plagues caused farmland abandonment and were followed by re-forestation has been applied to the Amazon Basin and the LIA. Several scholars have proposed that the depopulation caused by the diseases that Europeans brought to the Americas after 1492 induced a large scale re-forestation which, in turn, decreased the amount of atmospheric CO2 and contributed to the LIA [Dull et al., 2010; Faust et al., 2006; Nevle and Bird, 2008]. In order to assess the likelihood of this hypothesis we need to know i) population size in pre-Columbian America and ii) the kind of agriculture pre-Columbians practiced.

Citing Denevan, Nevle and Bird [2008] write that "Evidence for the habitation and modification of American landscapes by tens of millions of Pre-Columbian agriculturalists [Denevan, 1992] exists in the widespread distribution of anthropogenic Amazonian Dark Earth soils, raised fields, irrigated terrace zones, roads, aqueducts, and numerous large-scale earthworks distributed throughout Amazonia, the Andes, Central America, and parts of North America." Many of the papers addressing this topic cite Denevan with regards to pre-Columbian population densities and agriculture. So, what are Denevan's views on the matter? I will focus on Amazonia, as it is the largest forested area in the world and most of the work on pre-Columbian population density and agriculture cited to support this hypothesis has been done in Amazonia (for example the works of Denevan himself, Erickson and Heckenberger).

How many people lived in Amazonia in 1491?

The first estimate was given by Betty Meggers, who said that population density in pre-Columbian Amazonia was 0.3 people km-2. She made no distinction between floodplains (varzea) and uplands (terra firme) because the varzea's fertility was offset by unexpected and destructive floods, which made the varzea as unsuitable for people as terra firme.

Denevan then proposed a model in which people settled on the rivers' bluffs. They were able to take advantage of the varzea but avoided the danger of the floods. According to Denevan [1992], population density was 14.6 people km-2 in the varzea and 0.2 people km-2 in the terra firme forests. It is interesting that Denevan's estimate for terra firme is lower than Meggers' estimate. This is important, as terra firme represents 98% of the Amazonian rain forest.

In 2003, Denevan changed his mind and wrote: "For varzea population density would be 10.4 per square kilometer [...] For terra firme forests it is impossible to estimate an average population density and a total population [...] Estimating average population densities for the savannas with any confidence is impossible." Then he concluded: "...consequently I now reject the habitat-density method I used in the past to estimate a Greater Amazonia population in 1492 of from 5.1 to 6.8 million. I nevertheless still believe that a total of at least 5 to 6 million is reasonable" [Denevan, 2003].

The stone axes

Although Denevan has rejected his own estimate of 0.2 people km-2 for terra firme, it is still important to highlight how he justified that his estimate was smaller than Meggers'. Denevan argues that pre-Columbians did not practice slash-and-burn agriculture because they did not have metal tools, and cutting the forest with stone axes would have been too much work. Hence, they preferred to live in savannahs, where they developed raised field agriculture, or on the river bluffs, where Amazonian Dark Earth (ADE) sites are actually found. In Denevan's view, raised fields and ADE developed in order to minimize the need for clearing the forest: pre-Columbians preferred to build raised fields and ADE because this type of agricultural intensification required less work than cutting the forest with stone axes. The very same archaeological evidence that Nevle and Bird [2008] use to infer high rates of pre-Columbian deforestation is used by Denevan to infer that pre-Columbians actually did not cut the forest!

The questions I have should now be clear: 1) Could such a small population of 0.2 people km-2 have significantly modified the Amazon forests? 2) How did they have such an impact if they had to cut the forest with stone axes? 3) Do raised field agriculture and ADE suggest high levels of deforestation? Or is it the other way round?

I don't want to be misinterpreted here; I am not saying that pre-Columbian population was small ort... Read more »
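
The habitat-density method Denevan later rejected is just density × area summed over habitat types; a sketch using the 1992 densities quoted above (the area figures are rough placeholder assumptions of mine, not Denevan's published areas):

```python
# Habitat-density method: total population = sum of (density x area) over
# habitat types. Densities are the Denevan [1992] figures quoted in the
# post; the areas below are rough placeholder ASSUMPTIONS for illustration,
# not Denevan's published values.

densities = {"varzea": 14.6, "terra_firme": 0.2}            # people per km^2
areas_km2 = {"varzea": 100_000, "terra_firme": 5_000_000}   # assumed areas

total = sum(densities[h] * areas_km2[h] for h in densities)
print(f"{total / 1e6:.1f} million people")
```

The sensitivity is the whole point of the later retraction: with terra firme covering the vast majority of the basin, the total is dominated by a density figure (0.2 km-2) that Denevan himself came to regard as unestimable.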

  • September 25, 2011
  • 07:07 PM

Pictures worth thousands of words and dollars

by zacharoo in Lawn Chair Anthropology

Looking into subdural empyema, which is a meningeal infection you don't want, I stumbled upon a study from the roaring 1970s - the glorious Nixon-Ford-Carter years - using computerized axial tomography (hence, CAT scan) to visualize lesions within the skull (Claveria et al. 1976). Nowadays people refer to various similar scanning techniques simply as "CT" (for computed tomography, though this is not exactly the same as magnetic resonance imaging, MRI).It's pretty amazing how medical imaging has advanced in the 35 years since this study. For example, to the right is a CAT scan from Claveria et al. (1976, Fig. 4). These are transverse images ("slices") through the brain case, the top of the images corresponding to the front of the face. You can discern the low-density (darker) brain from the higher density (lighter) bone - the sphenoid lesser wings and dorsum sellae, and petrous pyramids of the temporal bones are especially prominent in the top left image. In the bottom two images you can see a large, round abscess in the middle cranial fossa. Whoa.What makes this medical imaging technique so great is that it allows a view inside of things without having to dissect into them. Of course, the downside is that it relies on radiation, so ethically you can't be so cavalier as to CT scan just any living thing. If I'd been alive in 1976, CAT scanning would've blown my mind. Still, the image quality isn't super great here, there's not good resolution between materials of different densities, hence the grainy images.But since then, some really smart people have been hard at work to come up with new ways to get better resolution from computerized tomography scans, and the results are pretty amazing. To the left is a slice from a synchrotron CT scan of the MH1 Australopithecus sediba skull (Carlson et al. 2011, Supporting on line material, Fig. S10). You're basically seeing the fossil face-to-face ... if someone had cut of the first few centimeters of the fossil's face. 
Just like the movie Face/Off.

Quite a difference from the image above. Here, we can distinguish fossilized bone from the rocky matrix filling in the orbit, brain case, and sinuses. Synchrotron imaging even distinguishes molar tooth enamel from the underlying dentin (see the square). The post-mortem distortion to the (camera right) orbit is clear. It also looks as though the hard palate is thick and filled with trabecular bone, as is characteristic of robust Australopithecus (McCollum 1999). Interesting...

Even more remarkable, the actual histological structure of bone can be imaged with synchrotron imaging. Mature cortical bone is composed of small osteons (or Haversian systems) that house bone cells and transmit blood vessels to help keep bone alive and healthy. Osteons are very tiny - submillimetric. To the right is a 3D reconstruction of an osteon and blood vessels from synchrotron images (Cooper et al. 2011). The scale bar in the bottom right is 250 micrometers. MICROmeters! Note that the scan can distinguish the Haversian canal (red part in B-C) from vessels (white part in B). Insane!

Not only has image quality improved over the past few decades, but CT scanning is being applied outside the field of medicine for which it was developed; it's becoming quite popular in anthropology. What I'd like to do, personally, with such imaging is see whether it can be used to study bone morphogenesis - whether it can distinguish bone deposition from resorption, and show how these growth fields are distributed across a bone during ontogeny. This could allow the study of the proximate, cellular causes of skeletal form: how form arises through growth and development. If it could be applied to fossils, then we could potentially even see how these growth fields are altered over the course of evolution: how form evolves.

References

Carlson KJ, Stout D, Jashashvili T, de Ruiter DJ, Tafforeau P, Carlson K, & Berger LR (2011). The endocast of MH1, Australopithecus sediba. Science, 333(6048), 1402-7. PMID: 21903804

Claveria, L., Boulay, G., & Moseley, I. (1976). Intracranial infections: investigation by computerized axial tomography. Neuroradiology, 12(2), 59-71. DOI: 10.1007/BF00333121

Cooper, D., Erickson, B., Peele, A., Hannah, K., Thomas, C., & Clement, J. (2011). Visualization of 3D osteon morphology by synchrotron radiation micro-CT. Journal of Anatomy, 219(4), 481-489. DOI: 10.1111/j.1469-7580.2011.01398.x

McCollum, M. (1999). The robust australopithecine face: a morphogenetic perspective. Science, 284(5412), 301-305. DOI: 10.1126/science.284.5412.301

... Read more »

Carlson KJ, Stout D, Jashashvili T, de Ruiter DJ, Tafforeau P, Carlson K, & Berger LR. (2011) The endocast of MH1, Australopithecus sediba. Science (New York, N.Y.), 333(6048), 1402-7. PMID: 21903804  

Cooper, D., Erickson, B., Peele, A., Hannah, K., Thomas, C., & Clement, J. (2011) Visualization of 3D osteon morphology by synchrotron radiation micro-CT. Journal of Anatomy, 219(4), 481-489. DOI: 10.1111/j.1469-7580.2011.01398.x  

  • September 24, 2011
  • 01:09 PM

Etruscan Rite & Roman Religion

by Cris Campbell in Genealogy of Religion

“Man is born free, and everywhere he is in chains.”
With this famous sentence, Jean-Jacques Rousseau begins his masterful critique of political power. Less well known is another sentence from The Social Contract (1762): “No State has ever been founded without Religion serving as its base.”
My reading of history is that Rousseau was right. State-formation [...]... Read more »

Briquel, Dominique. (2007) Tages Against Jesus: Etruscan Religion in Late Roman Empire. Etruscan Studies, 10(1), 153-161. info:/

  • September 22, 2011
  • 12:09 PM

The ritual boundaries of household worship

by Nikolaos Markoulakis in Tropaion

In between two columns that represent
the interior of an oikos
stands a woman, reaching to an altar;
a wreath is hung on the wall behind her.
A scene comparable with Menander's
description of a domestic ritual boundary.
Musée du Louvre CA 1857. © Perseus 1992
In a number of brief posts over the past six years, I have discussed elements of the commonly known 'household' worship, supported with

... Read more »

John Pedley. (2005) Sanctuaries and the Sacred in the Ancient Greek World. Cambridge University Press. info:/10.2277/052100635X

  • September 21, 2011
  • 02:09 PM

Consciousness, Dreams & The Supernatural

by Cris Campbell in Genealogy of Religion

The notion of binaries or opposites is deeply entrenched in Western culture and thought. Although it seems perfectly natural to perceive and categorize the world in terms of dichotomies (black-white, either-or), what seems natural is actually learned. Our teacher in this regard is Aristotle, who was so impressed by the Pythagorean Table of Opposites that [...]... Read more »

  • September 21, 2011
  • 11:12 AM

Are You Yawning Because Your Brain's Hot?

by Elizabeth Preston in Inkfish

Everyone knows yawning is the pinkeye of social cues: powerfully contagious and not that attractive. Yet scientists aren't sure what the point of it is. Is yawning a form of communication that evolved to send some message to our companions? Or is the basis of yawning physiological, and its social contagiousness unrelated? A new paper suggests that yawning--even when triggered by seeing another person yawn--is meant to cool down overheated brains.

We're not the only species that feels compelled to yawn when we see others doing it. Other primates, and possibly dogs, have been observed catching a case of the yawns. But Princeton researcher Andrew Gallup thinks the root cause of yawning is in the body, not the mind. After all, we yawn when we're alone, not just when we're with other people.

Previously, Gallup worked on a study that involved sticking tiny thermometers into the brains of rats and waiting for them to yawn. The researchers observed that yawning and stretching came after a rapid temperature rise in the frontal cortex. After the yawn and the stretch, rats' brain temperatures dropped back to normal. The authors speculated that yawning cools the blood off (by taking in a large amount of air from outside the body) and increases blood flow, thereby bringing cooler blood to the brain.

If yawning's function is to cool the brain, Gallup reasoned, then people should yawn less often when they're in a hot environment. If the air outside you is the same temperature as your body, it won't make you less hot.

To test that theory, researchers went out into the field--namely, the sidewalks of Tucson, Arizona--in both the winter and the summer. They recruited subjects walking down the street (80 people in each season) and asked them to look at pictures of people yawning. Then the subjects answered questions about whether they yawned while looking at the pictures, how much sleep they'd gotten the night before, and how long they'd been outside.

The researchers found that the main variable affecting whether people yawned was the season. It's worth noting that "winter" in Tucson was a balmy 22 degrees Celsius (71 degrees Fahrenheit), while summer was right around body temperature. In the summer, 24% of subjects reported yawning while they looked at the pictures; in the winter, that number went up to 45%.
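For a rough sense of whether that seasonal gap could be chance, here's a quick two-proportion z-test. This is my own back-of-the-envelope check, not a calculation from the paper; the counts are reconstructed from the reported percentages and the 80 subjects per season:

```python
from math import sqrt, erf

def two_prop_ztest(k1, n1, k2, n2):
    """Two-sided two-proportion z-test: did the groups yawn at different rates?"""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)                  # pooled yawning rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# ~24% of 80 summer subjects vs. ~45% of 80 winter subjects reported yawning
z, p_value = two_prop_ztest(19, 80, 36, 80)
```

On these reconstructed counts, z comes out near 2.8 with p well under 0.05, so the seasonal difference is unlikely to be sampling noise - consistent with the paper's conclusion.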

Additionally, the longer people had been outside in the summer heat, the less likely they were to yawn. But in the winter, the opposite was true: People were more likely to yawn after spending more time outside. Gallup speculates that because the testing took place in direct sunlight, subjects' bodies were heating up, even though the air around them remained cooler. So a yawn became more refreshing to the brain the longer subjects stood outside in the winter, but only got less refreshing as they sweltered in the summer.

The study used contagious yawning rather than spontaneous yawning, presumably because it's easier to hand subjects pictures of yawning people than to aggressively bore them. Gallup notes that contagious and spontaneous yawning are physically identical ("a stretching of the jaw and a deep inhalation of air," if you were wondering), so one can stand in for the other. Still, it would be informative to study people in a more controlled setting--in a lab rather than on the street, and preferably not aware that they're part of a yawning study.

A lab experiment would also allow researchers to directly observe whether their subjects yawned, rather than just asking them. In the field, researchers walked away while subjects were looking at the pictures, since people who know they're being watched are less likely to yawn. But self-reported results might not be accurate. The paper points out that "four participants in the winter condition did not report yawning during the experiment but yawned while handing in the survey to the experimenter."

Still, it seems there's a real connection between brain temperature and yawning. It will take more research (and more helplessly yawning subjects) to elucidate exactly what that connection is. Even if brain temperature always rises right before a yawn and falls afterward, cooling the brain might not be the point of the yawn--another factor could be causing the impulse to yawn, with the temperature changes as a side effect. Studying subjects in a truly cold environment, and showing that they are once again less likely to yawn (because outside air would cool their brains too much), would provide another piece of evidence that temperature triggers the yawn in the first place.

None of this tells us why yawning is so catching, though. Personally, I think I yawned at least a thousand times while reading and writing about this paper. Maybe I should have taken some advice from an older study by Andrew Gallup, which found that you can inhibit yawning by breathing through your nose or putting something chilly on your forehead.

Photo: Wikipedia/National Media Museum

Andrew C. Gallup, & Omar Tonsi Eldakar (2011). Contagious yawning and seasonal climate variation. Frontiers in Evolutionary Neuroscience

... Read more »

Andrew C. Gallup, & Omar Tonsi Eldakar. (2011) Contagious yawning and seasonal climate variation. Frontiers in Evolutionary Neuroscience. info:/

  • September 21, 2011
  • 02:25 AM

Antidepressants In The UK

by Neuroskeptic in Neuroskeptic

Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember. Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.

But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA there is good evidence that more people have started taking antidepressants over the past 15 years.

In the UK, however, it's a bit trickier. Antidepressant prescriptions have certainly risen. However, a large 2009 study revealed that, between 1993 and 2005, there was no significant rise in people starting on antidepressants for depression. Rather, the rise in prescriptions was caused by patients getting more prescriptions each: the same number of users were using more antidepressants.

Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different set of data. Pauline Lockhart and Bruce Guthrie looked at pharmacy records of drugs actually dispensed, not just prescribed, and their data cover only a specific region, Tayside in Scotland. The 2009 study was nationwide.

So what happened? The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient. However, unlike the old study, this one found an increase in the number of people who used antidepressants each year: it went up from 8% of the population in 1995 to 13% in 2007 - an extremely high figure, higher even than in the USA. In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold.

The increase was seen across men and women of all ages and social classes. There's no good evidence of an increase in mental illness in Britain over this period, by the way.

But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't among people with depression. The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - anxiety, insomnia, pain, premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.

Lockhart, P. and Guthrie, B. (2011). Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice.

... Read more »
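The arithmetic behind that threefold figure decomposes neatly. This is my own back-of-the-envelope calculation from the numbers quoted in the post, not a figure from the paper itself:

```python
# Total volume dispensed = (fraction of population using) * (items per user).
users_1995, users_2007 = 0.08, 0.13        # share of population on antidepressants
total_growth = 3.0                         # the reported "threefold increase" in volume

user_growth = users_2007 / users_1995      # 1.625x: more people taking them
per_user_growth = total_growth / user_growth   # ~1.85x: more items per user
```

So of the threefold rise, roughly 1.6x comes from more users and the remaining ~1.85x from each user getting more, matching the paper's two findings.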

Lockhart, P. and Guthrie, B. (2011) Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice. info:/

  • September 19, 2011
  • 04:40 PM

Archaeology of the Undead

by Kristina Killgrove in Powered By Osteons

Lots of press has been given in the past week to two late 7th to early 9th century burials found at the site of Kilteasheen in Ireland.  According to the news reports and the documentary (which won't air in the U.S. until 2012, but which you can see on YouTube... for now), archaeologists excavating at the site from 2005-2009 uncovered over 130 graves.  Two of them - both males - were buried with stones in their mouths, and one of the men also had a large stone on top of his torso.  Aside from a 2008 report of a 4,000-year-old burial, these two early 8th century Irish burials seem to be the oldest evidence of what may be the practice of preventing "revenants" (zombies, vampires, and other undead people) from returning to the land of the living.

8th century male burial from Kilteasheen, Ireland,
with large stone on and under torso
(screencap from documentary)
Both Dorothy King (PhDiva) and Michelle Ziegler (Contagions) have already blogged about this.  Dorothy points out some of the other evidence for "vampire" burials in Europe, such as the 10th-11th century cemetery of Celakovice near Prague that held a dozen people who were buried oddly (as with rocks in their mouths) and the so-called Vampire of Venice, a 60-year-old woman from a 1576 plague cemetery in Italy, who was buried with a large rock in her mouth (Nuzzolese and Borrini 2010).
The tradition of weighting down or otherwise defiling corpses (as with nails through the temple and stakes through the heart) seems to be a long one in Europe, born out of a fear of the dead that was related to the rise of Christianity, the lack of understanding of germ theory, and the increase in epidemic diseases.
There weren't, for example, vampires in Rome. The Romans actually had ongoing relationships with the dead, running pipes from the ground to the grave below in order to offer them food and drink and celebrating them at least once a year in the Parentalia.  The Judeo-Christian idea that the dead should go into the ground and stay there means that deviations from this practice - as hair and nails seemed to grow after death, for example - probably caused a lot of general freaking out.  But the simple introduction of monotheism may also have caused cultural stress, particularly in 7th century England, when kings were converting to Christianity and people were no longer sure what to believe.
Michelle points out that Ireland suffered through two major epidemics of bubonic plague, in 664 and 683, followed by a massive famine in 700.  Based on the C14 dates reported in the documentary, it's possible these two burials date as early as the late 7th century.  Rocks in or on the body of the deceased may have been meant to pin the person into the grave to prevent that person from rising or coming back, or may have been placed there because the mouth was where the soul escaped from.  But rocks may also have been important in the mitigation of disease.  Many of the archaeological examples of skeletons with mouth-rocks are assumed to have come from plague cemeteries. Some of the symptoms of bubonic plague are delirium, heavy breathing, and continuous blood-vomiting. People knew that plague could spread but didn't understand how, so blocking a person's mouth may have been an attempt to prevent the spread of the disease. The sight of a terminally ill person coughing up blood could even have been the catalyst for the invention of vampires, as a cultural explanation for disease before the advent of germ theory.

The Kilteasheen burials are likely too late to be plague-related, but even a small urban center could have had endemic tuberculosis, which causes some similar symptoms, like bloody sputum.  I don't think a disease-based explanation can be completely ruled out for these burials.

8th century burial of a male, Kilteasheen, Ireland,
with stone in the mouth
(credit: Chris Read, found at
In sum, we can't be certain of the meaning of these Irish burials, but the long tradition of incapacitating the dead to prevent them from becoming revenants coupled with historical records of disease epidemics suggests the people who buried these men likely had a good reason for wanting them to stay dead.
Excavators at Kilteasheen estimate that there are around 3,000 burials at the site, so I suspect we'll be hearing more about this cemetery in the years to come.  It will be interesting in particular to see if other burials in the cemetery were given the same mouth-rock treatment and whether the practice dates only to the 8th century or continues to later periods of the cemetery's use.

Watch the documentary, Mysteries of the Vampire Skeletons, on YouTube:
Part 1
Part 2
Part 3

Further Reading and References:

McLeod, J. 2010.  Vampires, a Bite-Sized History.  Pier 9.  [Google Books]

Nuzzolese E, & Borrini M. 2010. Forensic approach to an archaeological casework of "vampire" skeletal remains in Venice: odontological and anthropological prospectus. Journal of Forensic Sciences, 55 (6), 1634-7. PMID: 20707834.

Rickels, L.  1999.  The Vampire Lectures.  University of Minnesota Press. [Google Books]

Tsaliki, A. 2001. Vampires beyond legend - a bioarchaeological approach.  In Proceedings of the XIII European Meeting of the Paleopathology Association, ed. M. La Verghetta and L. Capasso, pp. 295-300.  [Read here]

Tsaliki, A. 2008.... Read more »

  • September 18, 2011
  • 09:06 PM

[insert clever quip about australopithecus hips]

by zacharoo in Lawn Chair Anthropology

A week and a half ago, Kibii and colleagues (2011) published reconstructions and re-analyses of two hips belonging to the 1.98-million-year-old Australopithecus sediba. As with many fossil discoveries, these additions to the fossil record raise more questions than they answer. Unless the question was, "did A. sediba have a pelvis?" It did. Here's a good summary from the paper itself:

Thus, Au. sediba is australopith-like in having a long superior pubic ramus and an anteriorly positioned and indistinctly developed iliac pillar...[and] Homo-like in having vertically oriented and sigmoid shaped iliac blades, more robust ilia, and a narrow tuberoacetabular sulcus...and the pubic body is upwardly rotated as in Homo. (p. 1410, emphases mine)

So far as I can tell, the main way the hips are 'advanced' toward a more human-like condition is that the iliac blades are more upright and sweep forward more than in earlier known hominid hips. Here's figure 2 from the paper (more sweet pics of the fossils are available here). NB that in both A. sediba hips much of the upper portions of the iliac blades are missing (reconstructed in white; this region is missing in lots of fossils), so it's possible they were more flaring, like the australopith in the center photo.

The authors' bottom-line, take-home point is that the A. sediba pelvis has features traditionally associated with large-brained Homo - but belonged to a small-brained species (based solely on the ~430 cc MH1 endocast). They argue that this means many of these unique pelvic features did not evolve in the context of birthing large-brained babies, as has often been thought. They state that these features are thus "most parsimoniously attributed to altered biomechanical demands on the pelvis in locomotion," and suggest that this hypothetical locomotion was mostly bipedalism but with a good degree of climbing. Maybe, maybe not. This interpretation is consistent with the analysis of the A. sediba foot/ankle (Zipfel et al. 2011).

The weird mix of ancient (australopith-like) and newer (Homo-like) pelvic features in A. sediba really raises the question of how australopithecines moved around. More intriguing is that the A. sediba pelvis has different Homo-like features than the ~1-million-year-old Busidima pelvis (Simpson et al. 2008), which has been attributed to Homo erectus (largely in aspects of the iliac blades). This raises the question of whether A. sediba is really pertinent to the origins of the genus Homo, and whether the Busidima pelvis belongs to Homo erectus or a late-surviving robust australopith (e.g. boisei; Ruff 2010).

Also interesting is that the subpubic angle (in the pic above, the upside-down "V" created by the pubic bones just above the red labels) is pretty low in MH2. This is curious because modern human males and females differ in how large this angle is - females tend to have a large angle, which contributes to an enlarged birth canal, whereas males have a low angle like MH2's. But MH2 is considered female based on skeletal and dental size. This raises the additional questions of whether human-like sexual dimorphism had not yet evolved in hominids prior to 1.9 million years ago, and whether the sex of MH2 was accurately described.

Finally, though the authors did a great job comparing this pelvis with those from other hominids, I think a major, more comprehensive comparative review of hominid pelves is in order. How does the older A. afarensis hip from Woranso (Haile-Selassie et al. 2010) inform australopithecine pelvic evolution? What about the possibly-contemporary-maybe-later hip from the nearby site of Drimolen (Gommery et al. 2002)? Given the subadult status of the MH1 individual, it would be interesting to compare with the WT 15000 Homo erectus fossils, or with A. africanus subadults from Makapansgat, to examine the evolution of pelvic growth.

Lots of interesting questions arise from these fascinating new fossils. "The more you know," right?

References

Gommery, D. (2002). A fragmentary pelvis of Paranthropus robustus of the Plio-Pleistocene site of Drimolen (Republic of South Africa). Geobios, 35(2), 265-281. DOI: 10.1016/S0016-6995(02)00022-0

Haile-Selassie Y, Latimer BM, Alene M, Deino AL, Gibert L, Melillo SM, Saylor BZ, Scott GR, & Lovejoy CO (2010). An early Australopithecus afarensis postcranium from Woranso-Mille, Ethiopia. Proceedings of the National Academy of Sciences of the United States of America, 107(27), 12121-6. PMID: 20566837

Kibii, J., Churchill, S., Schmid, P., Carlson, K., Reed, N., de Ruiter, D., & Berger, L. (2011). A partial pelvis of Australopithecus sediba. Science, 333(6048), 1407-1411. DOI: 10.1126/science.1202521

Ruff, C. (2010). Body size and body shape in early hominins - implications of the Gona pelvis. Journal of Human Evolution, 58(2), 166-178. DOI: 10.1016/j.jhevol.2009.10.003

... Read more »

Haile-Selassie Y, Latimer BM, Alene M, Deino AL, Gibert L, Melillo SM, Saylor BZ, Scott GR, & Lovejoy CO. (2010) An early Australopithecus afarensis postcranium from Woranso-Mille, Ethiopia. Proceedings of the National Academy of Sciences of the United States of America, 107(27), 12121-6. PMID: 20566837  

Kibii, J., Churchill, S., Schmid, P., Carlson, K., Reed, N., de Ruiter, D., & Berger, L. (2011) A Partial Pelvis of Australopithecus sediba. Science, 333(6048), 1407-1411. DOI: 10.1126/science.1202521  

Simpson, S., Quade, J., Levin, N., Butler, R., Dupont-Nivet, G., Everett, M., & Semaw, S. (2008) A Female Homo erectus Pelvis from Gona, Ethiopia. Science, 322(5904), 1089-1092. DOI: 10.1126/science.1163592  

Zipfel, B., DeSilva, J., Kidd, R., Carlson, K., Churchill, S., & Berger, L. (2011) The Foot and Ankle of Australopithecus sediba. Science, 333(6048), 1417-1420. DOI: 10.1126/science.1202703  

  • September 16, 2011
  • 03:15 PM

Evolved for Arrogance

by Elizabeth Preston in Inkfish

Why does nature allow us to lie to ourselves? Humans are consistently and bafflingly overconfident. We consider ourselves more skilled, more in control, and less vulnerable to danger than we really are. You might expect evolution to have weeded out the brawl-starters and the X-Gamers from the gene pool and left our species with a firmer grasp of our own abilities. Yet our arrogance persists.

In a new paper published in Nature, two political scientists say they've figured out the reason. There's no mystery, they say; it's simple math.

The researchers created an evolutionary model in which individuals compete for resources. Every individual has an inherent capability, or strength, that simply represents how likely he or she is to win in a conflict. If an individual seizes a resource, the individual gains fitness. If two individuals try to claim the same resource, they will both pay a cost for fighting, but the stronger individual will win and get the resource.

Of course, if everyone knew exactly how likely they were to win in a fight, there would be no point in fighting. The weaker individual would always hand over the lunch money or drop out of the race, and everyone would go peacefully on their way. But in the model, as in life, there is uncertainty. Individuals decide whether a resource is worth fighting for based on their perception of their opponents' strength, as well as their perception of their own strength. Both are subject to error. Some individuals in the model are consistently overconfident, overestimating their capability, while others are underconfident, and a few are actually correct.
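The decision rule described above can be sketched as a toy simulation. This is my own simplified version for illustration, not the authors' actual model - the function names, distributions, and parameter values are all invented:

```python
import random

def claims(self_true, self_bias, opp_true, err_sd=1.0):
    """Claim the resource iff perceived own strength (truth plus a
    systematic confidence bias) beats a noisy estimate of the opponent."""
    return self_true + self_bias >= opp_true + random.gauss(0.0, err_sd)

def interact(a_true, a_bias, b_true, b_bias, benefit, cost):
    """Return a's payoff from one pairwise contest over a resource."""
    a_wants = claims(a_true, a_bias, b_true)
    b_wants = claims(b_true, b_bias, a_true)
    if a_wants and b_wants:        # both fight: both pay the cost, stronger wins
        return (benefit if a_true > b_true else 0.0) - cost
    if a_wants:
        return benefit             # unopposed claim
    return 0.0                     # a backed down

def mean_payoff(bias, benefit, cost, trials=20000):
    """Average payoff of a focal type with the given bias against unbiased
    opponents; true strengths drawn from a standard normal."""
    total = 0.0
    for _ in range(trials):
        me, other = random.gauss(0, 1), random.gauss(0, 1)
        total += interact(me, bias, other, 0.0, benefit, cost)
    return total / trials

random.seed(42)
cheap_fight = mean_payoff(bias=1.0, benefit=3.0, cost=1.0)   # overconfident type
cheap_timid = mean_payoff(bias=-1.0, benefit=3.0, cost=1.0)  # underconfident type
```

With benefits well above costs, the overconfident type averages a higher payoff; flip the ratio (say benefit=1.0, cost=3.0) and the cautious type does better instead - the same qualitative pattern the paper's simulations show across cost/benefit regimes.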

Using their model, the researchers ran many thousands of computer simulations that showed populations evolving over time. They found that their numerically motivated populations, beginning with individuals of various confidence levels, eventually reached a balance. What that balance was, though, depended on their circumstances.

When the ratio of benefits to costs was high--that is, when resources were very valuable and conflict was not too costly--the entire population became overconfident. As long as there was any degree of uncertainty in how individuals perceived each other's strength, it was beneficial for everyone to overvalue themselves.

At medium cost/benefit ratios, where either costs or benefits somewhat outweighed the other, the computerized populations reached a stable mix of overconfident and underconfident individuals. Neither strategy won out; both types of people persisted. In general, the more uncertainty was built into the model, the more extreme individuals' overconfidence or underconfidence became.

When the cost of conflict was high compared with the benefit of gaining a resource, the entire population became underconfident. Without having much to gain from conflict, individuals opted to avoid it.

The authors speculate that humans' tendency toward overconfidence may have evolved because of a high benefit-to-cost ratio in our past. If the resources available to us were valuable enough, and the price of conflict was low enough, our ancestors would have been predicted to evolve a bias toward overconfidence.

Additionally, our level of confidence doesn't need to wait for evolution to change; we can learn from each other and spread attitudes rapidly through a region or culture. The researchers call out bankers and traders, sports teams, armies, and entire nations for learned overconfidence.

Though our species' arrogance may have been evolutionarily helpful, the authors say, the stakes are higher today. We're not throwing stones and spears at each other; we have large-scale conflicts and large-scale weapons. In areas where we feel especially uncertain, we may be even more prone to grandiosity, like the overconfident individuals in the model who gained more confidence when they had less information to go on. When it comes to negotiating with foreign leaders, anticipating natural disasters, or taking a stand on climate change, brains that have evolved for self-confidence could get us in over our heads.

Johnson, D., & Fowler, J. (2011). The evolution of overconfidence Nature, 477 (7364), 317-320 DOI: 10.1038/nature10384

... Read more »

Johnson, D., & Fowler, J. (2011) The evolution of overconfidence. Nature, 477(7364), 317-320. DOI: 10.1038/nature10384  
