Is hindsight really 20/20? When we look at the past, we tend to imagine things as we wish they were rather than recall them as they actually were—nostalgia can be problematic. Romanticizing the past has given rise to ideas like the “Ruined Landscape” or “Lost Eden” theory, which paint pristine images of the past and argue that human activity is largely to blame for the overall degradation of landscapes. There is no denying that humans have had a lasting impact on the environment; however, biologist Jacques Blondel (2006) suggests that these ideas overlook the ways human activity has actually contributed to the maintenance, diversity, and embellishment of landscapes (714). Blondel acknowledges a middle ground between these ideas when it comes to the relationship we have with our landscapes—a balance must exist between resistance and resilience, between disturbance and recovery. While Blondel focuses his discussion largely on the Mediterranean, perhaps these ideas can also be applied to our own local landscapes, and help us understand how biodiversity can evolve under these circumstances.
One of the first significant marks—disturbance effects—by humans on their landscapes was likely their role in the disappearance of large land mammals:

(T)here is much evidence of the direct responsibility of humans in the extinction of the ‘megafauna’ of Mediterranean islands. These island faunas included strange mammal assemblages with dwarf hippos and elephants the size of pigs. Archaeological sites in Cyprus indicate that human colonization began as early as 10,500 years ago and was soon followed by the rapid decimation of these mammals (Blondel 2006: 716).

The loss of these animals had a direct consequence for the local environment: forests and other vegetation could grow unchecked, and previously open landscapes could be reclaimed over time. However, the rise of permanent settlements meant that such landscapes were not left to natural forces. The disturbance effect was compounded as humans managed these resources in the absence of large grazing animals, which of course leads us to the most “obvious consequence of human action”: deforestation.
Fossils of large mammals on display at AMNH.
At the time of Blondel’s writing, forests covered 9.4% of the Mediterranean Basin [pdf]—about 15% of the original vegetation. The rest of the land, once believed to be so thick with vegetation that a “monkey could have travelled from Spain to Turkey almost without leaving the canopy,” has been “redesigned” via human intervention from about 10,000 years ago, when Near and Middle East hunters began to establish permanent settlements and produce their own food supply (2006: 714-715). They employed a formulaic approach to creating sustainable agro-silvo-pastoral ecosystems:

Forest management through wood-cutting and coppicing, controlled burning, plant domestication, livestock husbandry, grazing and browsing, as well as water management and terracing have been for centuries the main tools for producing intermediate disturbance regimes (Blondel 2006: 716).

For example, as already noted, early humans may have contributed to the demise of large grazing mammals; however, the open spaces these animals created by their behavior became important and necessary as humans became more settled and involved in their local environment. Fire cycles became an important means of maintaining open spaces and mosaic landscapes [pdf].
This method of clearing and shaping the landscape seems harsh, and it does require a steady turnover for plant and animal communities, but these sorts of changes were partially compensated for by intraspecific and interspecific adaptation in response to the changes in habitat:

Cultivated plants in the Mediterranean Basin from the Neolithic onward included grain crops, fodder plants, oil-producing plants, fruit crops, vegetables, and a vast range of condiments, dyes, and tanning agents. The remarkable combination of protein-rich pulses and cereals that were domesticated in Neolithic farming villages of the Fertile Crescent, along with domesticated sheep, goats, cattle, and in some cases pigs, appears to have facilitated the rapid spread of herding and farming economies throughout the rest of the Old World. The perennial plant alfalfa, a source of fodder and green manure, apparently was also domesticated in the Middle East between 6,000 and 8,000 years ago and soon carried to all parts of the basin (Blondel 2006: 717).

In addition, ungulates (hoofed animals) were domesticated, dogs took on a greater importance in providing protection, and wild boars found a niche in consuming the edible refuse produced by humans. Sheep and goats became major grazers and browsers in the area, and their popularity gave rise to several local varieties. Domesticated animals provided meat, milk, wool, opportunities for the production of tools and clothing, and assistance with labor. The picture that emerges is one of a dialogue of sorts between the landscape and its human inhabitants, where changes build upon one another:

The long-term accumulation of local differentiation during glacial times and subsequent human-induced selection processes together have resulted in the development of more than 145 varieties of domesticated bovids and 49 varieties of sheep.
Over the centuries, hundreds of varieties of olives, almond, wheat, and grape, which have been selected intensively by humans, have also added to the biological diversity of the Mediterranean … human influence on population undoubtedly constituted a significant selective factor in their evolution through the process of domestication (Blondel 2006: 718).
The Mediterranean Basin showing the range of olive trees.
Blondel’s point is not to minimize or overlook the devastating effects of human disturbance events, but to draw attention to the ways in which our environment can react to us. The unique topography of the Mediterranean, coupled with human efforts, was a factor in the evolution of plants and animals. However, that is not to say there is no limit to the resilience of the region. Resilience—the ability of the landscape to redefine itself in terms of the changes that are occurring—is possible only as long as lasting damage is avoided:

Gradual changes in land use practices, in humans’ use of chemicals or other factors, might have little effect until a threshold is reached beyond which restructuring occurs, restructuring that can be difficult to reverse (Blondel 2006: 727).

Essentially, ecosystems tend toward stability, cycling through positive and negative feedback from human activity. That stability is permanently threatened, resulting in badlands, once humans adopt industrial-style agriculture and begin to use fertilizers and pesticides.
Blondel, J. (2006) The ‘Design’ of Mediterranean Landscapes: A Millennial Story of Humans and Ecological Systems during the Historic Period. Human Ecology, 34(5), 713-729. DOI: 10.1007/s10745-006-9030-4
In 1527 an expedition led by the Spanish nobleman Pánfilo de Narváez left Spain with the intention of conquering and colonizing Florida. Accompanying the expedition as treasurer was Álvar Núñez Cabeza de Vaca, who ended up being one of a handful of survivors of the disastrous expedition. Cabeza de Vaca later wrote an account of [...]
Epstein, J. (1991) Cabeza de Vaca and the Sixteenth-Century Copper Trade in Northern Mexico. American Antiquity, 56(3), 474. DOI: 10.2307/280896
I polled my Twitter followers recently to find out what they wanted me to cover, and heard back a resounding "CONTRACEPTIVES!" So first I am going to re-post a series I wrote on my lab blog in July of 2009, with significant editing and updating. I think after these reposts I'll have a better idea of where it would make sense for me to contribute more, if at all. This is post two of five. Part one can be found here.

What is a normal menstrual cycle? Are you normal? Am I? Women spend a lot of time worrying about this, and most of them seem to assume they fall outside of normal. So, I'd like to spend a little time unpacking the concept of normal reproductive functioning in the second part of this series.

What is normal?

Recently, in the beginning of an evolutionary medicine volume, I read in the editors’ opening comments that there is “nothing biologically normal” about monthly menses, as a way to put forward the idea that women should take continuous oral contraceptives (Stearns and Koella 2008, p. 4). This led me to wonder what it means to be biologically normal. That offhand, and troubling, statement about there being "nothing biologically normal" about month-long cycles and frequent menses makes an entire population of women feel as though they are, then, biologically "abnormal." I don't think this is useful.

My understanding of biologically normal is that the body varies and responds to its environment in a way that we would consider adaptive. It is normal for the female reproductive system to allocate resources in response to its environment. Women from industrial populations are at the far end of the spectrum of variation in reproductive function, but we have not fallen off the end of the continuum. On the one hand, I appreciate the attempt of the authors to introduce the idea that American physiology is not the global standard that we should use to evaluate all human populations.
However, any body that responds appropriately to its ecology is, by the definition I've always learned, normal.

The reason Stearns and Koella (2008), Eaton et al. (1994; 2002), and others have been advocating continuous hormonal contraceptive use in industrial populations is that it may decrease reproductive cancer rates. The relatively higher incidence of reproductive cancers in industrial populations is a consequence of our flexibly responsive bodies being in an environment of low energy constraint.

Once upon a time, we were eating less and moving more. Age at menarche (that’s when we get our first menstrual period) used to be much later, menses itself wasn’t particularly heavy or cumbersome, and few cycles were ovulatory (meaning that an egg is released for possible fertilization). Soon after reaching menarche (as in, within a few years), a woman would have her first child. She would breastfeed intensively for the first few years, but continue to breastfeed at least occasionally for four years, maybe more. At some point towards the end of breastfeeding, or sometimes not even until breastfeeding was done, she would resume cycling, and within a few cycles likely get pregnant again. This pattern would continue, with some variations based on miscarriages, increasing age, seasonal variation in food availability, and other issues, until the woman hit menopause. Of course, for many women, their lives ended around that point or even before, but plenty of women survived to be grandmothers, if observation of current forager populations is any indication. This means that for most of a woman’s reproductive life she was pregnant or breastfeeding, and cycling only occasionally.
Strassmann (1997) has a great analysis of this and a comparison between populations: the punchline is that an industrialized woman today has around 400 menstrual cycles, while our ancestors, if modern foragers are an indication, had 50-100.

Now let’s look at today’s industrialized woman: like men, she eats more and moves around less, largely because she is in school or working rather than getting her own food. She hits menarche earlier, and her menses are more frequent and copious than her ancestors’, which creates lots of tissue remodeling in the endometrium (the lining of the uterus). Many of her cycles are ovulatory, necessitating frequent tissue remodeling in the ovaries. She may cycle for years before having her first child, even decades, and with those frequent cycles comes higher exposure to endogenous (coming from within the body rather than outside it, as in a pill) sex steroids like estradiol and progesterone. Even if she breastfeeds for years, she will likely resume menstrual cycling sooner than her ancestors because she is better fed. She will probably have fewer pregnancies and births than her ancestors, which means more cycles in between pregnancies. She will most likely make it to menopause and beyond; and because she is so much more likely to make it past menopause, we are far more likely to notice the negative effects of all that hormone exposure, in the form of reproductive cancers.

So while I disagree with the idea that there is “nothing biologically normal” about frequent menstrual cycles, I certainly agree that they are not doing us any favors. But is it the reproductive system that is at fault, or the lifestyle? Should we artificially suppress the system in order to promote health, or make changes to the way we live? I want to complicate things further and ask whether it is actually true that continuous oral contraceptive use would reduce reproductive cancer risk, and if so, at what age it should be administered.
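To see where a 400-versus-50-to-100 contrast can come from, here is a rough back-of-envelope sketch in Python. The parameter values (ages at menarche and menopause, number of births, months of lactational amenorrhea) are my own illustrative assumptions, not Strassmann's data; the point is only that the lifestyle differences described above can, on their own, produce a gap of roughly this magnitude.

```python
# Back-of-envelope estimate of lifetime menstrual cycles under two lifestyle
# scenarios. All parameter values below are illustrative assumptions for the
# sake of the arithmetic, not figures taken from Strassmann (1997).

def lifetime_cycles(menarche, menopause, n_births, months_amenorrheic_per_birth):
    """Rough count of cycles between menarche and menopause, assuming about
    one cycle per month whenever a woman is not pregnant or in lactational
    amenorrhea."""
    total_months = (menopause - menarche) * 12
    # Each birth removes ~9 months of pregnancy plus the assumed months of
    # breastfeeding-related amenorrhea.
    months_without_cycles = n_births * (9 + months_amenorrheic_per_birth)
    return int(max(0, total_months - months_without_cycles))

# Industrialized scenario: early menarche, few births, brief amenorrhea.
industrial = lifetime_cycles(menarche=12.5, menopause=50, n_births=2,
                             months_amenorrheic_per_birth=6)

# Forager-like scenario: later menarche, more births, years of breastfeeding.
forager = lifetime_cycles(menarche=16, menopause=47, n_births=6,
                          months_amenorrheic_per_birth=36)

print(industrial, forager)  # roughly 420 vs. 102
```

Even this crude model, which ignores anovulatory cycles, miscarriage, and seasonal variation, lands in the right neighborhood on both ends, which is why the "fewer pregnancies, earlier menarche, better nutrition" story is so compelling.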
Currently, I'm not convinced that getting women on to oral contraceptives for their entire reproductive years is wise. But that's for another post.

The third part of this series will address population variation in reproductive function, and how this impacts the efficacy and side effect incidence of hormonal contraceptives.
Eaton SB, Pike MC, Short RV, Lee NC, Trussell J, Hatcher RA, Wood JW, Worthman CM, Blurton-Jones NG, Konner MJ.... (1994) Women's reproductive cancers in evolutionary context. Quarterly Review of Biology, 69(3), 353-367.
Eaton, S.B., Strassmann, B.I., Nesse, R.M., Neel, J.V., Ewald, P.W., Williams, G.C., Weder, A.B., Eaton III, S.B., Lindeberg, S., Konner, M.J.... (2002) Evolutionary health promotion. Preventive Medicine, 34, 109-118.
Strassmann, BI. (1997) The biology of menstruation in Homo sapiens: Total lifetime menses, fecundity, and nonsynchrony in a natural-fertility population. Current Anthropology, 38(1), 123-129.
When it comes to the synthesis of genetics and history we live in an age of no definitive answers. L. L. Cavalli-Sforza’s Great Human Diasporas would come in for a major rewrite at this point. One of the areas which has been roiled the most within the past ten years has been the origin and propagation [...]
Wolfgang Haak, Oleg Balanovsky, Juan J. Sanchez, Sergey Koshel, Valery Zaporozhchenko, Christina J. Adler, Clio S. I. Der Sarkissian, Guido Brandt, Carolin Schwarz, Nicole Nicklisch.... (2010) Ancient DNA from European Early Neolithic Farmers Reveals Their Near Eastern Affinities. PLoS Biology. DOI: 10.1371/journal.pbio.1000536
A group of Italian psychiatrists claim to explain How Neuroscience and Behavioral Genetics Improve Psychiatric Assessment: Report on a Violent Murder Case.

The paper presents the horrific case of a 24 year old woman from Switzerland who smothered her newborn son to death immediately after giving birth in her boyfriend's apartment. After her arrest, she claimed to have no memory of the event. She had a history of multiple drug abuse, including heroin, from the age of 13. Forensic psychiatrists were asked to assess her case and try to answer the question of whether "there was substantial evidence that the defendant had an irresistible impulse to commit the crime." The paper doesn't discuss the outcome of the trial, but the authors say that in their opinion she exhibits a pattern of "pathological impulsivity, antisocial tendencies, lack of planning...causally linked to the crime, thus providing the basis for an insanity defense."

But that's not all. In the paper, the authors bring neuroscience and genetics into the case in an attempt to provide

a more “objective description” of the defendant’s mental disease by providing evidence that the disease has “hard” biological bases. This is particularly important given that psychiatric symptoms may be easily faked as they are mostly based on the defendant’s verbal report.

So they scanned her brain, and did DNA tests for 5 genes which have been previously linked to mental illness, impulsivity, or violent behaviour. What happened? Apparently her brain has "reduced gray matter volume in the left prefrontal cortex" - but that was compared to just 6 healthy control women. You really can't do this kind of analysis on a single subject, anyway.

As for her genes, well, she had genes. On the famous and much-debated 5HTTLPR polymorphism, for example, her genotype was long/short; while it's true that short is generally considered the "bad" genotype, something like 40% of white people, and an even higher proportion of East Asians, carry it.
The situation was similar for the other four genes (STin2 (SLC6A4), rs4680 (COMT), MAOA-uVNTR, and DRD4-2/11, for gene geeks).

I've previously posted about cases in which a well-defined disorder of the brain led to criminal behaviour. There was the man who became obsessed with child pornography following surgical removal of a tumour in his right temporal lobe. There are the people who show "sociopathic" behaviour following fronto-temporal degeneration. However, this woman's brain was basically "normal", at least as far as a basic MRI scan could determine. All the pieces were there. Her genotype was also normal in that lots of normal people carry the same genes; it's not (as far as we know) that she has a rare genetic mutation like Brunner syndrome in which an important gene is entirely missing. So I don't think neurobiology has much to add to this sad story.

We're willing to excuse perpetrators when there's a straightforward "biological cause" for their criminal behaviour: it's not their fault, they're ill. In all other cases, we assign blame: biology is a valid excuse, but nothing else is. There seems to be a basic difference between the way in which we think about "biological" as opposed to "environmental" causes of behaviour. This is related, I think, to the Seductive Allure of Neuroscience Explanations and our fascination with brain scans that "prove that something is in the brain". But when you start to think about it, it becomes less and less clear that this distinction works.

A person's family, social and economic background is the strongest known predictor of criminality. Guys from stable, affluent families rarely mug people; some men from poor, single-parent backgrounds do. But muggers don't choose to be born into that life any more than the child-porn addict chose to have brain cancer. Indeed, the mugger's situation is a more direct cause of his behaviour than a brain tumour.
It's not hard to see how a mugger becomes, specifically, a mugger: because they've grown up with role-models who do that; because their friends do it or at least condone it; because it's the easiest way for them to make money.

But it's less obvious how brain damage by itself could cause someone to seek child porn. There's no child porn nucleus in the brain. Presumably, what it does is remove the person's capacity for self-control, so they can't stop themselves from doing it. This fits with the fact that people who show criminal behaviour after brain lesions often start to eat and have (non-criminal) sex uncontrollably as well. But that raises the question of why they want to do it in the first place. Were they, in some sense, a pedophile all along? If so, can we blame them for that?
Rigoni D, Pellegrini S, Mariotti V, Cozza A, Mechelli A, Ferrara SD, Pietrini P, & Sartori G. (2010) How neuroscience and behavioral genetics improve psychiatric assessment: report on a violent murder case. Frontiers in Behavioral Neuroscience, 4, 160. PMID: 21031162
How does one become a fan? Choose an allegiance? Decide that you’re going to wear bright green, or purple and gold, or paint your face orange and black? In many cases, these allegiances are decided for us—handed down via familial loyalties or decided by geographic boundaries. I raised this question on Twitter a few weeks ago, and the results all indicated that team alliance is linked to one’s point-of-entry into fandom: if you begin watching Team A and learning about the sport via Team A, and your network is tied to Team A, then you’re likely to become a fan of Team A. And like all habits, longstanding fan ties are difficult to break.
But is it acceptable to support Team B if you live in Team A territory? Particularly if Teams A and B are rivals? Initial ties are important in this scenario. It’s fine if you move to the Midwest from Massachusetts and want to continue to support a New England team—you’re maintaining loyalty to your geographic origins, and that’s totally acceptable. There’s a reason for you to break with the group. But in the absence of relocation, can you support a team with no apparent ties to the location or network to which you belong?
S is a HUGE New England Patriots football fan. It runs counter to our network where the New York Giants and the New York Jets reign supreme—and an allegiance to either would apparently be preferable to siding with the evil Coach Belichick and his platoon of Patriots. He’s a Met fan in accordance with the reasons given for team attachment by others: he comes from a line of Mets fans, and was raised in close proximity to the former Shea Stadium. There is both a network connection and a geographic connection that ties him to this baseball team, but his football allegiance has raised more than a few eyebrows and subjected him to taunts and criticism from colleagues, friends, and family alike. In football, he’s a displaced fan.
Michael Miller argues that fandom is an outlet for expression that may be lacking in the fan’s day-to-day life (1997: 125). In these contests where there must be a winner, the game offers a finiteness that often is not attainable in the norms of the average fan’s life: workers do not “win” at the end of eight hours, and in relationships, there are no measures of “familial performance” (1997: 124). Furthermore, Miller argues:

The joy and beauty of being a fan ultimately derives from the fact that his/her allegiance can never be effectively challenged. A sports fan is never challenged for holding his/her views as he/she might be for being a Democrat, Republican, Liberal, Conservative, or Capitalist. These latter identifications are supposed to represent deliberate choices often reached through intellectual inquiry, investigation, thought, and decision. This is not the case with the fan, whose very being is characterized by an emotional attachment that cannot be rationalized. The fan is never required to justify his/her “faith” in a player or team (1997: 126).

But this is only true when you fit with the prevalent group—unless you have good reason (i.e., relocation) for breaking with the norms. S is constantly questioned and taunted when the Patriots lose. Which I suppose is to be expected in any town that has a deeply rooted sports tradition.
Pat Patriot, logo for the NE Patriots
from 1961 to 1992. © NE Patriots
S became a fan of the Pats when he was about eight years old, well before their string of Super Bowl wins in the 2000s. When asked today why he supports the team he invariably replies, “It’s the Patriots!” The term to him is filled with nationalist imagery—“Aren’t we patriots?” is a favorite counter-question he likes to pitch. He believes they should be America’s team. He also says that, as an eight-year-old, he was swayed by the logo and the colors; but the Giants, a local team, share similar colors, so by this argument, were it not for the logo, he should have favored the Giants. But S was actually introduced to football outside of the normal familial team alliances, which could help explain his out-group association: his father, though a Giants fan, watched a lot of 49ers football by way of introducing him to the sport, and those early years of fandom must have been immensely influential. This experience may have allowed the normal bonds of fandom to be bent.
Over time, athletes and teams come to represent the public they play for, and fans believe that they can sway the outcome of these matches through their actions. It’s why we don the gear, paint our faces, and persevere through times of loss. Sports are one means by which we leave our mark on the larger world—and this “we” includes the spectators, the fans who are participants in their own right. Sociologists Raymond Schmitt and Wilbert Leonard III (1986) raise the concept of the “postself” as a means of explaining this belief. The postself is “the presentation of his or her self in history” (1088). The idea is that our actions and choices—the teams we support and their performance—will impact how we are remembered.
The postself, Schmitt and Leonard argue, drives athletes, but because sports is a social experience, the postself of the athlete can possibly be extended to the participants:

Although we did not find unequivocal evidence that fans identified with the athlete’s postself, this does remain a distinct possibility. Caughey has emphasized the extent to which American fans identify with various types of media figures … “Through their simple connections with sport teams, the personal images of fans are at stake when their teams take the field. The team’s victories and defeats are reacted to as personal successes and failures.” Another investigation found that 28% of 1252 adult Americans said that they sometimes, often, or always fantasize that they are the competing athlete when watching their favorite sport (1099).

Support of a particular team allows us to become a part of that team’s victories and losses—our histories become intertwined. I’ll never forget Endy Chavez’s catch as a Met that kept hope alive in the NLCS in 2006. It was the catch heard ‘round the world. With so much uncertainty in our lives and in the world, perhaps these are the small ways in which we author our history and identity with some degree of control.
S has chosen a team whose logo and colors carry a message about him and his beliefs. It's no different from his decision to cheer for the Mets. In the latter instance, however, he has been handed a prepackaged view of what the team represents to the group and has adopted those views later in life as his own. In the former, he has created his own representation. In both cases, these teams constitute his personal history and will comprise his postself. And since sports are a large part of his life, his participation is something that will figure prominently in his biography, though it runs counter to the expectations of his peers.
In a post called Us, Them, and Non-Zero Sumness, Patrick Clarkin did a fantastic job a few weeks ago analyzing intergroup conflict in sports. Drawing on Muzafer Sherif’s 1954 experiment, Clarkin leads the reader through a discussion of the way competition helps to exacerbate conditions of otherness. In situations where there must be a victor, such as in sports, a zero-sum condition emerges where the success of one group necessitates the failure of another group. In this setting, rivalries and conflicts emerge as a means of achieving the goal [...]
Miller, Michael. (1997) American Football: The Rationalization of the Irrational. International Journal of Politics, Culture, and Society, 11(1), 101-127.
My wife, along with her many other jobs – paid and unpaid – is the local director of a campus exchange program that brings US students to Wollongong, New South Wales. Because of her background in outdoor education and adventure therapy, she does a great job taking visiting Yanks on weekend activities that get the students to see a side of life in Australia that they might not otherwise see. From Mystery Bay on the South Coast, to Mount Guluga with an Aboriginal guide, to abseiling (rapeling) in the Blue Mountains, to surf lessons at Seven Mile Beach, I think she does a great job, and I frequently tag along to help and enjoy being reminded of the distinctiveness of my adopted home.
Abdulai Abubakari holds his infant child, Fakia. (Peter DiCampo/VII Agency)
Invariably, either at the beach or in the Blue Mountains, at night, students will confront a clear, dark Australian sky, staggered at just how many stars fill the darkness from horizon to horizon. I’ve seen the US students – well, not all of them get into it – just stand, necks craned backwards, and stare. What they thought was darkness was actually full of innumerable points of light.
I’m sympathetic because I had a similar experience one clear night in the Chapada Diamantina (the Diamond Plateau) in Brazil, when I couldn’t believe how, given real darkness, desert-like humidity, and clear, pollution-free air, the sky was crowded with sources of light, just smeared with stars. For the first time, I felt like I understood the name, the ‘Milky Way,’ because I could see the uninterrupted blur toward the centre of our galaxy.
I was reminded of my experience of seeing stars, as if for the first time, and of the reactions of the American students, when I stumbled across the photos of Peter DiCampo (click here for Peter’s website), an American freelance photographer and former member of the Peace Corps who volunteered in the village of Voggu in rural Ghana. His photo essay, Full Frame: Life without lights, is up at Global Post, an online American newspaper launched at the start of 2009. His beautiful photos of life by flashlight, candle and gaslight capture the atmosphere in this part of Ghana without electricity, and got me thinking about artificial light and the way the sensory environment affects human development (additional photos at Peter’s personal website, including photos from darkness in Kurdistan).
Dark photos as activism
Peter’s photos are a form of social activism as well as both photojournalism and art. As he describes, the images seek to convey concretely life in Voggu without focusing entirely on deprivation, drawing attention to the villagers’ condition without simply recapitulating familiar visual stereotypes in images (and I think he’s quite successful):
The villagers of Voggu are among the 1.6 billion people worldwide who live without electricity.
I had a simple plan: to photograph only with the light available, so that the reader can see only what the subjects are able to see….
I have no desire to contribute to a stereotypical view of Africa, presenting people as miserable and helpless — but I have every desire to use my photographs toward humanitarian means. How to reconcile the two?
The photographs – of night markets and kids reading the Qur’an and actors shooting a movie scene, as well as flashlight-lit portraits – don’t just bring us to a different geographical locale, but to a profoundly different sensory reality (as do the photos from Kurdistan that appear on his own website). For most Americans and Australians, I suspect, living truly in the dark, with only tiny patches of light, would be a rarity.
Studies of nightglow, the light reflected back by humid or polluted atmosphere from ground level electric lighting, suggest that large portions of Europe, North America, and most urban areas are constantly swathed in low levels of ambient glow. Nightglow effectively banishes night, shifting the daily cycle for every one, and every thing, living in these areas.
The effects of light at night
Navara and Nelson (2007), in the Journal of Pineal Research, review the diverse effects of artificial light on biological systems. They discuss the extensive research on the negative effects of ambient nighttime light on animals, including disruptions to reproduction, migration, foraging behaviour and predation. I won’t discuss these effects in any detail, but one of the sadder examples is that hatchling sea turtles often use the contrast between the dark bush behind the beach and the lighter horizon over the water to orient themselves after they hatch. Too much light on the inland horizon can confuse them so that they cannot find their way to the water before predators get them.
Humans are also affected in a host of ways by nighttime light. These light-derived conditions are widespread, even pervasive; as Navara and Nelson (2007: 216) review: ‘In 2001, the percentage of the world’s population living under sky brightness higher than baseline levels was 62%, with the percentages of US and European populations exposed to brighter than normal skies lying at 99%’ (Navara and Nelson cite Cinzano and colleagues’ 2001 atlas of the night sky). For some of these populations, true night is never experienced, for artificial light is consistently brighter than a full moon.
The increasing prevalence of high intensity artificial light that tends to be blue (rather than incandescent yellow) is especially troubling because light near this wavelength affects the pineal gland, which regulates melatonin production; long-term light exposure correlates with shrinkage of the pineal gland. Just 39 minutes of incandescent light at night can cause melatonin synthesis to drop by 50% (Navara and Nelson 2007: 217). As Korkmaz and colleagues (2009: 267) note, elevated melatonin levels correlate with darkness, and melatonin has been referred to as ‘the chemical expression of darkness.’ Altered or disrupted production of melatonin has neuroendocrine effects on a range of bodily systems, including the metabolism of prolactin, glucocorticoids, adrenocorticotropic hormone, corticotrophin releasing factor and serotonin. Long-term exposure to light at night can contribute to chronic sleep deficit, especially through the effects on melatonin, with ‘countless’ other effects, according to Navara and Nelson’s (2007: 217) review.
Over the long term, disruption of the light-dark cycle can affect body composition, contribute to obesity, negatively impact gut efficiency, and otherwise disrupt metabolism, especially the moderation of energy uptake. These changes have been linked to diabetes, heart disease, and other metabolic problems. Moreover, chronic exposure to low levels of light at night can lead to oxidative stress, which can damage immune cells and even contribute to higher incidence of cancer and rates of physiological aging (see Navara and Nelson 2007 for research review).
Because of the diverse endocrine disruptions caused by ambient night light, some health advocates argue for a decrease in the number of lights, for modified designs, or for a shift to lights that emit less in the blue-violet part of the spectrum, which seems to cause the most biological disruption.
The dark in Ghana
But Peter’s photos also struck me because I was fascinated by the way that perception, too, might be altered. When I emailed Peter to ask his permission to use his photo in this post, I asked him how he felt his perceptions were affected by living in an area without streetlights and neon signs and all the other electricity-based technologies that transform our experiences of night. He wrote back:
As far as perception or vision – it was obvious to me that I was the clumsiest person in town. My Ghanaian friends, it seemed, could simply see in the dark. Maybe they couldn’t read a book, but they knew who was coming when he or she was still a great distance away, and they didn’t stumble as they walked around at night. I, however, wasn’t there long enough to adjust, apparently!
I also asked Peter how living with dark night affected him more generally, his state of mind and overall experience of the Ghanaian countryside. His answer highlights a range of issues:
... Read more »
Navara, K., & Nelson, R. (2007) The dark side of light at night: physiological, epidemiological, and ecological consequences. Journal of Pineal Research, 43(3), 215-224. DOI: 10.1111/j.1600-079X.2007.00473.x
Philip Piper et al. reported evidence for the past presence of Panthera tigris on the island of Palawan, Philippines. The team of archaeologists, who were excavating Ille Cave near El Nido, found the tiger bones in a “large human-derived animal bone assemblage dating to at least the early 11th millennium BP that included the remains [...]... Read more »
Piper, P., Ochoa, J., Lewis, H., Paz, V., & Ronquillo, W. (2008) The first evidence for the past presence of the tiger Panthera tigris (L.) on the island of Palawan, Philippines: Extinction in an island population. Palaeogeography, Palaeoclimatology, Palaeoecology, 264(1-2), 123-127. DOI: 10.1016/j.palaeo.2008.04.003
Cookie Monster © Sesame Street
It’s not quite news that Cookie Monster no longer eats cookies. Well, he eats ONE cookie. After he fills up on vegetables! Vegetables!! Understandably, the public was outraged, and in response, Cookie felt the need to clarify: He still eats cookies—for dessert—but he likes fruit and vegetables too. Cookie Monster needed to reassert his identity, so he did what anyone would do: He interviewed with Matt Lauer.* The message was plain: He’s a Cookie Monster and Cookie Monsters eat cookies. They dream of cookies. They would bathe in cookies if they could. They can’t get enough of cookies. But can Cookie Monsters eat fruits and vegetables too?
Sure, I can understand why Cookie has been dissuaded from pursuing his pastry delights with abandon. We’re in the midst of an obesity epidemic and children are most threatened. As adults, they’ll face a number of complications, including increased risk for diabetes, heart disease, high blood pressure, sleep apnea, and other woes. Cookie Monster IS a role model. Sesame Street has been a beacon in children’s education for decades, and the habits children learn in their early years will likely follow them through their lives. So Cookie has given up his plate of delicious cookies, practices restraint, and eats more leafy greens in the hopes that young children will do the same. Now I watched Cookie Monster devour plates of cookies when I was growing up, and I have to tell you … I don’t tend to devour plates of cookies now as an adult. I learned healthy eating habits from my parents. I understood that Cookie Monster was meant to eat cookies in a way that I wasn’t. (And truthfully, how many of those cookies actually made it into his mouth anyway? Most wound up as crumbs all over his fur, which was more reason NOT to devour cookies as he did.) But I suppose that having Cookie model moderation may support parental messages at home. As I said, I get it.
Cookie Monster raps about health foods.
(They taste so good!) © Sesame Street
However, are we forcing our standards on Cookie? Forcing him to change who he really is? Cookie Monster actually began singing (really, rapping) about eating healthy foods in the late 80s. There was no uproar then, perhaps because the shift seemed less radical. Who cared if he was a closet veggie eater? But is it acceptable for Cookie to change? Is Cookie Monster's identity caught in flux as a result of conflicting messages about who he is and who we as a society, as his network, expect him to be? And in all seriousness, what kind of message does this send to children? That they should repress who they are in favor of the norm? That there is an ideal to strive toward? That once they’ve established an identity, they can’t change? Cookie Monster’s cookie/vegetable dilemma provides a good opportunity to investigate the mechanisms of identity formation.
Daniel McFarland and Heili Pals (2005) evaluated internal and external motives that inspire change, and determined that change to the network is the driving factor in identity shifts over time. McFarland and Pals thoroughly dissect social identity theory (SIT) and identity theory (IT), competing models of identity change, and conclude that in both theories the actor perceives an inconsistency which leads to a change in identity. Essentially, the actor seeks to change an identity because he or she believes that it fails to meet a standard. In the case of Cookie Monster, we have two standards in conflict: Cookie is a monster who eats cookies (internal), and Cookie needs to promote moderation as demanded by his larger network of fans and supporters and in keeping with social trends (external). We shall see how these theories lead to pressure for Cookie Monster to change.
SIT describes categories as a context for identity development where the individual responds to category traits and develops an identity that permits desirable group membership. Motivation for change therefore comes from the individual’s perception of how well he or she fits in with the established category. However, categories are not fluid: Categories are more than labels; they act as constitutive rules or representational systems of meaning that are recognized by wide segments of a society. Categories establish expectations of and for behavior, and even suggest a narrative history of group membership … Category labels can reflect notions of status, permanence, size, and other meanings that influence the actor’s motive to improve his or her situation (McFarland and Pals 2005: 291). So categories like race and gender tend to shape identities in very concrete ways because it is hard to separate self from visible traits of these categories (292). Cookie Monster is a monster who likes cookies. He is bound by this identity, and it places an expectation on him that he will behave in particular ways. He strives to be the best Cookie Monster he can be, as is evidenced from the clip with Matt Lauer where he insists that he can have the cookie—after he has had the veggies.
IT suggests that social networks provide the context for identity development, which, in this case, is a response to shifting relationships: According to IT, the self consists of a collection of “role identities.” Persons switch role identities depending on the salience of those identities to the context. According to Stryker and Burke (2000), network contexts create a hierarchy of salience among the various identities that constitute the self, and lead the actor to invoke the same role or to alter performances over time. Therefore the feedback from a relational context defines identity salience, and this in turn gives rise to motives for identity change (McFarland and Pals 2005: 290).
C is for cookie.
IT is external and fluid, driven by the changing needs of the network. Our society—Cookie Monster’s larger network—is oriented toward a certain image of acceptable eating—even though it has largely not been the norm—and Cookie has to shift to meet this expectation. However, what we have is an inconsistency between the ideal self (the self that the individual would like to be), the actual self (the self that the individual is), and the public self (the self that others perceive the individual to be). Cookie Monster would like to be able to eat cookies and vegetables (ideal), but feels that he has to eat more vegetables and downplay his cookie eating tendencies (actual), while the public thinks that he is hypocritical for enjoying vegetables but a subset wants him to eat more greens (public).
To understand what ... Read more »
McFarland, D., & Pals, H. (2005) Motives and Contexts of Identity Change: A Case for Network Effects. Social Psychology Quarterly, 68(4), 289-315. DOI: 10.1177/019027250506800401
Two seemingly unrelated events have occurred in my life over the last two days which have caused me to think. I spent the day yesterday helping out with the campaigns of some of the local candidates here in Southeastern Michigan. Obviously the overall effect was not as successful as I would have liked. I can’t say, [...]... Read more »
Hersey, M. (1993) Lewis Henry Morgan and the anthropological critique of civilization. Dialectical Anthropology, 18(1), 53-70. DOI: 10.1007/BF01301671
White, L. (1960) Review of Lewis Henry Morgan: American Scholar by Carl Resek. American Anthropologist, 62(6), 1073-1074. DOI: 10.1525/aa.1960.62.6.02a00220
Elliott, M. (2008) Other Times: Herman Melville, Lewis Henry Morgan, and Ethnographic Writing in the Antebellum United States. Criticism, 49(4), 481-503. DOI: 10.1353/crt.0.0041
Service, E. (1988) Review of Lewis Henry Morgan and the Invention of Kinship by Thomas R. Trautmann. American Anthropologist, 90(2), 443-444. DOI: 10.1525/aa.1988.90.2.02a00410
Low weight at birth is associated with all sorts of health troubles later in life, so it seems a great idea to give nutritional supplements to pregnant women in developing nations, to add some heft to their babies. Yet the results aren't impressive. (The anthropologist Christopher W. Kuzawa notes, for instance, that this review of 13 such programs found the average weight improvement for the babies was a paltry one ounce.) Which illustrates the state of work on "fetal origins"—the theory that pre-birth experiences in the womb have a powerful effect decades later on the adult mind and body. On the one hand, as Annie Murphy Paul writes in Origins, her fine new book about the field, the idea suggests that we should ensure that developing fetuses have a healthy environment. On the other hand, the work so far can't say how to do that.
The other day, at this conference I heard Kuzawa propose an explanation for some of this befuddlement: We can "tell" the baby-to-be that food is abundant by making sure its mother ate well last month; but its development may depend instead on how that mother ate through her entire life.
A keystone of much "fetal origins" work is that the developing infant responds to cues about the kind of world it will have to join. Is food scarce or plentiful? Is life anxiety-filled or calm? Do people live to be 90 around here, or drop by 62? The mother's experience of life, translated into hormones, blood sugar, blood pressure and other chemical signals, helps determine which of the developing baby's genes are activated and how much, molding it to fit its future environment.
New parents easily imagine this fetus as a clueless investor, reacting every hour to the latest news flash. (Hence their neurotic fears about that one sip of alcohol, bite of sushi, or late-night fight that will ruin the budding child's life.) Instead, Kuzawa proposes, we should see the developing baby as a long-term player, looking for clues about its world on different time scales: months, years or even generations.
That perspective could provide a framework to organize and explain disparate pieces of data that Murphy Paul mentions. For instance, some research has found that a mother's malnourishment late in pregnancy puts her child at higher risk, decades later, for diabetes, while malnourishment in early gestation is a risk for heart disease. And the stresses of the Arab-Israeli war in 1967 seem to increase the risk of schizophrenia in adults who were gestated then—if their mothers were in the second month of pregnancy, but not the fourth or fifth. All this suggests there are distinct periods in a pregnancy, each one particularly sensitive to one environmental influence, but not others.
Kuzawa proposes to use evolutionary reasoning to find the hidden logic of these different windows. Doing so, he says, will probably require figuring out the time-frame of each cue that an embryo uses to prepare itself for its world.
Which brings us back to nutritional supplements: If fetuses were "tuned" to today's cues about their environment, then they should respond to maternal nutritional supplements much more than they do. But it stands to reason, Kuzawa said, that a creature that will live for 30, 40 or even 90 years should not prepare itself for the environment of next month. Next month may be way out of line with typical conditions. To prepare for eating in the mother's world, the fetus ought to find out what her whole life was like.
Some eerie facts of fetal-growth research seem to line up with the idea. Here are a couple Kuzawa cited. In this study of mothers and children in 1930's England, a woman's adult height was not a great predictor of her daughter's birth weight. Much better was the mother's height back when she herself was 7. And this study in Guatemala found that infants who grew faster in their first three years of life were those whose mothers had eaten better as children.
Kuzawa thinks our species may have evolved a mechanism for "telling" the fetus what to expect on average over decades or even generations. By basing its development on its mother's childhood condition (which means, Kuzawa points out, that it's reflecting its grandmother's experience, and so on backwards in time), the fetus takes a long-term average of conditions in its world. It can't be "fooled" by one rich harvest or a plague year. That makes sense from an evolutionary point of view. But it also means the fetus can't be "fooled" by our well-meaning attempts to guide it for the few months of its mother's pregnancy.
How does this square with evidence that disasters and wars do have a big and immediate effect on the children gestated during the crisis? As Murphy Paul recounts, these "natural experiments" suggest that bad experiences have immediate effects. (Douglas Almond has found (pdf) that children whose mothers were pregnant during the 1918 flu pandemic were 15 percent more likely than near-peers to drop out of high school; they earned lower wages throughout life, and as older adults were 20 percent more likely to be disabled.)
Perhaps severe shocks overwhelm the usual pathways by which environment communicates with embryo, Kuzawa suggests. If that's so, then extreme cases like the 1918 flu pandemic or the Dutch Hunger Winter are a mixed blessing for fetal-origins research. On the one hand, they show dramatic effects which helped convince skeptics; but, on the other, they may not represent the way the system usually works.
For parents and policy wonks, too, Kuzawa's idea is a mix of good news and bad news. If the developing fetus is impervious to day-to-day ups and downs, our minor scrapes and flubs can't harm it. But that also means that our well-intentioned efforts won't help it much, either. At least, not until we take its own long view of life.
Kuzawa, C. (2005). Fetal origins of developmental plasticity: Are fetal cues reliable predictors of future nutritional environments? American Journal of Human Biology, 17 (1), 5-21 DOI: 10.1002/ajhb.20091
Martin RM, Smith GD, Frankel S, & Gunnell D (2004). Parents' growth in childhood and the birth weight of their offspring. Epidemiology (Cambridge, Mass.), 15 (3), 308-16 PMID: 15097011
Stein AD, Barnhart HX, Wang M, Hoshen MB, Ologoudou K, Ramakrishnan U, Grajeda R, Ramirez-Zea M, & Martorell R (2004). Comparison of linear growth patterns in the first three years of life across two generations in Guatemala. Pediatrics, 113 (3 Pt 1) PMID: 14993588 ... Read more »
My superficial reading of the paleo diet literature led me to think Dr. Loren Cordain was the modern originator of this trend, so I was surprised to find an article on the Stone Age diet and modern degenerative diseases in a 1988 issue of The American Journal of Medicine. Dr. Cordain started writing about the paleo diet around 2000, [...]... Read more »
Kuipers, R., Luxwolda, M., Janneke Dijck-Brouwer, D., Eaton, S., Crawford, M., Cordain, L., & Muskiet, F. (2010) Estimated macronutrient and fatty acid intakes from an East African Paleolithic diet. British Journal of Nutrition, 1-22. DOI: 10.1017/S0007114510002679
Eaton, S., Konner, M., & Shostak, M. (1988) Stone agers in the fast lane: Chronic degenerative diseases in evolutionary perspective. The American Journal of Medicine, 84(4), 739-749. DOI: 10.1016/0002-9343(88)90113-1
Were the Salem Witch Trials sparked by grain infected with toxic hallucinogens?... Read more »
Right up there with climate change, biodiversity conservation is one of the most challenging issues at the intersection of nature and culture. Part of this challenge arises because of genuine differences in how people value other species.
In an interesting forthcoming article in Conservation Biology, Chris Sandbrook and colleagues at Cambridge University argue that these value [...]... Read more »
Amid the chaos of a mass grave of plague victims, the 2006-2007 summer project team from the Archeoclub of Venice got a surprise. Among the dead they found evidence of belief in the undead, fear of the vampire. So how do you stop the undead from feasting on the corpses in the mass grave? The [...]... Read more »
Nuzzolese E, & Borrini M. (2010) Forensic Approach to an Archaeological Casework of "Vampire" Skeletal Remains in Venice: Odontological and Anthropological Prospectus*. Journal of forensic sciences. PMID: 20707834
How does rebel access to natural resources affect conflict? "How". Not "if". That is the question investigated by Päivi Lujala of the Norwegian University of Science and Technology, recently published in the Journal of Peace Research.
Or rather: Where previous research has either suggested a link or sought to explain it by an indirect effect through resource abundance tending to corrupt weak ... Read more »
Lujala, P. (2010) The spoils of nature: Armed civil conflict and rebel access to natural resources. Journal of Peace Research, 47(1), 15-28. DOI: 10.1177/0022343309350015
The latest stop in the #PDEx tour is being hosted by Barbara J. King: Since animals, including humans, are merely ambulatory vehicles for their selfish genes, according to the dominant framework, it would be to one's benefit to care for a niece or cousin that lost their mother but not for a stranger with whom there was no genetic relation. This is because any genes that promoted such altruism towards unrelated individuals would end up losing out by using up resources that didn’t perpetuate themselves. However, these “altruistic genes” would be passed on and thrive if they were helping a kin member with similar genetic makeup. In the currency of reproductive fitness, nepotism pays. However, a study in the journal Primates by Cristiane Cäsar and Robert John Young reports on a case of adoption among a wild group of black-fronted titi monkeys (Callicebus nigrifrons) from the rainforests of Brazil. Read the rest of the post here and stay tuned for the next entry in The Primate Diaries in Exile tour. Reference: Cäsar, C., & Young, R. (2007). A case of adoption in a wild group of black-fronted titi monkeys (Callicebus nigrifrons) Primates, 49 (2), 146-148 DOI: 10.1007/s10329-007-0066-x... Read more »
Cäsar, C., & Young, R. (2007) A case of adoption in a wild group of black-fronted titi monkeys (Callicebus nigrifrons). Primates, 49(2), 146-148. DOI: 10.1007/s10329-007-0066-x
The past ten years have obviously been very active in the area of human genomics, but the domain of South Asian genetic relationships in a worldwide context has seen veritable revolutions and counter-revolutions. The final outlines are still to be determined. In the mid-1990s the conventional wisdom was that South Asians were [...]... Read more »
Chaubey, G., Metspalu, M., Choi, Y., Mägi, R., Gallego Romero, I., Soares, P., van Oven, M., Behar, D. M., Rootsi, S., Hudjashov, G., et al. (2010) Population Genetic Structure in Indian Austroasiatic speakers: The Role of Landscape Barriers and Sex-specific Admixture. Mol Biol Evol. DOI: 10.1093/molbev/msq288
Did cooking make us human by providing the foundation for the rapid growth of the human brain during evolution? If so, what does this tell us about the diet that we should be eating, and can we turn back the culinary clock to an evolutionarily ideal diet? A number of provocations over the last couple of weeks have me thinking about evolution and diet, especially what our teeth and guts tell us about how our ancestors got their food.
I did a post on this a while back at Neuroanthropology.net, putting up my slides for the then-current version of my ‘brain and diet’ lecture from ‘Human evolution and diversity,’ but I’m also thinking about food and evolution because I just watched Nestlé food scientist Heribert Watzke’s TED talk, The Brain in Your Gut. Watzke combines two intriguing subjects: the enteric nervous system, or your gut’s ‘second brain,’ and the evolution of diet. I’ll deal with the diet, gastro-intestinal system and teeth today, and the enteric nervous system another day because it’s a great subject itself (if you can’t wait, check out Scientific American).
This piece is going to ramble a bit, as it will also include some thoughts on the subject of diet and brain evolution sparked by multiple conversations: with Prof. Marlene Zuk (of the University of California, Riverside), with Paul Mason (about Terrence Deacon’s article that he and Daniel wrote about), and following my annual lecture on human brain evolution as well as conversations today with a documentary crew from SBS. So let’s begin the meander with Dr. Watzke’s opening bit on why he thinks humans should be classified as ‘coctivors,’ that is, animals that eat cooked food, rather than ‘omnivores.’
Although I generally liked the talk, I was struck by some things that didn’t ring quite right, including Dr. Watzke’s opening bit about teeth (from the online transcript):
So everyone of you turns to their neighbor please. Turn and face your neighbors. Please, also on the balcony. Smile. Smile. Open the mouths. Smile, friendly. (Laughter) Do you — Do you see any Canine teeth? (Laughter) Count Dracula teeth in the mouths of your neighbors? Of course not. Because our dental anatomy is actually made, not for tearing down raw meat from bones or chewing fibrous leaves for hours. It is made for a diet which is soft, mushy, which is reduced in fibers, which is very easily chewable and digestible. Sounds like fast food, doesn’t it.
Okay, let’s not be pedantic about it, because we know that humans, in fact, do have canines. Watzke’s point is that we don’t have extended canines, long fangs that we find in most carnivorous mammals or in our primate relatives like chimps or gorillas.
The problem is that the absence of projecting canines in humans is a bit more interesting than just, ‘eat plants=less canine development.’ In fact, gorillas are completely vegetarian, and the males, especially, have massive canines; chimpanzees eat a very small amount of animal protein (something like 2% of their caloric intake), and they too have formidable canines. Our cousins’ extended canines are not there because they need them for eating – rather, all evidence suggests that they need big fangs for fighting, especially intraspecies brawling among the males in order to reproduce.
Teeth of human (left), Ar. ramidus (middle), and chimpanzee (right), all males.
The case of chimpanzee canines is especially intriguing because, with the remains of Ardipithecus ramidus, a species potentially close to the last common ancestor of humans and chimps, now more extensively discussed, we know very old hominids didn’t have pronounced canines. If the remains are indicative of our common ancestor with chimpanzees (and there’s no guarantee of that), then it’s not so much human canine shrinkage alone that’s the recent evolutionary development but also the re-development of chimpanzee canines, probably due to sexual competition.
Even with all the possible points of disagreement, the basic point is that human teeth are quite small, likely due both to shifts in our patterns of reproduction and sexual selection and to changes in our diet. Over the last few million years, our ancestors seem to have gotten more and more of their calories out of meat, one argument goes, at the same time that our ancestors’ teeth were getting less and less capable of processing food of all sorts (or, for that matter, being effectively used as a weapon).
Hungrier and hungrier, with weaker jaws and smaller teeth
As I always remind my students in my lecture on human brain evolution, if big brains are so great, why doesn’t every animal have one? The answer is that big brains also pose certain challenges for an organism (or, if you prefer, ‘mo’ neurons, mo’ problums’).
The first and most obvious is that brains are hungry organs, devouring energy very fast and relentlessly, especially as they grow. The statistic that we frequently throw around is that the brain constitutes 2% of human body mass and consumes 25% of the energy used by the body; or, to put it another way, brain tissue consumes nine times as many calories as muscle at rest. So, if evolution is going to grow the brain, an organism is going to have to come up with a lot of energy – a smaller brain means that an animal can both eat less and be more likely to survive a calorie drought.
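As a quick back-of-the-envelope check on those figures (a sketch only, treating the quoted 2% and 25% values as givens; note that the nine-times-muscle comparison is a separate empirical measurement, since the non-brain body is not all muscle):

```python
# Back-of-the-envelope arithmetic on the brain-energy figures quoted above.
brain_mass_frac = 0.02    # brain is ~2% of body mass
brain_energy_frac = 0.25  # brain uses ~25% of resting energy

# Energy use per unit mass, relative to the whole-body average:
brain_rate = brain_energy_frac / brain_mass_frac              # 12.5x the body average
rest_rate = (1 - brain_energy_frac) / (1 - brain_mass_frac)   # ~0.77x for everything else

# Gram for gram, brain tissue burns roughly 16x the energy
# of the rest of the body taken as a whole.
print(round(brain_rate / rest_rate, 1))  # prints 16.3
```

The ~16x figure here is higher than the nine-times-muscle statistic because the comparison sets differ: the 9x compares brain specifically to resting muscle, while this arithmetic compares brain to the average of all non-brain tissue.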
But hominin brain growth also presents a few other problems, which sometimes get underestimated in accounts of our species’ distinctiveness. For example, natural selection had to solve a problem of excess heat, especially if big-brained hominids were going to do things that their big brains should tell them are ill-advised, like run around in the hot sun. As your brain chews up energy, it generates heat, and the brain can overheat, a serious problem with sunstroke. The good news is that somewhere along the line our hominin ancestors picked up a number of adaptations that made them very good at shedding heat, from a low-fur epidermis and the facility to produce copious sweat to a system of veins that run from the brain, shunting away heat (for a much more extensive discussion, see Sharma, ed. 2007, or the work of anthropologist Dean Falk, including her 1990 article in BBS laying out the ‘radiator theory’).
Not only is our brain hungry and hot; our enlarged cranium also poses some distinctive challenges for our mothers, especially as bipedalism has narrowed the birth canal by slowly making the pelvis more and more basket-shaped (bringing the hips under our centre of gravity). The ‘obstetrical dilemma,’ the narrowing of the birth canal at the same time that the human brain was enlarging, led to a bit of a brain-birth canal logjam, if you’ll pardon the groan-worthy pun (see Rosenberg and Trevathan 1995).
Although frequently presented as a significant constraint on brain growth (and I’m sure all mother... Read more »
Rosenberg, K., & Trevathan, W. (1995) Bipedalism and human birth: The obstetrical dilemma revisited. Evolutionary Anthropology: Issues, News, and Reviews, 4(5), 161-168. DOI: 10.1002/evan.1360040506
Suwa, G., Kono, R., Simpson, S., Asfaw, B., Lovejoy, C., & White, T. (2009) Paleobiological Implications of the Ardipithecus ramidus Dentition. Science, 326(5949), 69-69. DOI: 10.1126/science.1175824
In the 1990's archaeologists uncovered a grave in Connecticut dating from the mid-1800's that provided the first physical evidence of a historical belief in vampires in New England.... Read more »
Sledzik, P., & Bellantoni, N. (1994) Bioarcheological and biocultural evidence for the New England vampire folk belief. American Journal of Physical Anthropology, 94(2), 269-274. DOI: 10.1002/ajpa.1330940210