The Neurocritic


305 posts · 320,444 views

Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology



  • November 14, 2011
  • 03:08 PM

The Return of Physiognomy

by The Neurocritic in The Neurocritic

Physiognomy "is the assessment of a person's character or personality from their outer appearance, especially the face." Although one might think of physiognomy as an outdated pseudoscience, along with its brethren craniometry and phrenology, facial phenotyping has undergone a resurgence of interest. Most recently, a study by Wong et al. (2011) looked at facial width and financial success in male CEOs:

"Can head shape determine chances of business success? Research suggests that the shape of a chief executive's head can show whether he will be successful."

But why even ask such a question? In general, the authors noted that certain psychological traits (e.g., extraversion) are associated with leadership ability, so they wondered whether an objective physical trait could predict leadership success. More specifically, they examined whether the facial width-to-height ratio (WHR) of 55 male CEOs was related to the financial performance of their companies. There's actually a sizable literature on facial WHR and aggressiveness in men:

"Researchers have theorized that this relationship exists because higher facial WHRs make men seem more physically imposing, which minimizes the chance of retribution for their aggressive actions (Stirrat & Perrett, 2010). In addition, facial WHR is a sexually dimorphic trait thought to be influenced by the effects of testosterone during adolescence."

It can be objectively measured from photographs, which in this case were obtained from internet sources. The Fortune 500 firms were selected based on extensive media coverage and availability of online photos:

"The 55 firms in our sample represented a range of industries, including computer manufacturing, transportation, and retail; on average, the firms had generated $38 billion in sales and had 119,684 full-time employees. The organizations in the sample included General Electric, Hewlett-Packard, and NIKE, Inc."

Results indicated that high facial WHR did indeed predict financial performance.
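For concreteness, facial WHR in this literature is typically the bizygomatic width (cheekbone to cheekbone) divided by the upper-face height (upper lip to brow), measured from a front-facing photograph. A minimal sketch of that calculation, with landmark names and pixel coordinates that are purely illustrative (not from Wong et al.):

```python
def facial_whr(left_cheek, right_cheek, upper_lip, brow):
    """Facial width-to-height ratio from (x, y) photo landmarks:
    bizygomatic width divided by upper-face height."""
    width = abs(right_cheek[0] - left_cheek[0])   # cheekbone to cheekbone
    height = abs(brow[1] - upper_lip[1])          # upper lip to brow
    return width / height

# Illustrative pixel coordinates, not real data:
ratio = facial_whr(left_cheek=(40, 100), right_cheek=(180, 100),
                   upper_lip=(110, 60), brow=(110, 135))
print(round(ratio, 2))  # 140 / 75 -> 1.87
```

The point of such an "objective" metric is only that two raters with the same photograph should get the same number; it says nothing about what, if anything, the number predicts.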
Is this because of a more aggressive leadership style? Other studies have found a relationship between facial WHR and physical aggression (Carré & McCormick, 2008). Does this mean that successful CEOs are more likely to win bar fights (adjusted for age)? Or to spend a greater amount of time in the penalty box, so to speak?

Canadian researchers Carré and McCormick (2008) actually did find a correlation between facial WHR in hockey players and time spent in the penalty box, which was used as a proxy for physical aggressiveness. So should the most violent hockey players be the leaders of Fortune 500 companies? Perhaps, if they're companies with "cognitively simple" leadership teams,1 because the facial-financial link was stronger for CEOs of such firms.

Or not. Wong et al. (2011) conclude:

"In sum, our study has advanced leadership research by showing that objective facial metrics of male leaders, as well as the broader context in which these leaders make decisions, are closely related to organizational performance. Although men with high facial WHRs may be aggressive and untrustworthy in interpersonal interactions (Carré & McCormick, 2008; Stirrat & Perrett, 2010), our research suggests that, at a societal level, organizational success may compensate for individual transgressions..."

What Luscious Lips You Have

The above studies found significant physiognomic patterns in men, but these results did not hold for women. In contrast, a recent study (Brody & Costa, 2011)2 claimed that a female facial feature, prominence of the upper lip tubercle, correlated with...

... the ability to achieve vaginal orgasm!

Why would you ever propose such a thing?
The infamous Stuart Brody has an agenda, and it's that unprotected penile-vaginal sex is the only mature and worthwhile form of sex:

"A clinical observation (by the present senior author in discussion with colleagues) of an association between a novel visible marker (of likely prenatal origin) and enhanced likelihood of vaginal orgasm among coitally experienced women led to the hypothesis empirically tested in the present study. The hypothesis is that a more prominent tubercle of the upper lip is associated with vaginal orgasm (measured both as ever having had a vaginal orgasm, as well as vaginal orgasm consistency in the past month)."

Now Professor Brody, what sort of "clinical observation" led you to this fanciful idea? Oh I don't know, perhaps the same one that led you to propose that you can tell by the way she walks (see Scicurious, Dr. Isis, and Jezebel). For extensive critiques of the methodology used in these studies (e.g., definitions of various sexual activities, bias, self-selection, etc.), I recommend reading Dr Petra.

Back to the lip tubercle... Why the lip tubercle? Why not 2D:4D digit ratio, which is influenced by prenatal androgens? Brody and Costa:

"There is substantial variability in the degree to which the tubercle of the lip develops. Other than its mention in the basic anatomic literature and surgical literature (especially with regard to reconstruction of labial malformation or as part of a package of aesthetic modifications to the lips), we do not know of scientific literature on aspects of the tubercle of the lip that might directly impinge upon sexual function."

OK then, the idea was pulled out of a "clinical observation" hat.
Were there any other facial characteristics or bodily features that were examined but not found to correlate with penile-vaginal intercourse (PVI)?

Then we have the offensive speculation:

"...is possible that a flatter or absent tubercle might have something in common with the at times subtle lip abnormalities associated with subtle neuropsychologic abnormalities in marginal cases of fetal alcohol syndrome..."

Ladies! If you have a flat or absent tubercle, you're neuropsychologically and sexually abnormal!

And how was the tubercle defined? By the participants themselves, who looked in a mirror and interpreted the verbal definitions3 as they saw fit [91 of the 405 women who completed the online survey were excluded because they didn't have a mirror handy].... Read more »

  • October 31, 2011
  • 03:45 AM

Buried Alive!

by The Neurocritic in The Neurocritic

The pathological fear of being buried alive is called taphophobia1 [from the Greek taphos, or grave]. Being buried alive seems like a fate worse than death, the stuff of nightmares and horror movies and Edgar Allan Poe short stories. What could be pathological about such a fear? When taken to extremes, it can become a morbid, all-consuming obsession. In 1881, psychiatrist Enrico Morselli wrote about "two hitherto undescribed forms of Insanity" (English translation, 2001):

"As the result of some observations I have made in recent years, I propose to add two new and previously undescribed varieties to the various forms of insanity with fixed ideas, whose underlying phenomenology is essentially phobic. The two new terms I would like to put forward, following the nomenclature currently accepted by leading clinicians, are dysmorphophobia and taphephobia.

"The first condition consists of the sudden appearance and fixation in the consciousness of the idea of one’s own deformity; the individual fears that he has become deformed (dysmorphos) or might become deformed, and experiences at this thought a feeling of an inexpressible ansieta (anxiety). The second condition, taphephobia, consists of the sick person’s being plagued, at his approach to the time of his own death, by a fear of the possibility of being buried alive (taphe, grave), this fear becoming the source of a terribly distressing anguish. It is not necessary for me to give a very detailed description of these two new forms of rudimentary paranoia I have discovered and named, since in so doing I would only be repeating descriptions that have long been available among the many and varied forms of paranoia in books and the most important journals of psychiatry; instead, I shall limit myself to making some general comments on the conditions.
"The ideas of being ugly and of being buried whilst in a state of apparent death are not, in themselves, morbid; in fact, they occur to many people in perfect mental health, awakening however only the emotions normally felt when these two possibilities are contemplated. But, when one of these ideas occupies someone’s attention repeatedly on the same day, and aggressively and persistently returns to monopolize his attention, refusing to remit by any conscious effort; and when in particular the emotion accompanying it becomes one of fear, distress, anxiety and anguish, compelling the individual to modify his behaviour and to act in a pre-determined and fixed way, then the psychological phenomena have gone beyond the bounds of normal, and may validly be considered to have entered the realm of psychopathology."

Dysmorphophobia has come to be known as body dysmorphic disorder, a preoccupation with perceived defects in one's appearance (Buhlmann & Winter, 2011). Although taphophobia seems irrational now with modern definitions of brain death,2 it was a more prevalent (and realistic) fear in the 19th century. "Safety coffins" with air tubes, bells, flags, and burning lamps were a booming business. However, these contraptions failed to assuage an inventor with severe taphophobia (Dossey, 2007):

"One of the most popular safety devices in Victorian England was the Bateson Revival Device, invented by George Bateson, who made a fortune in sales. The gadget came to be known as Bateson’s Belfry. It consisted of an iron bell mounted on the coffin lid just above the deceased’s head, with a cord connected to the hand “such that the least tremor shall directly sound the alarm.” Ironically, his invention did nothing to relieve his own all-consuming fear of premature burial. In 1886, driven mad by his dread, he committed suicide by dousing himself with linseed oil and setting himself on fire."

Would you rather burn to death or suffocate in a coffin? Excruciating physical pain vs.
sheer panic,3 bloodied limbs, and mental anguish? Not a pleasant choice.

Footnotes

1 Also spelled taphephobia.

2 Which are still controversial, nonetheless (Teitelbaum & Shemi, 2011).

3 Well, not if you're The Bride in Kill Bill Vol. 2.

References

Buhlmann U, Winter A. (2011). Perceived ugliness: an update on treatment-relevant aspects of body dysmorphic disorder. Curr Psychiatry Rep. 13:283-8.

Dossey L. (2007). The undead: botched burials, safety coffins, and the fear of the grave. Explore (NY). 3:347-54.

Morselli, E., & Jerome, L. (2001). Dysmorphophobia and taphephobia: two hitherto undescribed forms of Insanity with fixed ideas. History of Psychiatry, 12(45), 103-107. DOI: 10.1177/0957154X0101204505 [Introduction]

Morselli, E. (2001). Dysmorphophobia and taphephobia: two hitherto undescribed forms of Insanity with fixed ideas. History of Psychiatry, 12(45), 107-114. DOI: 10.1177/0957154X0101204506 [Translation of original Italian]

Teitelbaum J, Shemi SD.... Read more »

  • October 23, 2011
  • 03:44 AM

Activation of the Hate Circuit While Reading 'Depression Uncouples Brain Hate Circuit'

by The Neurocritic in The Neurocritic

A recent article published in Molecular Psychiatry has the curious title, 'Depression uncouples brain hate circuit' (Tao et al., 2011). Hate circuit, you ask? Is there really any such thing? Is the existence of a distinctive brain circuit for hate so well-established that we ought to go about including it in the title of our papers? And what does it mean for this circuit to be uncoupled in depression? That depressed people no longer have coherent feelings of hatred?

The current article refers to the one prior fMRI study on the topic, which examined the 'Neural correlates of hate' (Zeki & Romaya, 2008). The 17 participants were chosen because they expressed intense hatred for a particular individual. Sixteen people hated an ex-lover or a competitor at work, and one person hated a famous political figure (see Hate On Halloween for details of that study). Participants viewed pictures of a person they hate, and the resultant BOLD signal changes were compared to when they viewed pictures of neutral people.

And the groundbreaking hypothesis? Love and hate might be represented by different brain states! Who knew?

"We hypothesized that the pattern of activity generated by viewing the face of a hated person would be quite distinct from that produced by viewing the face of a lover."

The results identified 7 regions that were significantly more active for the Hated Face condition than for the Neutral Face condition. The flaming figure above illustrates a few of them, including the medial frontal gyrus [the anterior cingulate cortex (ACC) and the pre-SMA], the right putamen, and bilateral premotor cortex. The other regions were the frontal pole and our friend, bilateral insula [activated in all sorts of conditions from speech to working memory to reasoning to pain to disgust to the allure of Chanel No. 5 and "love" of iPhones].
An additional correlation analysis related degree of hatred to level of activation across 5,225 voxels (using an uncorrected statistical threshold of p≤0.01) and found three regions to be most related: right insula, right premotor cortex, and right ACC.

These results set the stage for the current study by Tao et al. (2011), which compared the resting state functional connectivity patterns between controls and severely depressed individuals. The "resting state" or "default mode network" (DMN) is the brain activity observed when there is no active task (Raichle et al., 2001). In other words, the participants are free to daydream about their lover or to think about dinner or to remember the amusing movie from last night or to focus on feelings of despair. A specific group of brain regions has been identified as the DMN, and these are deactivated when participants have to perform a demanding cognitive or perceptual task.

A new feature of the present study was the community mining algorithm used to determine coherent resting state networks among the 90 regions of interest (ROIs). First, a template was formed based on data from 37 healthy controls. Then the network connectivity for the control template was compared to that of two depressed groups: 15 unmedicated first-episode major depressive disorder (FEMDD) patients and 24 resistant major depressive disorder (RMDD) patients.

The 6 "communities" or resting state networks are illustrated below. Note that RS1/DMN (red in the top figure) isn't identical to the typical DMN (orange in the bottom figure).

Top: Adapted from Fig 1C (Tao et al., 2011). Left: Medial view of the surface of the brain. Right: The lateral view of the surface of the brain. Different colors represent different communities. Bottom: Adapted from Buckner et al. (2008).
One big difference between the two schemes is that the dorsal ACC/SMA active task network (blue in the bottom left figure) is part of the DMN in Tao et al.'s community structure (red in the top left figure).

But wait, where is the 'hate circuit'?? It emerged with some bizarre post hoc hand waving:

"It can be seen from Figures 2, 3 and 4 that the strongest evidence for reduced connectivity compared with control subjects in both FEMDD and RMDD is that between the insula and putamen in both brain hemispheres (s=0.4 and 0.25 for FEMDD and RMDD, respectively). Additionally, the link between the left superior frontal gyrus and the right insula is also reduced (s=0.2991 and 0.2658). Thus, the links between the three main components of the ‘hate circuit’ have become largely uncoupled."

OK, I thought Zeki's 'hate circuit' included bilateral premotor cortex and the frontal pole, plus the putamen only in the right hemisphere. But somehow, the community mining algorithm determined that the insula [part of RSN4 - auditory network] has links with the putamen [RSN6 - subcortical network] and the dorsal superior frontal gyrus [RSN1 - DMN, and perhaps not the same area as in Zeki & Romaya] only in controls but not in the depressed participants. And this set of links in controls comprises the 'hate circuit' and nothing else.

Adapted from Fig 4a (Tao et al., 2011). The common links of the first-episode major depressive disorder (FEMDD) and resistant major depressive disorder (RMDD) networks. Red lines are links that appear in depression network only while blue lines are links that appear in n... Read more »

Tao, H., Guo, S., Ge, T., Kendrick, K., Xue, Z., Liu, Z., & Feng, J. (2011) Depression uncouples brain hate circuit. Molecular Psychiatry. DOI: 10.1038/mp.2011.127  

  • October 11, 2011
  • 06:27 AM

Rising Mortality Rates for People with Serious Mental Illness

by The Neurocritic in The Neurocritic

Fig 1 (Hoang et al., 2011). Trend in standardised 365 day all cause mortality ratio for all people discharged from hospital with principal diagnosis of bipolar disorder or schizophrenia.

The "mortality gap" is the differential between the mortality rates for the general population and for persons with serious mental illness (schizophrenia and bipolar disorder). A new study from England examined hospital records for psychiatric patients discharged between 1999 and 2006, and determined how many had died within one year (Hoang et al., 2011). The authors expected to see a drop in the mortality gap over time due to government programs:

"Over the past decade several strategies have been implemented in England and Wales aimed at reducing the mortality gap between people with serious mental illness and the general population, including those to address deliberate self harm and to reduce suicide (7 8 9), to decrease smoking (10 11 12), alcoholism, and drug misuse (13 14) and to deal with other lifestyles associated with increased mortality (15 16). Recent studies have suggested that the rate of suicide has been stabilising among people with mental disorders as a whole (17 18 19 20 21); however, trends in mortality for people with schizophrenia or bipolar disorder remain poorly characterised, particularly the relative contributions of natural and unnatural causes. The United Kingdom government’s recent mental health strategy states that “more people with mental health problems will have good physical health” as one of its objectives, specifically stating that “fewer people with mental health problems will die prematurely” (22). It is therefore timely to review the level of and trends in these recognised inequalities."

However, as illustrated in Fig. 1 above, the opposite trend was observed, with increased mortality for those with schizophrenia and bipolar disorder.
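As a quick refresher on the metric in Fig. 1: a standardised mortality ratio (SMR) is the number of deaths actually observed in a cohort divided by the number expected if the cohort had experienced the reference population's age-specific death rates (indirect standardisation). A minimal sketch with made-up numbers (not the paper's data):

```python
def smr(observed_deaths, person_years_by_age, population_rates_by_age):
    """Standardised mortality ratio: observed deaths divided by the
    deaths expected under the reference population's age-specific
    death rates (indirect standardisation)."""
    expected = sum(person_years_by_age[age] * population_rates_by_age[age]
                   for age in person_years_by_age)
    return observed_deaths / expected

# Hypothetical cohort: person-years at risk in two age bands, and the
# general population's annual death rates per person-year in each band.
person_years = {"45-64": 2000.0, "65-84": 1000.0}
rates = {"45-64": 0.005, "65-84": 0.030}
print(smr(80, person_years, rates))  # 80 observed vs 40 expected -> 2.0
```

An SMR of 1.0 means the cohort dies at the population rate; a value of 2.0, like the hypothetical one above, means twice as many deaths as expected.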
The standardized mortality ratios show a rise from ~30-60% greater than the general population to about double the population average:

"For people discharged with schizophrenia, the ratio was 1.6 in 1999 and 2.2 in 2006 (P<0.001 for trend). For bipolar disorder, the ratios were 1.3 in 1999 and 1.9 in 2006 (P=0.06 for trend). Ratios were higher for unnatural than for natural causes. About three quarters of all deaths, however, were certified as natural, and increases in ratios for natural causes, especially circulatory disease and respiratory diseases, were the main components of the increase in all cause mortality."

These results are alarming (but not new, unfortunately) and similar to those reported by Chang et al. (2011) - see Improving the Physical Health of People With Serious Mental Illness. In that post, I mentioned the possible role of "second generation" or atypical antipsychotics, which can cause substantial weight gain and hence diabetes, hypertension, cardiovascular problems, high cholesterol, and stroke. To counteract these serious side effects, a regular part of mental health treatment should include programs that promote better physical health: smoking cessation and nutritionists and structured exercise classes in addition to standard psychiatric care and substance abuse treatment. For example, a six month intervention pilot study enrolled 63 overweight participants at psychiatric rehabilitation day programs and showed promising initial results (Daumit et al., 2010).

These concerns were mentioned earlier in a systematic review of the literature by Saha et al. (2007), who urged immediate action: “in light of the potential for second-generation antipsychotic medications to further adversely influence mortality rates . . . optimizing the general health of people with schizophrenia warrants urgent attention.”

References

Chang CK, Hayes RD, Perera G, Broadbent MT, Fernandes AC, Lee WE, Hotopf M, Stewart R. (2011).
Life expectancy at birth for people with serious mental illness and other major disorders from a secondary mental health care case register in London. PLoS ONE 6(5):e19590.

Hoang, U., Stewart, R., & Goldacre, M. (2011). Mortality after hospital discharge for people with schizophrenia or bipolar disorder: retrospective study of linked English hospital episode statistics, 1999-2006. BMJ, 343:d5422. DOI: 10.1136/bmj.d5422

Daumit GL, Dalcin AT, Jerome GJ, Young DR, Charleston J, Crum RM, Anthony C, Hayes JH, McCarron PB, Khaykin E, Appel LJ. (2010). A behavioral weight-loss intervention for persons with serious mental illness in psychiatric rehabilitation centers. Int J Obes (Lond). 35(8):1114-23.

Saha S, Chant D, McGrath J (2007). A systematic review of mortality in schizophrenia: is the differential mortality gap worsening over time? Arch Gen Psychiatry 64:1123-31.

~~~~~~~~~~~

October 10 was World Mental Health Day, an event designed to raise public awareness of mental health issues:

This year the theme is "Investing in mental health". Financial and human resources allocated for mental health are inadequate espec... Read more »

  • October 6, 2011
  • 12:19 PM

New York Times on Addiction and The Insula

by The Neurocritic in The Neurocritic

In Clue to Addiction, Brain Injury Halts Smoking
By BENEDICT CAREY
Published: January 26, 2007

"Scientists studying stroke patients are reporting today that an injury to a specific part of the brain, near the ear, can instantly and permanently break a smoking habit. People with the injury who stopped smoking found that their bodies, as one man put it, “forgot the urge to smoke.”

"The finding, which appears in the journal Science, is based on a small study [Naqvi et al., 2007]. But experts say it is likely to alter the course of addiction research, pointing researchers toward new ideas for treatment.

"While no one is suggesting brain injury as a solution for addiction, the finding suggests that therapies might focus on the insula, a prune-size region under the frontal lobes that is thought to register gut feelings and is apparently a critical part of the network that sustains addictive behavior."

Hey, wait a minute! Didn't the NYT just publish an authoritative piece to the contrary? In You Love Your iPhone. Literally., Martin Lindstrom claimed the insula was a signifier of love and compassion, not addiction:

"WITH Apple widely expected to release its iPhone 5 on Tuesday, Apple addicts across the world are getting ready for their latest fix.

"But should we really characterize the intense consumer devotion to the iPhone as an addiction? A recent experiment that I carried out using neuroimaging technology suggests that drug-related terms like “addiction” and “fix” aren’t as scientifically accurate as a word we use to describe our most cherished personal relationships. That word is “love.”

". . .

"But most striking of all was the flurry of activation in the insular cortex of the brain, which is associated with feelings of love and compassion. The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member."
OK, OK, we all know by now that royal proclamations of brain function based on logical fallacies and unpublished (and never-to-be-peer-reviewed) commercial studies are not to be believed. NYT did publish a retort to this silliness, a Letter to the Editor (The iPhone and the Brain) in which "Forty-five neuroscientists respond to a recent Op-Ed about using brain imaging to analyze our attachment to digital devices." We also know that the insula is activated in a substantial percentage of all neuroimaging studies (Yarkoni et al. 2011; PDF). Reflecting this ubiquity, The Neurocritic blog archive contains 73 unique posts with the word "insula."

But what of addiction and the insula? In their 2007 Science paper, Naqvi and colleagues performed a retrospective study of 69 stroke patients (all smokers): 19 with lesions in the insula and 50 with lesions elsewhere. The color coding in the figure below depicts the number of individuals with damage in specific brain regions.

Fig. 1 (Naqvi et al., 2007). Number (N) of patients with lesion in each of the regions identified in this study, mapped onto a reference brain. Boundaries of anatomically defined regions are drawn on the brain surface. Regions not assigned a color contained no lesions. (Top) All patients. The horizontal line marks the transverse section of the brain shown in the top row. The vertical line marks the coronal section shown in the bottom row. (Middle) Patients with lesions that involved the insula. (Bottom) Patients with lesions that did not involve the insula.

The likelihood of post-stroke smoking cessation did not differ between the insula and non-insula groups, but those with insula lesions who did quit smoking reported that it was easy to do so.
The authors concluded that...

"...smokers with brain damage involving the insula, a region implicated in conscious urges, were more likely than smokers with brain damage not involving the insula to undergo a disruption of smoking addiction, characterized by the ability to quit smoking easily, immediately, without relapse, and without persistence of the urge to smoke."

The problem with this assertion is that it relies on memory for events that occurred an average of 8 yrs earlier, which could be subject to recall bias (Vorel et al., 2007). A better design would be a prospective study that follows patients from the time of stroke and then assesses subsequent smoking behavior. In fact, Bienkowski et al. (2010) performed such a study and failed to see a difference between their insula and non-insula groups at a 3 month follow-up. This suggests that the insula does not play a special role in addiction.

What does this mean for iPhone love? Is Lindstrom right? Unlikely! He would have to demonstrate that insular strokes cause an inability to feel love for iPhones (or anything else, for that matter). Such a finding would suggest that an intact insula is necessary for the experience of love and compassion, and that the activity in his fMRI experiment was not a mere epiphenomenon. In the real world of peer-reviewed neuroimaging research, however, that sort of converging evidence is rarely obtained.

Further Reading

NYT Editorial + fMRI = complete crap
the New York Times blows it big time on brain imaging
Neuromarketing means never having to say you're peer reviewed (but here's your NYT op-ed space)
fMRI Shows My Bullshit Detector Going Ape Shit Over iPhone Lust ...and pollyannaish comment by Martin Lindstrom
NYT Letter to the Editor: The uncut version
Articles on insular cortex from The Amazing World of Psychiatry: A Psychiatry Blog
The Insula Is The New Black...
No Longer an Island, the Insula Is Now a Hub of High Fashion

References

Bienkowski P, Zatorski P, Baranowska A, Ryglewicz D, Sienkiewicz-Jarosz H. (2010). Insular lesions and smoking cessation after first-ever ischemic stroke: a 3-month follow-up. Neurosci Lett. 478:161-4.... Read more »

Naqvi, N., Rudrauf, D., Damasio, H., & Bechara, A. (2007) Damage to the Insula Disrupts Addiction to Cigarette Smoking. Science, 315(5811), 531-534. DOI: 10.1126/science.1135926  

  • September 25, 2011
  • 05:40 AM

The Neurophysiology of Pain During REM Sleep

by The Neurocritic in The Neurocritic

In the last post, we learned about The Phenomenology of Pain During REM Sleep. Real life pain can intrude into dreams, as was shown for experimentally induced pain (Nielsen et al., 1993) and in hospitalized burn patients (Raymond et al., 2002). In this post we'll hear about a fascinating experiment that recorded laser evoked potentials directly from the brains of epilepsy patients who were being surgically monitored for seizures (Bastuji et al. 2011).

Only under rare circumstances can intracranial electrodes be placed in the brains of humans, and the current study had the unique opportunity to record from three major pain regions simultaneously: the posterior insula (Brodmann area 13), the parietal operculum (somatosensory area S2), and the mid-anterior cingulate cortex (BA 24). These areas comprise the so-called "Pain Matrix"1 (PM), or network of cortical structures that respond consistently to noxious mechanical or thermal stimuli. The lateral structures of the PM (posterior insula and suprasylvian operculum) are thought to subserve intensity coding and localization of pain inputs, while the medial PM system (anterior and mid-cingulate cortex) is linked to the attentional (orienting and arousing) components of pain.

In the present study, Bastuji et al. (2011) recorded laser evoked potentials (LEPs) from these brain regions during different stages of sleep, as well as while the patients were awake. LEPs are a specific type of EEG response time-locked to the application of painful laser heat stimuli. When recorded from the scalp, a sequence of three LEPs is generated in rapid succession, within the first 400 milliseconds after laser stimulation. As described in a review by Plaghki and Mouraux (2005):

"Laser heat stimulators selectively activate Aδ and C-nociceptors ["pain receptors"] in the superficial layers of the skin.
"Their high power output produces steep heating ramps, which improve synchronization of afferent volleys and therefore allow the recording of time-locked events, such as laser-evoked brain potentials. Study of the electrical brain activity evoked by Aδ- and C-nociceptor afferent volleys revealed the existence of an extensive, sequentially activated, cortical network."

The advantage of recording intracranial LEPs is that you know precisely when the pain-related activity occurred, as well as where the brain response was located (unlike with standard EEG). Two major components were observed: Component 1 (C1), peaking at ~200 ms post-stimulus, and Component 2 (C2), peaking at ~300 ms. Because the components were of varying polarities depending on brain region, they weren't labelled according to the customary N2/P2 as seen on the scalp. Of primary interest was what happened to these components during Stage 2 sleep and REM sleep (see Fig. 3A below).

Figure 3A (modified from Bastuji et al. 2011). Grand average LEPs in referential recording mode during wakefulness, sleep stage 2, and paradoxical sleep in the operculum (bottom), the insula (middle), and the mid-anterior cingulate (top). Traces recorded by the electrode contact yielding the largest amplitudes are superimposed on those from the adjacent contact. On the left part of the figure, for each structure, the coordinates of the contacts where the maximal amplitudes of the C1–C2 components were recorded are indicated on mean sagittal MRIs.

Typically, painful stimuli at the nociceptive threshold will cause awakening ~30% of the time. In this study, the stimulus intensity of the laser2 was set individually in each participant to be slightly above pain threshold. C1 and C2 decreased in amplitude in all three brain regions during Stage 2 sleep, relative to wakefulness.
During REM sleep, however, both components remained stable in amplitude (relative to Stage 2) in the operculum and insula, but they decreased dramatically in the cingulate. Recall that the medial mid-anterior cingulate cortex (ACC) is associated with the attentional and affective components of pain, while the lateral opercular and insular cortices are more related to the sensory aspects of pain. The authors suggest that this dissociation between the lateral and medial pain systems is what allows the experience of pain in dreams without being alerted enough to wake up. The fact that larger mid-ACC LEPs can predict when motor responses to pain will occur supports this interpretation.

CODA (Notes from an Actual Pain Dream)

Lately I've had a painful orthopedic issue (in real life). I also have a cat who is fond of lying on my legs at night, which is not comfortable at all under the circumstances. Yesterday morning, I had a terrible nightmare in which my real life leg pain was projected onto someone else in an exceptionally gruesome way.

I was driving along an unknown neighborhood street when suddenly a man appeared in front of my car. It wasn't clear if he was on the hood or on the trunk of the car ahead of me or suspended in the air in a dream-like way. At any rate, if that wasn't bad enough, he pulled up the body of a man who had fallen under my car and had both his legs amputated from being run over -- one leg was amputated below the knee, the other was at the hip. The gravely injured man was still alive. I was absolutely horrified. All I could do is say "oh my god oh my god oh my god" over and over. At some point my car rolled backward down a steep hill and the other motorists behind me were exclaiming "oh my god oh my god" as well.

It was an awful nightmare, and in the dream I was quite traumatized by the entire experience. Did I feel excruciating pain when I woke up?
No, not really, just the usual ache.

Further Reading

LEPs and pain perception can be reduced while looking at one's own hand or at beautiful artwork:

It Hurts Less When I Can See It
Pain & Paintings: Beholding Beauty Reduces Pain Perception and Laser Evoked Potentials

Footnotes

1 "So-called" because the Pain Matrix might not be that specific to nociception after all (Iannetti & Mouraux, 2010).

2 Laser pulses were delivered to the back of the hand opposite to the hemisphere with the implanted electrodes.

References... Read more »

  • September 18, 2011
  • 05:26 AM

The Phenomenology of Pain During REM Sleep

by The Neurocritic in The Neurocritic

Coarse — Pain in Dreams

Have you ever felt pain in dreams? I have. Once I dreamed I was lying on my stomach, getting a tattoo on my calf against my will. Because it was a particularly malevolent tattoo studio, I cried out in the dream. When I woke up, I felt no pain at all. It was false, a figment of the Pain Matrix. Another time a monkey bit me on the arm. Once again, the pain vanished upon awakening.

I think these examples of what I'll call "fake pain" are unusual. More common are instances when you get a calf cramp or have pins and needles in your arm while sleeping, and this real life pain gets incorporated into dreams about tattoos or monkey bites. But even these possibilities have been discounted as unlikely, because of limitations on which sensory modalities can be represented in dreams (Nielsen et al., 1993):

One possibility is that pain is beyond the representational capability of image formation processes -- that neither pain memories nor pain images are reproducible in the dreaming mode. A second possibility is that the sensory systems that might contribute to the representation of pain imagery are not functional during dreaming. This possibility is consistent with the finding that the high threshold polysynaptic afferent fibers that conduct pain sensations are actively inhibited during REM sleep in cats.

But plenty of people have reported feeling pain in dreams, so why construct hypotheses about why it's impossible? So skeptics Tore A. Nielsen and three fellow psychology graduate students, along with an undergrad art therapy student, conducted experiments on themselves in a 1993 paper. They inflated a blood pressure cuff above the knee of their colleagues 5 min into a bout of REM sleep1 [to produce ischemia of the leg muscles, i.e. pins and needles or paralysis]. Results indicated that pain sensations occurred in 13 out of 42 stimulation trials with usable dream reports (31%).
In contrast, only one of the 21 non-stimulated control dreams contained a reference to pain (4.8%). Many of the dreams were realistic and took place in a sleep lab-like setting. Others were more fantastic; one was set at a rodeo, another at a dance party in a barn [the authors lived in Montreal]. Some were lucid2, like the "ugly shoe" dream:

I'm in a small store trying on a pair of ugly shoes. I started walking. Then I staggered forward because I was waking up and not fully conscious. You were laughing at me. I said "come on, its not funny, I'm trying to wake up!" This is the second or third time I've been trying to wake up.

Some of the participants were more likely to experience pain dreams than others. Subject B, who reported pain dreams on 70% of the stimulation trials, had knee surgery a few years prior and still felt numbness or tingling sensations on occasion. Most of the time, the pain sensations occurred in the appropriate leg for all participants. Interestingly, the "crampy pressure", "tingling", or "hurting a bit" sensations felt upon awakening were much less intense than those that occurred during the dream.

When interpreting these subjective reports, one has to consider an expectation or priming effect, since all the students were focused on dream research, with extensive experience in the sleep lab. However, this was not the case in a study of 28 hospitalized burn patients (Raymond et al., 2002). Obviously, the severity of suffering in burn patients is intense and chronic, unlike having temporary "pins and needles" in your leg. Over a period of 5 days, pain dreams comprised 30% of all reported dreams, which is quite comparable to the artificial BP cuff study. The patients who reported pain dreams (39%) had more nightmares, worse sleep quality, and more post-traumatic stress symptoms. The other 61% of the patients did not have any pain dreams.
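Is the gap between 13/42 stimulated trials and 1/21 control dreams more than chance? A quick two-proportion z-test on the published counts suggests yes (this is my own back-of-the-envelope check, not an analysis from Nielsen et al.):

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Pain reports: 13 of 42 cuff-stimulation trials vs. 1 of 21 control dreams
z = two_prop_z(13, 42, 1, 21)
print(f"z = {z:.2f}")   # z ≈ 2.36, beyond the 1.96 cutoff for two-tailed p < .05
```

With such small cell counts an exact test would be the more careful choice, but the rough z statistic makes the point that the stimulation effect is unlikely to be noise.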
Why?

What sort of neurophysiological activity can account for painful sensations that are experienced during REM sleep? We'll find out in the next post.

Footnotes

1 It wasn't clear how they monitored for REM, since EEG methods were not described. However, the transcript of one dream suggested that EEG was in fact recorded:

Then I was trying to get comfortable on the bed. All the electrodes but one for the EEG had fallen off; the others were dangling free.

The dream transcript continues:

You said that this was too bad. I had tossed around in bed trying to get comfortable. It was really cold and hurt my backside. There was almost no mattress; I was on a board. I was saying to you that we had hit rock bottom in this bed.

The interesting part about this segment is that there was no BP cuff applied; out of 14 dreams this was the only one without external stimulation (kind of like my "fake pain" dreams).

2 The subject was aware they were dreaming and tried to control the action.

References

Nielsen TA, McGregor DL, Zadra A, Ilnicki D, & Ouellet L. (1993). Pain in dreams. Sleep, 16(5), 490-8. PMID: 7690981

Raymond I, Nielsen TA, Lavigne G, & Choinière M. (2002). Incorporation of pain in dreams of hospitalized burn victims. Sleep, 25, 765-70.

... Read more »

Nielsen TA, McGregor DL, Zadra A, Ilnicki D, & Ouellet L. (1993) Pain in dreams. Sleep, 16(5), 490-8. PMID: 7690981  

  • September 7, 2011
  • 07:07 AM

Chronic Ketamine for Depression: An Unethical Case Study?

by The Neurocritic in The Neurocritic

A year ago, Ketamine for Depression: Yay or Neigh? covered acute administration of the club drug (and dissociative anesthetic) ketamine for rapid (albeit transient) relief of major depression. That post was part of a blog focus on hallucinogenic drugs in medicine and mental health, organized by Nature editor Noah Gray following publication of a review article on The neurobiology of psychedelic drugs: implications for the treatment of mood disorders. At the time, I wrote:

Although the immediate onset of symptom amelioration gives ketamine a substantial advantage over traditional antidepressants (which take 4-6 weeks to work), there are definite limitations (Tsai, 2007). Drawbacks include the possibility of ketamine-induced psychosis (Javitt, 2010), limited duration of effectiveness (aan het Rot et al., 2010), potential long-term deleterious effects such as white matter abnormalities (Liao et al., 2010), and an inability to truly blind the ketamine condition due to obvious dissociative effects in many participants.

At present, what are the most promising uses for ketamine as a fast-acting antidepressant? Given the disadvantages discussed above, short-term use for immediate relief of life-threatening or end-of-life depressive symptoms seems to be the best indication.

For the past few weeks, I've been wanting to do a follow-up post that looks at the ups and downs of the mTOR (mammalian target of rapamycin) protein kinase pathway, which is rapidly activated by ketamine. Although activation of mTOR leads to the beneficial effect of increased synaptogenesis in the medial prefrontal cortex (Li et al., 2010), it can also cause accelerated tumor growth, as recently noted by Yang et al., 2011 ("Be prudent of ketamine in treating resistant depression in patients with cancer").
However, I've been unable to complete this planned post, specifically because the topic of ketamine use in palliative care settings is something I wrote about last year, while watching my father die of cancer.

More recently, an open label study in two hospice patients, each with a prognosis of only weeks or months to live, showed beneficial effects of ketamine in the treatment of anxiety and depression (Irwin & Iglewicz, 2010). A single oral dose produced rapid improvement of symptoms and improved end of life quality. To be blunt, the possibility of accelerated tumor growth is not an issue in terminal patients.

In terms of medical ethics, it's easier for me to take a different angle and address the unusual case of a grievously and chronically depressed patient (Messer & Haller, 2010). An anonymous reader alerted me to this paper, which isn't indexed in PubMed. The case history is as follows:

In January 2008, a 46-year-old female with MDD was hospitalized for a course of electroconvulsive therapy (ECT). Successive interventions over 15 years had included trials of 24 psychotropic medications and 273 ECT treatments, 251 of which were bilateral [which can produce significant amnesia]. No intervention had produced remission but only a short-lived response to treatment...

ECT during this admission was administered with ketamine as the anesthetic at 2 mg/kg given over 60 seconds. Surgical anesthesia occurred ~30 seconds after the end of intravenous injection and lasted ~10 minutes. There was no significant change in depression symptoms with the ketamine used as an anesthetic during the ECT treatment. Alternative treatments were reviewed for potential use. In addition to no significant recovery from her depression, the long-term use of ECT caused problems with memory loss and focused attention. She was unable to remember much of her history over the previous 15 years.
Re-learning the information became futile since each course of ECT would eliminate what had been gained.

I'm not going to weigh in here on ECT, beyond saying that it can be beneficial in some intractable patients [with fewer amnestic effects if unilateral]. But here we have an individual with profound ECT-induced amnesia who, although giving informed consent, was then treated with a highly unorthodox regimen of repeated ketamine infusions. The majority of registered clinical trials administer a single dose of ketamine, with one trial administering 5 additional ketamine infusions over a 2-week period. Relapse typically occurs within a week after a single dose. On the other hand, Dr. Messer's clinical trial, Ketamine Frequency Treatment for Major Depressive Disorder, was withdrawn prior to enrollment because a pilot study determined the trial would not be feasible. The planned regimen was 6 injections every other day for 12 days. But the actual treatment given to the 46 yr old woman was much more extensive: 22 doses over 4 months, followed by 21 doses over 1 yr (approximately):

The first ketamine treatment led to a dramatic remission of depressive symptoms: the Beck Depression Inventory (BDI) score decreased from 22 to 6 (Figure). Three additional infusions administered every other day over 5 days produced remission lasting 17 days after the last infusion in this series. Three series of six ketamine infusions given every other day except weekends were repeated over the next 16 weeks (Figure). Each infusion sequence produced remission lasting 16, 28, and 16 days, respectively, followed by a relapse. After three remission/relapse cycles and before relapse could occur after the fourth infusion series, a maintenance ketamine regimen was established on August 27, 2008 using 0.5 mg/kg IBW at a 3-week inter-dose interval. The authors' estimation for the maintenance dosing interval was based on the time frame between remission and relapse for this patient.
Relapse to depression was prevented by treating prior to the onset of a relapse.

First, I was struck by the starting BDI score of 22, which falls within the low end of moderate depression, with scores of 29-63 indicating severe depression. I don't want to question Dr. Messer's clinical diagnosis of the patient, but I would guess that a typical BDI-II score of 22 might not call for drastic measures. But perhaps the original BDI was used, in which case 19-29 indicates moderate-severe depression (which is still not severe). Second, the number of infusions went well beyond what has been established as safe, particularly in the context of treatment-resistant depression.

What were the cognitive effects? We don't really know, because there was no formal testing:

As shown in the Figure, with maintenance infusions the patient has been in remission for >15 months. No concurrent pharmacotherapeutic agents have been administered or required during this time period, no adverse events have emerged, and there has been no cognitive impairment as is typical with ECT, polypharmacy, or from MDD itself.

What we do know is that ketamine is cost-effective relative to ECT:

The cost and personnel needed for a ketamine treatment are far less than that of... Read more »

Messer M, Haller IV (2010). Maintenance Ketamine Treatment Produces Long-term Recovery from Depression. Primary Psychiatry, 48-50.

  • August 28, 2011
  • 04:21 AM

Drug Trials in 'At Risk' Youth

by The Neurocritic in The Neurocritic

Is it ethical to medicate healthy teenagers "at risk" of developing psychosis to prevent a symptom that may not occur? One such clinical trial in Australia was recently stopped before it could even begin:
Drug trial scrapped amid outcryJill Stark
August 21, 2011

FORMER Australian of the Year Patrick McGorry has aborted a controversial trial of antipsychotic drugs on children as young as 15 who are "at risk" of psychosis, amid complaints the study was unethical.

The Sunday Age can reveal 13 local and international experts lodged a formal complaint calling for the trial not to go ahead due to concerns children who had not yet been diagnosed with a psychotic illness would be unnecessarily given drugs with potentially dangerous side effects.

Quetiapine, sold as Seroquel, has been linked to weight gain and its manufacturer AstraZeneca, which was to fund the trial, last month paid $US647 million ($A623 million) to settle a lawsuit in the US, alleging there was insufficient warning the drug may cause diabetes.

Dr. McGorry works at Orygen Youth Health in Parkville (near Melbourne) and is a proponent of early interventions for treating mental illness and substance abuse (McGorry et al., 2011). In discussing psychotic disorders, these authors say:
The importance of timely treatment initiation has been further underscored by new data from the Treatment and Intervention in Psychosis (TIPS) project showing that early treatment had positive effects on clinical and functional status at 2-year and 5-year follow-up in first episode psychosis. These studies showed that reducing the duration of untreated psychosis has longer-term effects on the course of negative symptoms, depressive symptoms, cognitive symptoms and social functioning, suggesting the possibility of secondary prevention of these pathologies in first-episode schizophrenia.

But how about prevention, as opposed to early intervention? Are researchers and clinicians able to predict (with reasonable accuracy) who will develop schizophrenia? The scrapped clinical trial intended to see whether quetiapine would decrease or delay the risk to 15-40 yr old participants who showed "early signs" of developing a psychotic disorder. What are these early signs and prodromal symptoms (Mechelli et al., 2011)?
...a gradual deterioration of global and social functioning and the emergence of attenuated psychotic symptoms. However, not all people with these features progress to develop a full-blown psychotic disorder; 20% to 50% develop psychosis, usually within 24 months, but the remainder do not. Individuals first seen with this clinical syndrome are, thus, said to be at ultra-high risk (UHR) for psychosis.

So anywhere from 50% to 80% of those showing prodromal symptoms and labeled ultra-high risk do not develop a psychotic disorder such as schizophrenia. Can neuroimaging improve this crude level of prediction? Ultimately, disordered thinking, delusions, and hallucinations arise from the brain -- right? -- so we should be able to see some abnormality on an MRI scan. A multi-site study enrolled 182 individuals at UHR for psychosis and 167 healthy controls. Voxel-based morphometry was used to quantify whole brain gray matter volumes, as well as 3 specific regions of interest (ROIs): the left parahippocampal gyrus in the medial temporal lobe (important for memory), the right inferior frontal gyrus (important for attention and cognitive control), and the left superior temporal gyrus (which may be implicated in auditory verbal hallucinations). Two years later, 48 UHR participants (26%) developed psychosis and the others did not.

Compared to controls, all subjects in the UHR group showed gray matter reductions in medial frontal regions, so this result was not predictive of whether full-blown psychosis would occur. Within the UHR group, however, a tiny region in the left anterior parahippocampal gyrus (6 voxels) differed between the UHR individuals who later developed psychosis and those who did not. There were no volume reductions in the other two ROIs.

Figure 2 (adapted from Mechelli et al., 2011). Differences between ultra-high-risk (UHR) individuals who did (UHR-T) and did not (UHR-NT) develop psychosis. The UHR-T individuals had less gray matter volume than did the UHR-NT individuals in the left parahippocampal gyrus, bordering the uncus (MNI [Montreal Neurological Institute] coordinates x, y, and z: –21, 6, and –27, respectively). For visualization purposes, effects are displayed at P < .05 uncorrected.

So basically, only 6 voxels in the entire brain were capable of predicting whether or not a patient at ultra-high risk for psychosis will indeed be one of the 26% to progress to a clinically significant psychotic disorder. To examine the actual diagnostic accuracy, the authors then took gray matter volumes from the peak voxel, performed cross-validation analyses using a predictive linear model, and determined that the average predictive accuracy was only 62% (sensitivity = 61% and specificity = 65%). This means the single voxel measure missed a diagnosis in 39% of those who would later develop psychosis, while incorrectly predicting that 35% of the UHR individuals who remained healthy would become psychotic.
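To see what those figures would mean at the bedside, Bayes' rule converts sensitivity, specificity, and the study's 26% conversion rate (48/182) into the probability that a UHR individual flagged by the scan actually goes on to develop psychosis. This is my own illustrative arithmetic, not a calculation from Mechelli et al.:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive value via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Mechelli et al.: sensitivity 61%, specificity 65%, 48/182 converters
ppv, npv = predictive_values(0.61, 0.65, 48 / 182)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # PPV ≈ 38%, NPV ≈ 82%
```

In other words, even a "positive" parahippocampal measure would leave the odds of conversion well under one in two, which matters when the proposed intervention is a prophylactic antipsychotic.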

On the basis of these results, would you recommend MRI scans of the left parahippocampal gyrus to refine the cohort given Seroquel to reduce or delay the risk of psychosis? I would say no, especially not if you're Australian. Another paper by many of the same authors reported that of 102 UHR Australians, 28 converted to a psychotic disorder, and these individuals showed volume reductions in frontal [and other] regions, relative to the UHR subgroup who remained healthy (Dazzan et al., 2011). Left parahippocampal gyrus was nowhere to be found. In fact, none of the 3 ROIs from Mechelli et al. were selected as ROIs by Dazzan et al. Furthermore, neither of these papers cited the other, despite the fact that they shared 8 authors in common.

In my view, the results thus far seem disappointing to those looking for the neuroanatomical correlates of psychiatric disorders. What does this mean for future structural MRI studies searching for changes that will predict the onset of psychosis?


Dazzan P, Soulsby B, Mechelli A, Wood SJ, Velakoulis D, Phillips LJ, Yung AR, Chitnis X, Lin A, Murray RM, McGorry PD, McGuire PK, Pantelis C. (2011). Volumetric Abnormalities Predating the Onset of Schizophrenia and Affective Psychoses: An MRI Study in Subjects at Ultrahigh Risk of Psychosis. Schizophr Bull. Apr 25. [Epub ahead of print]

McGorry PD, Purcell R, Goldstone S, Amminger GP. (2011). Age of onset and timing of treatment for mental and substance use disorders: implications for preventive intervention strategies and models of care. Curr Opin Psychiatry 24:301-6.

... Read more »

Mechelli, A., Riecher-Rossler, A., Meisenzahl, E., Tognin, S., Wood, S., Borgwardt, S., Koutsouleris, N., Yung, A., Stone, J., Phillips, L.... (2011) Neuroanatomical Abnormalities That Predate the Onset of Psychosis: A Multicenter Study. Archives of General Psychiatry, 68(5), 489-495. DOI: 10.1001/archgenpsychiatry.2011.42  

  • August 5, 2011
  • 06:51 PM

A New Sexual Femunculus?

by The Neurocritic in The Neurocritic

Figure 3A (adapted from Komisaruk et al., 2011). Group-based composite view of the clitoral, vaginal, and cervical activation sites, all in the medial paracentral lobule, but regionally differentiated. We interpret this as due to the differential sensory innervation of these genital structures, i.e., clitoris: pudendal nerve, vagina: pelvic nerve,1 and cervix: hypogastric and vagus nerves.

"Femunculus" is a neologism for "female homunculus." The neuroanatomical definition of homunculus is a "distorted" representation of the sensorimotor body map (and its respective parts) overlaid upon primary somatosensory and primary motor cortices. The figure below illustrates the sensory homunculus, where each body part is placed onto the region of cortex that represents it, and the size of the body part is proportional to its cortical representation (and sensitivity). It's rare to see the genitals represented at all. And if they are present, they are inevitably male genitals.

Homunculus image from Reinhard Blutner. See the G-Rated [i.e., genital-less] flash explanation of homunculus.

A New Clitoral Homunculus?

To remedy this puritanical and androcentric situation, Swiss scientists at University Hospital in Zurich conducted a highly stimulating study in 15 healthy women to map the somatosensory representation of the clitoris (Michels et al., 2009). Michels and colleagues began by reviewing the work of Wilder Penfield et al.:

During the last 70 years the description of the sensory homunculus has been virtually a standard reference for various somatotopical studies (Penfield and Boldrey 1937; PDF). This map consists of a detailed description of the functional cortical representation of different body parts obtained via electrical stimulation during open brain surgery. In their findings they relied on reported sensations of different body parts after electrical stimulation of the cortex. Assessment of the exact location was generally difficult and sometimes led to conflicting results.
The genital region was especially hard to assess due to difficulties with sense of shame.

In contrast to electrical stimulation of the brain, modern mapping studies have used sensory stimulation to map the penis with fMRI (e.g., Kell et al., 2005). But as of 2009, there were no comparable fMRI studies of female genitalia. So how is such a study conducted, methodologically speaking? Electrical stimulation of the dorsal clitoral nerve was compared to electrical stimulation of the hallux (big toe). It was all very clinical, no sexual arousal involved. Here's the experimental protocol (Michels et al., 2009):

Prior to the imaging session, two self-attaching surface disc electrodes (1 × 1 cm) were placed bilaterally next to the clitoris of the subjects so that we were able to stimulate the fibers of the dorsal clitoral nerve. Before the start of the experiment, electrical test stimulation was performed to ensure that subjects could feel the stimulation directly at the clitoris. In addition, the strength of electrical stimulation was adjusted to a subject-specific level, i.e. that stimulation was neither felt [as] painful nor elicited – in case of clitoris stimulation – any sexual arousal. Functional imaging was performed in a block design with alternating rest and stimulation conditions, starting with a rest condition. ... In addition to the clitoris stimulation, we performed in eight of the recorded subjects a second experimental session, in which we applied electrical stimulation of the right hallux using the same type of electrodes, stimulation and scan paradigm.

Their neuroimaging results revealed no evidence of clitoral representation on the medial wall (i.e., the paracentral lobule, as shown above in Komisaruk et al.'s Figure 3A and the male homunculus). Instead, electrical stimulation produced significant activations predominantly in bilateral prefrontal areas and the precentral, parietal and postcentral gyri, including S1 and S2. Click here to see Fig.
3 of Michels et al., 2009.

However, that experiment involved electrical stimulation of the dorsal clitoral nerve, which was not sexually arousing. What if the stimulation occurred in a more naturalistic fashion?

Even newer clitoral, vaginal, cervical and nipple homunculi?

Now, Komisaruk et al. (2011) have expanded the somatosensory map of female sexual organs by having the participants engage in self-stimulation of the clitoris, vagina, cervix, and nipple while lying in a 3T scanner. For comparison, the investigators stimulated the thumb and the big toe. Should we be concerned about differential movement artifact (of the head, hand, arm, pelvis) in these varied stimulation conditions? We'll leave that question aside for the moment and examine the experimental protocol, which consisted of 30 sec of rest and 30 sec of the various stimulation modalities, each followed by 30 sec rest (in 5 min blocks):

...Control trials consisted of an experimenter rhythmically tapping a participant's thumb or toe in separate trials to establish reference points on the sensory cortex. Experimental mapping trials consisted of participants self-stimulating, by hand or personal device, using “comfortable” intensity, the clitoris, anterior wall of the vagina, the cervix, or the nipple, in separate, randomized-sequence trials. Clitoral self-stimulation was applied using rhythmical tapping with the right hand. Vaginal self-stimulation (of the anterior wall) was applied using the participant's own stimulator (typically a 15 mm-diameter S-shaped acrylic rounded-top cylinder). Cervical self-stimulation was applied using a similar-diameter, glass or acrylic straight rounded-tip cylinder brought to the study by each participant. Nipple self-stimulation was applied using the right hand to tap the left nipple rhythmically...

What's a "comfortable" intensity, and how are we sure the hand and the di... Read more »

  • August 2, 2011
  • 05:00 AM

The Man Who Mistook a Harmonica for a Cash Register

by The Neurocritic in The Neurocritic

One of the most famous books written by Oliver Sacks, popular author and beloved behavioral neurologist, is The Man Who Mistook His Wife for a Hat. One of the chapters describes the case of a patient with visual agnosia, or the inability to recognize objects. Below is a conversation between Sacks and Dr. P, the patient with visual agnosia.

I showed him the cover [of a National Geographic Magazine], an unbroken expanse of Sahara dunes.

'What do you see here?' I asked.

'I see a river,' he said. 'And a little guest-house with its terrace on the water. People are dining out on the terrace. I see coloured parasols here and there.' He was looking, if it was 'looking', right off the cover into mid-air and confabulating nonexistent features, as if the absence of features in the actual picture had driven him to imagine the river and the terrace and the colored parasols.

I must have looked aghast, but he seemed to think he had done rather well. There was a hint of a smile on his face. He also appeared to have decided that the examination was over and started to look around for his hat. He reached out his hand and took hold of his wife's head, tried to lift it off, to put it on. He had apparently mistaken his wife for a hat! His wife looked as if she was used to such things.

Visual agnosia is caused by an acquired brain injury to high-level object processing areas in lateral occipital and ventral temporal cortices. Primary and secondary visual regions are spared, meaning that basic visual responses are not compromised. Language and naming are intact, as is the ability to identify objects through other modalities (e.g., auditory, tactile).

A case study published in Neuron (Konen et al., 2011) describes a patient similar to Dr. P. Patient SM is a right-handed, 36-year-old male who sustained a closed head injury in an automobile accident at the age of 18. He recovered after the accident but was left with visual agnosia and prosopagnosia, an impairment in recognizing faces.
The damaged area of his brain was fairly circumscribed1 and smaller in size than in many other patients with visual agnosia:

The lesion was situated within LOC, anterior to hV4 and dorsolateral to VO1/2, and was confined to a circumscribed region in the posterior part of the lateral fusiform gyrus in the RH [right hemisphere]. Typically, this region responds more to intact objects than scrambled objects and damage to this circumscribed area is likely the principle etiology of SM's object agnosia.

Figure 4 (modified from Konen et al., 2011). Lesion Site of SM in Anatomical Space. (C) Axial view of the lesion site marked in green. The slices were cut along the temporal poles for enlarged representation of occipitotemporal cortex.

In addition, detailed topographic mapping of visual cortex was conducted using fMRI in SM and controls. Responses in early cortical areas (prior to the lesioned fusiform gyrus in the feedforward processing stream) were intact in SM.

Figure 1 (Konen et al., 2011). Topographically Organized Areas and Lesion Site in SM (A) and Control Subject C1 (B). Flattened surface reconstructions of early and ventral visual cortex. The color code indicates the phase of the fMRI response and region of visual field to which underlying neurons responded best. Retinotopic mapping revealed regular patterns of phase reversals in both hemispheres of SM that were similar to healthy subjects such as C1. SM's lesion is shown in black, located anterior to hV4 and dorsolateral to VO1/2. LH = left hemisphere; RH = right hemisphere.

Conversely, the hemodynamic response to object presentation was reduced in the area surrounding the lesion, as expected. But the most remarkable and surprising aspect of the study is that reductions in object-related responses were also observed in the corresponding region of SM's intact left hemisphere.
How might this be explained?

...while the RH lesion might be primary, this lesion has remote and widespread consequences, with functional inhibition of homologous regions in the structurally intact hemisphere. Such a pattern raises the question whether the observed brain-behavior correspondence serves as the neural underpinning of the impairment or whether reconceptualizing SM's agnosia in terms of disruption to an interconnected more distributed neural system might be a better characterization of SM's pattern and of agnosia more generally.

The authors discuss their findings in the video below, where Marlene Behrmann mentions that SM mistook a picture of a harmonica for a cash register.

Video Abstract (mp4)

Highlights
► Unilateral lesion of lateral fusiform gyrus in right hemisphere causes object agnosia
► Agnosic patient exhibits normal retinotopy and visual responsivity in visual cortex
► Object-responsive and object-selective responses are reduced in both hemispheres
► Cortical plasticity evident with reorganization of intermediate and higher-order areas

Footnote

1 The Methods section notes additional damage in the corpus callosum and left basal ganglia.

Reference

Konen, C., Behrmann, M., Nishimura, M., & Kastner, S. (2011). The Functional Neuroanatomy of Obj... Read more »

  • July 23, 2011
  • 07:37 PM

Neuro Bliss and Neuro Codeine

by The Neurocritic in The Neurocritic

Lindsay Lohan drinking Neuro Bliss.

NEUROBRANDS®, LLC is a company that markets a series of colorful and attractively designed "nutritional drinks", known as Neuro® Drinks.

Neuro Gasm Is Part Of The New Neuro Culture

For a company that has great product placement (with many celebrity endorsements), carefully crafted packaging, and regularly issued press releases, they sure are modest about their marketing efforts:

"Neuro Drinks® offer consumers an alternative to products that perpetuate our self-medicating caffeine-dependent society. Designed to sustain and enhance your active lifestyle with natural ingredients, each beverage is packed with essential vitamins, minerals, amino acids and botanicals at dosages backed by scientific research. Just real results — no marketing hype."

I recently purchased NeuroBliss® from a local store. As with other Neuro products, it's difficult to tell from the packaging what sort of flavor one should expect. From the white milky color it looks like it might be coconut, but smelling the brew yields a citrus-like odor (from citric acid). The taste is vaguely like grapefruit, or rather like grapefruit-flavored fizzy codeine.

The NeuroBliss® bottle claims there are no artificial colors or flavors, but I'm not sure which flavor is actually natural (other than chamomile and the generically listed "natural flavors"). There are a lot of vitamins along with chemical stabilizers and preservatives (gum acacia, ester gum, sodium benzoate, potassium sorbate), plus the unproven active ingredients that purportedly make you blissful.

Nutritional information for Neuro Bliss.

These unproven active ingredients include:

"L-Theanine, an amino acid found in green tea which has been clinically proven to help reduce stress, works by altering brain waves, shifting them from the beta spectrum to the alpha spectrum — where a person is focused and alert, but calm."

In contrast to this claim, a study by Gomez-Ramirez et al.
(2009) found that a 250-mg dose of L-theanine significantly reduced background alpha power during a demanding attentional cueing task. There were no alterations in the cue-related, anticipatory changes in alpha activity. In other words, this compound may be considered activating but not calming. L-theanine is an analog to glutamate, an abundant excitatory neurotransmitter that crosses the blood-brain barrier.Testimonial from consumer Sandra Kiume: "It did make me more alert and aware of the foul taste of the beverage."ReferenceGomez-Ramirez, M., Kelly, S., Montesi, J., & Foxe, J. (2008). The Effects of l-theanine on Alpha-Band Oscillatory Brain Activity During a Visuo-Spatial Attention Task. Brain Topography, 22 (1), 44-51 DOI: 10.1007/s10548-008-0068-zOrigin Of The Term Fight Or Flight With Neuro Bliss-with Marina Orlova!

... Read more »

  • July 17, 2011
  • 11:27 AM

The Google Stroop Effect?

by The Neurocritic in The Neurocritic

The Google logo.

Notice the logo is multi-colored (as pointed out by Neurobonkers). Seeing "Google" printed in a solid color (or in any other font, for that matter) would likely result in a Stroop effect, or a slower response time in identifying the color of the font, relative to that of a neutral word.

Is Google making us stupid?

That question, and its original exposition in The Atlantic, have been furthering the career of Nicholas G. Carr. His subsequent book, The Shallows: What the Internet Is Doing to Our Brains, expanded upon his broader thesis that the internet is damaging our cognitive capacity and the way we think. Numerous writers, both pro and con, have debated whether the internet and social networking sites (and computers in general) are harmful, so I won't belabor that point here. Instead, I'll cover a new article in Science that purportedly found Google Effects on Memory (Sparrow et al., 2011).

Cognitive Consequences of Having Information at Our Fingertips

In their paper, Sparrow et al. (2011) conducted four experiments to determine whether the ability to access previously learned information reduces the effort put into remembering and retrieving that information. Specifically, the authors view the internet as a form of transactive memory, a means to offload some of the daily cognitive burden from our brains to an external source. Or, as succinctly expressed in Ars Technica, why bother to remember when you can just use Google?

This is nothing new, nor is it something dependent on the internet. In 1985, Wegner et al. (PDF) examined the way that married couples can divide the labor of remembering facts (Bohannon, 2011):

For example, a husband might rely on his wife to remember significant dates, while she relies on him to remember the names of distant friends and family—and this frees both from duplicating the memories in their own brains.
Sparrow wondered if the Internet is filling this role for everyone, representing an enormous collective act of transactive memory.

Another example is illustrated by the phenomenon of the open-book test. If students know they can use their textbooks to answer questions on an exam, they may put less effort into rote memorization of facts, and may instead learn the organization of each chapter, familiarizing themselves with where particular facts are located within the text. That is indeed what was demonstrated in Experiments 3 and 4, but in terms of accessing the information online or from a computer's hard drive.

The Google Stroop Effect

Experiment 1 asked whether the participants were primed to access computer-related words when faced with difficult trivia questions, relative to when they answered easy trivia questions (examples below).

Appendix A: Easy Questions
1. Are dinosaurs extinct?
2. Was Moby Dick written by Herman Melville?
3. Is the formula for water H2O?
4. Is a stop sign red in color?
5. Are there 24 hours in a day?
. . .
16. Does a triangle have 3 sides?

Appendix B: Hard Questions
1. Does Denmark contain more square miles than Costa Rica?
2. Did Benjamin Franklin give piano lessons?
3. Does an Italian deck of cards contain jacks?
4. Did Alfred Hitchcock eat meat?
5. Are more babies conceived in February than in any other month?
. . .
16. Is a quince a fruit?

The authors assessed automatic priming of internet- and computer-related words using a modified version of the ever-popular Stroop test. Name the font color of these words, but don't read the words themselves:

RED BLUE GREEN

Now do the same for this set of words:

RED BLUE GREEN

Bet you were faster for the first set. That's because reading is a much more automatic process than naming the ink color in which the words are printed. This conflict between response options produces interference and slows reaction times (RTs) in the task.

The modified Stroop task used by Sparrow et al.
relied on attentional salience rather than response conflict. Instead of color words, the participants viewed words related to computers and search engines, or words not related to these things:

This color naming contained 8 target words related to computers and search engines (e.g., Google, Yahoo, screen, browser, modem, keys, internet, computer), and 16 unrelated words (e.g., Target, Nike, Coca Cola, Yoplait, table, telephone, book, hammer, nails, chair, piano, pencil, paper, eraser, laser, television).

First off, you'll note that there are twice as many control words as there are computer words.1 More importantly, you'll also notice that the unrelated words included prominent brand names (some of which are strongly associated with a particular color) and a grab bag of nouns from different semantic categories (furniture, tools, writing implements, musical instruments, etc.). The Google logo is multi-colored (as we've said before), and the current Yahoo logo is purple (it used to be red).

Hmm. So already we're looking at quite a confound. Nonetheless, the authors expected a larger Stroop effect for the search engines for different reasons:

In this case, we expect participants to have computer terms in mind, because they desire access to the information which would allow them to answer difficult questions. Participants are presented with words in either blue or red, and were asked to press a key corresponding with the correct color. At the same time, they were to hold a 6 digit number in memory, creating cognitive load.

Why? Why oh why did the authors want to create a cognitive load during the Stroop? This turns the whole study into a dual-task experiment, requiring the participants to multi-task: a key press for red or blue (which requires retrieval of ... Read more »
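For readers unfamiliar with how this kind of Stroop interference is quantified, here is a minimal sketch of the basic analysis: compare mean color-naming RTs for the computer/search-engine words against the unrelated words, and check the difference against chance with a permutation test. All reaction times below are invented for illustration; they are not data from Sparrow et al.

```python
import random
import statistics

# Hypothetical color-naming reaction times (ms), invented for illustration.
# 8 computer/search-engine words vs. 16 unrelated words, matching the
# unbalanced counts noted above.
rt_computer_words = [712, 698, 745, 730, 721, 708, 739, 715]
rt_unrelated_words = [664, 652, 671, 659, 668, 655, 662, 670,
                      649, 673, 660, 666, 658, 661, 672, 654]

def mean_diff(a, b):
    return statistics.mean(a) - statistics.mean(b)

# Positive values = slower color naming for computer words (interference).
observed = mean_diff(rt_computer_words, rt_unrelated_words)

# Permutation test: shuffle the condition labels many times and count how
# often a difference at least as large arises by chance.
random.seed(0)
pooled = rt_computer_words + rt_unrelated_words
n = len(rt_computer_words)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    if mean_diff(pooled[:n], pooled[n:]) >= observed:
        count += 1

print(f"Stroop interference: {observed:.1f} ms (permutation p ~ {count / n_perm:.4f})")
```

Note that this sketch sidesteps the dual-task problem entirely: with a concurrent digit load, slower RTs could reflect memory rehearsal rather than word salience, which is exactly the interpretive trouble raised above.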

  • July 11, 2011
  • 06:23 PM

Underwear Models and Low Libido

by The Neurocritic in The Neurocritic

Erotic or not? (from Hot Chicks with Douchebags)

Hypoactive Sexual Desire Disorder (HSDD) is a controversial diagnosis given to women who have a low (or nonexistent) libido and are distressed about it. The International Definitions Committee (a panel of 13 experts in female sexual dysfunction) from the 2nd International Consultation on Sexual Medicine in Paris defined HSDD, which has also been called Women's Sexual Interest/Desire Disorder (Basson et al., 2004), in the following fashion:

There are absent or diminished feelings of sexual interest or desire, absent sexual thoughts or fantasies and a lack of responsive desire. Motivations (here defined as reasons/incentives) for attempting to have sexual arousal are scarce or absent. The lack of interest is considered to be beyond the normative lessening with life cycle and relationship duration.

Dr. Petra Boynton has written extensively about the problematic aspects of the HSDD diagnosis and the screening tools used to assess it, as well as the medicalization of sexuality for pharmaceutical marketing purposes.

Today, however, we'll examine a recent neuroimaging study that compared a group of heterosexual women diagnosed with HSDD to a group of non-HSDD control women (Bianchi-Demicheli et al., 2011). The authors set out to determine whether there were differences in brain activity while the two groups viewed erotic male photos, relative to when they viewed non-erotic photos. The experimental stimuli were all pictures of male underwear models that were not pornographic (i.e., not Anthony Weiner shots), as shown below in Figure 1. The models were rated as erotic or non-erotic by two of the experimenters (one heterosexual male, one heterosexual female)!!

Figure 1 (Bianchi-Demicheli et al., 2011). Experimental paradigm. Procedure: each trial consisted of the following sequence: a 500 ms fixation cross was followed by a 1,500 ms target stimulus (here an exemplar of the erotic condition is presented).
A random 1,500–4,000 ms inter-stimulus interval separated each target presentation. Participants performed a one-back task requiring the detection of occasional immediate repetitions of the same picture.

After the scanning session, participants rated each picture from 1 to 10. Scores for the non-HSDD group were 6.57 ± 1.59 (mean ± SD) for erotic and 4.45 ± 1.43 for non-erotic photos. For the HSDD group, the scores were 5.24 for erotic and 4.08 for non-erotic. Note that the standard deviations were not given for the HSDD group, nor was an analysis performed to determine whether the erotic/non-erotic difference was statistically significant. What we do know is that the non-HSDD participants reliably distinguished the two classes of subjectively rated stimuli (p=.001), and that erotic photos were rated more highly by the non-HSDD group than by the HSDD group (p=.03).

OK, so now we know that heterosexual women without HSDD rated the "erotic" male underwear models as more erotic than did the women with HSDD, but is this very surprising? And what can the brain imaging results say about low libido in HSDD beyond behavioral ratings and symptom reports? Since we already know that the women with hypoactive sexual desire aren't very thrilled by the guy in Figure 1, one would expect differences in neural activity between this group and the controls while viewing these pictures. And the differences are displayed in the figure below.

Figure 2 (Bianchi-Demicheli et al., 2011). Surface rendering of NHSDD (green) and HSDD (red) group average brain activations for the Erotic stimuli > Non-Erotic stimuli contrast. BOLD responses are shown on lateral views of the flattened PALS left and right hemispheres of the human brain (P < 0.01 uncorrected). Overlap of activation for the two groups appears as yellow.

The brain regions in the controls that showed greater activation for erotic vs.
non-erotic pictures included high-level visual processing areas such as the fusiform and middle temporal gyri (Brodmann areas 37 and 19), and the extrastriate body area (EBA) in the lateral occipitotemporal cortex. Also showing greater activation for the sexier models were entorhinal and perirhinal regions in the medial temporal lobe (important for memory), the superior parietal lobule, the inferior frontal gyrus, and the mid-cingulate cortex. For the HSDD group, the erotic vs. non-erotic contrast revealed greater activation in some of the same regions: fusiform gyrus, superior parietal lobule, inferior frontal gyrus, and medial occipital gyrus.

The comparisons above were significant at either p < 0.05 with a family-wise error correction for multiple comparisons, or at p < 0.001 uncorrected. Once we get to the key findings, the group differences for the erotic vs. non-erotic contrast, the significance level dropped to p < 0.01 uncorrected. The brain regions that met this lesser standard for the NHSDD > HSDD comparison were the intraparietal sulcus, the dorsal anterior cingulate gyrus, and the entorhinal/perirhinal region. How do the authors interpret these results?

We therefore interpret the activations in the anterior cingulate gyrus and ento/perirhinal region as reflecting a greater recruitment of motivational and associative multimodal memory processes for emotional events, respectively, presumably because of a more attentive processing of erotic stimuli in healthy participants.

. . .

Similarly, the present involvement of BA 7 [superior parietal lobule] in NHSDD participants suggests a greater recruitment of attentional and appraisal processes elicited by erotic stimuli in this group.

Conversely, areas that showed greater activation in HSDD than in NHSDD were the inferior parietal lobule, the medial occipital gyrus, and the inferior frontal gyrus.
This distinct pattern of neural changes in HSDD participants might potentially reflect different subjective interpretations (e.g., different scenario) during the processing of stimuli. Indeed,... Read more »
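To make concrete the kind of analysis the post says is missing for the HSDD ratings, here is a minimal sketch of a two-sample (Welch) t-test computed from summary statistics alone, using the reported non-HSDD means and SDs. The group size n = 13 is an assumption for illustration (the excerpt does not report it), and a paired test would actually be more appropriate since the same women rated both photo sets.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic and approximate degrees of freedom
    (Welch-Satterthwaite) from group means, SDs, and sizes."""
    se1, se2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(se1 + se2)
    df = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# Reported non-HSDD ratings: erotic 6.57 +/- 1.59, non-erotic 4.45 +/- 1.43.
# n = 13 per group is an assumed value, used only to show the computation;
# the HSDD group can't be tested this way at all, since its SDs were omitted.
t, df = welch_t(6.57, 1.59, 13, 4.45, 1.43, 13)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The point of the sketch is the complaint itself: without the HSDD standard deviations, even this rudimentary test cannot be run on the patient group's erotic vs. non-erotic difference.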

Bianchi-Demicheli, F., Cojan, Y., Waber, L., Recordon, N., Vuilleumier, P., & Ortigue, S. (2011) Neural Bases of Hypoactive Sexual Desire Disorder in Women: An Event-Related fMRI Study. The Journal of Sexual Medicine. DOI: 10.1111/j.1743-6109.2011.02376.x  

  • June 28, 2011
  • 06:16 PM

JAMA on 60s Psychedelic Drug Culture

by The Neurocritic in The Neurocritic

An amusing semi-anthropological study was published in JAMA by Ludwig and Levine in 1965. It was based on extensive interviews with 27 "postnarcotic drug addict inpatients" who were treated at a hospital in Lexington, Kentucky. The specific drugs of interest included peyote (from the peyotl cactus plant), mescaline, LSD, and psilocybin. The current availability of each drug, most popular methods of intake, slang terms, psychoactive properties, and subcultural norms were discussed. Hallucinogens were sometimes combined with narcotics, barbiturates, amphetamines, or marijuana, depending on the specific demographic group. Basically, there were the junkies, the potheads, and the psychonauts:

There appear to be three main patterns of hallucinogenic drug use. First, there are the people who are primarily and preferentially narcotic drug addicts who have used the hallucinogenic agents on one or several occasions mainly for "kicks" or "curiosity." They seldom seek these drugs and tend to use them infrequently, as for example when these agents come their way through a friend or at a party. Rarely do they take the hallucinogenic agent alone but tend to take it after a "fix" with heroin, hydromorphone hydrochloride, morphine, or some other narcotic drug to which they are addicted at the time.

The next group sounds like your everyday 1960s hippie stereotype:

Second, there are the group of people, aptly described by one of the informants as the "professional potheads," who have had extensive experience with various drugs. The most commonly used drug by this group of people is marijuana (hence the name "potheads"), but amphetamines and barbiturates are also popular. Many have had some experience with the narcotic drugs, but on the whole they tend to avoid the opiates. "Creative" and "arty" people, such as struggling actors, musicians, artists, writers, as well as the Greenwich Village type of "beatnik," tend to fall in this category.
The "frustrated," "curious," "free thinkers," "nonconformists," and "young rebels," who are seeking a temporary escape also comprise this class of hallucinogenic users, according to our informants. Although the "professional potheads" enjoy the euphoric effects produced by smoking marijuana, they also tend to relish and seek out the feelings of greater insight, inspiration, and sensory stimulation and distortions which the hallucinogens may produce. They are in constant search of agents to rouse them from their apathy, to make life more meaningful, to overcome social inhibitions, and to facilitate meaningful conversations and interpersonal relationships.

Especially enjoyable was the description of the drug parties frequented by these types:

Hallucinogenic agents are used by these people mainly on weekends (often "four-day weekends") or on special occasions, such as parties. It is rare for users to take drugs alone. They are mainly taken with friends or at intimate gatherings of people. The parties are of all varieties. Frequently, little conversation takes place while people are under the influence of these drugs, but they claim to experience a greater closeness and rapport with the other members of the group. One patient described having attended "basket weaving" and "lampshade making" parties where all members, under the influence of these drugs, squatted on the floor and silently attended to their tasks. At another type of party, overt sexual activities were carried out. Folk singing was also common. To quote another patient, "Mostly the people sit around trying to dig each other . . .
everybody is sitting around and waiting, like on New Year's Eve, for something to happen."

Finally, there were a small number of hard-core exclusive users of hallucinogens in search of an expanded consciousness, whether religious, spiritual, or cosmic:

Third, there are a small number of people who take the hallucinogenic agents repeatedly over a sustained period of time to the exclusion of all other drugs. The frequency of drug use during these periods of time is variable. One patient took peyote four times a day over a two-year period, while another patient took it two out of every three days over a three-month period. One patient took mescaline every day for two separate 15-day periods, while another took mescaline every two to three days over a six-month period...

Generally, these patients seemed different from those in the second group, who primarily smoke marijuana. They did not take these drugs in a group for social purposes but used them mainly as a means of attaining some personal, esoteric goal. One patient talked of having achieved an increased sensitivity to nature and a greater insight into himself after prolonged peyote usage. While living by himself on Big Sur in California, he claimed to have achieved a "Christ-like state of mind" and a greater feeling of altruism. Another patient stated that as he kept taking mescaline, he was able to control his experience and attained a state of mind in which "every little thing is projected large," where he was able to see the negative and positive aspects of everything, and where "everything is real." A third patient, of Mexican extraction, kept taking peyote to "find God."

In the last several months, there has been a spate of articles on the return of psychedelics for psychotherapeutic purposes.
Maia Szalavitz has covered some of the most recent developments: 'Magic Mushrooms' Can Improve Psychological Health Long Term and A Mystery Partly Solved: How the 'Club Drug' Ketamine Lifts Depression So Quickly.

Last year, a Neurocritic post (Ketamine for Depression: Yay or Neigh?) was part of a Nature Blog Focus on hallucinogenic drugs in medicine and mental health, inspired by the Nature Reviews Neuroscience paper, The neurobiology of psychedelic drugs: implications for the treatment of mood disorders, by Franz Vollenweider & Michael Kometer. For more information on this Blog Focus, see the Table of Contents.

The secret history of psychedelic psychiatry was discussed over at Neurophilosophy. Neuroskeptic covered ... Read more »

Ludwig, A.M., & Levine, J. (1965). Patterns of Hallucinogenic Drug Abuse. JAMA: The Journal of the American Medical Association, 92-96. PMID: 14233246  

  • June 19, 2011
  • 04:08 AM

Could Anthony Weiner Ace the Stroop Test?

by The Neurocritic in The Neurocritic

Former U.S. Representative Anthony Weiner served New York's 9th congressional district for 12 years until his online sexual indiscretions forced him to resign on June 16, 2011. We've all been overexposed [so to speak] to the "Weinergate" scandal, so no need to recount all the lurid details. Boxer briefly, he sent lewd photos of himself to young and under-aged girls following him on Twitter. This occurred despite the fact that Huma Abedin, Deputy Chief of Staff for Hillary Rodham Clinton and his wife of 2 years, was newly pregnant. Why would a high-profile politician engage in such outrageous behavior?

What a silly question! Because he could! Because of his political power and a giant ego that needed massaging from pretty girls less than half his age. And because he thought he could get away with it. Ask Bill, Arnold, John, and Eliot. For more on this phenomenon, I recommend On the biology of sexting, a monograph at the blog Neurological Correlates.

Like many other public male figures who have fallen from grace due to their sexual activities, Weiner claimed to have checked into a treatment center to seek professional help for his ill-defined problems:

"Congressman Weiner departed this morning to seek professional treatment to focus on becoming a better husband and healthier person," Weiner's spokeswoman, Risa Heller, tells Us Weekly in a statement. "In light of that, he will request a short leave of absence from the House of Representatives so that he can get evaluated and map out a course of treatment to make himself well."

This was, of course, before his resignation. The New York Times went on to state:

Ms. Heller would not identify the facility or the precise kind of counseling Mr. Weiner, who has admitted having explicit communications with six women he met online, would receive...

. . .

Ms. Pelosi had hoped that the congressman would reach the decision on his own to go. In addition to her concerns about the political distraction Mr. Weiner had become, Ms.
Pelosi concluded that his behavior required medical intervention. "When you are this self-destructive, there is obviously something deeper going on with you," said a Pelosi adviser who spoke on condition of anonymity for fear of being seen as betraying her confidence.

This brings us to the issue of "sexual addiction", or compulsive sexuality, or hypersexuality. Establishing an agreed-upon definition and proper diagnostic criteria for this condition is a minefield (compare Kafka 2010a, 2010b and Levine, 2010). For the present blog post, I will present the view of Reid et al. (2011) from their paper on A Surprising Finding Related to Executive Control in a Patient Sample of Hypersexual Men:

The proposed diagnostic criteria for the DSM-V characterize hypersexual disorder (HD) as a repetitive and intense preoccupation with sexual fantasies, urges, and behaviors, leading to adverse consequences and clinically significant distress or impairment in social, occupational, or other important areas of functioning. One defining feature of this proposed disorder includes multiple unsuccessful attempts to control or diminish the amount of time the individual engages in sexual fantasies, urges, and behavior in response to dysphoric mood states or stressful life events. Despite a constellation of studies investigating characteristics of HD (usually defined in the literature as sexual addiction, sexual compulsivity, or hypersexual behavior), little is known about the neuropsychological correlates of this phenomenon, including possible associations with executive functioning.

Executive functions are a series of high-level cognitive processes that allow for the flexible control of thought and adaptive behavior. They include processes such as planning, decision making, multitasking, task switching, and impulse control. One might expect that executive functions (or at least some of them) would be impaired in those who show problematic hypersexual behavior.
For example, although Weiner may be witty and reasonably intelligent, his apparent narcissism, poor impulse control, and terrible decision-making abilities in the sexting realm proved to be his downfall.

Anthony Weiner's comedy routine at the Congressional Correspondents' Dinner, March 30, 2011.

Highlights:
• Ambitions to run for mayor of NY
• Weiner jokes
• Praises his lovely wife - "opposites attract"
• Outlines his use of social media, including Twitter
• Follow me @RepWeiner! (18,000 followers at that time, now over 83,000)
• Named to Time's 140 Best Twitter Feeds

To jump to the conclusion, the study of Reid et al. (2011) was surprising because the executive function scores of 30 men diagnosed with HD were the same as those of a group of 30 male volunteers without HD. All participants were administered a series of standardized neuropsychological tests that included the Stroop Color-Word Interference Test (shown above in Weiner's thought bubble), the Wisconsin Card Sorting Test (WCST), the Trail Making Test, and the Verbal Fluency Test. All of these tasks involve planning or overcoming automatic responses.

In the Stroop task, the participant is instructed to say the font color and ignore the word. It's much more automatic to read the word than to say the font color, so people are slower to respond when the two dimensions are in conflict:

BLUE PURPLE RED GREEN

Trail Making version B is an attention-switching task where the participant connects the dots on the sheet below by alternating between letters and numbers: 1-A-2-B-3-C, etc.

Before we examine the authors' interpretation of this interesting null effect, let's take a closer look at some of the defining characteristics of the HD grou... Read more »

  • June 13, 2011
  • 12:53 AM

Akiskal and the Bipolar Spectrum

by The Neurocritic in The Neurocritic

In the last post, we learned that the Editor-in-Chief of the Journal of Affective Disorders has published 165 papers in the journal, 155 of these since becoming editor in 1996. Excluding commentaries and editorials, that makes for a grand total of 142 articles thus far during his tenure as editor.

The two major themes of Dr. Hagop Akiskal's papers are (1) the bipolar spectrum, and (2) temperament as the basis of mood, behavior and personality (e.g., Lara et al., 2006). Clearly, I cannot begin to summarize the content of these papers, but I will give some background material on the bipolar spectrum and "soft" bipolar (Akiskal & Pinto, 1999 - not published in JAD).

Bipolar disorder, one of the most serious mental illnesses, is marked by periodic bouts of depression and mania (Bipolar I) or by depression and hypomania (Bipolar II). Given that depression often presents as the initial polarity, bipolar is frequently misdiagnosed as major depressive disorder (MDD), with disastrous consequences.1 The rigid categories of DSM-IV, however, may not capture everyone who displays clinically significant symptoms of bipolar disorder. Ghaemi et al. (2002) have noted that:

...limitations of the DSM-IV nosology may impede the diagnosis of BD, because the DSM-IV has rather broad criteria for MDD and narrow criteria for BD.

According to Akiskal and Pinto, the evolving bipolar spectrum (circa 1999) includes:

• BIPOLAR I: FULL-BLOWN MANIA
• BIPOLAR I½: DEPRESSION WITH PROTRACTED HYPOMANIA
• BIPOLAR II: DEPRESSION WITH HYPOMANIA
• BIPOLAR II½: CYCLOTHYMIC DEPRESSIONS [often labeled as borderline personality disorder]
• BIPOLAR III: ANTIDEPRESSANT-ASSOCIATED HYPOMANIA
• BIPOLAR III½: BIPOLARITY MASKED—AND UNMASKED—BY STIMULANT ABUSE
• BIPOLAR IV: HYPERTHYMIC DEPRESSION - "patients with clinical depression that occurs later in life and superimposed on a lifelong hyperthymic temperament."

Each of the diagnostic categories was illustrated by a clinical case report.
Cyclothymia is included in DSM-IV: "A history of hypomanic episodes with periods of depression that do not meet criteria for major depressive episodes. There is a low-grade cycling of mood which appears to the observer as a personality trait, and interferes with functioning." Hyperthymia, however, is not a diagnosis but an affective temperament "characterized by exuberant, upbeat, overenergetic, and overconfident lifelong traits." More specifically (Akiskal & Pinto, 1999):

The attributes of a hyperthymic temperament are not episode-bound and constitute part of the habitual long-term functioning of the individual. Patients are typically men in their 50s whose lifelong drive, ambition, high energy, confidence, and extroverted interpersonal skills helped them to advance in life, to achieve successes in a variety of business domains or political life.

Arnold Schwarzenegger comes to mind [if he had started having depressive episodes several years ago]. In fact, the case study of bipolar IV was presented as a highly successful, 53 year old married lawyer with three other families in different countries.

Do powerful, philandering, middle-aged men who become depressed in their 50s really need their own special diagnosis??

There are critics, of course... In his critique of the spectrum, Paris (2009) called it "bipolar imperialism" and said: "Until further research clarifies the boundaries of bipolarity, we should be conservative about extending its scope." It seems that no one is safe any more. Recurrent depression? Bipolar. Anxious and depressed? Most certainly bipolar.

But the worst frontier of all has to be Bipolar Type VI: Dementia (Ng et al., 2008)! This paper presents "selected" case histories of 10 elderly patients from the California/Mexico border and Brazil. These patients presented with "late-onset mood and related behavioral symptomatology and cognitive decline without past history of clear-cut bipolar disorder."
In other words: dementia (caused by neurodegenerative disease), with classic symptoms such as:

• Having hallucinations, arguments, striking out, and violent behavior
• Having delusions, depression, agitation

Are we surprised that mood stabilizers and atypical antipsychotics were said to be beneficial??

Adapted from Table 1 (Ng et al., 2008). Clinical features and response to treatment in elderly patients with bipolar disorder type VI. [NOTE from The Neurocritic: atypical antipsychotics are in red, mood stabilizers are in blue.]

Cases 1-5 are poor elderly Latino patients attending an adult day treatment center, and cases 6-10 are from private practice in a more affluent area of Brazil. Galantamine, donepezil, and rivastigmine are acetylcholinesterase inhibitors typically used to treat Alzheimer's disease [with limited effectiveness], while memantine blocks NMDA glutamate receptors. So why would the authors claim that the mood and behavioral problems had anything to do with bipolar disorder?

Omitted from Table 1 (for space reasons and ease of presentation) are columns for premorbid temperament (as judged by family members) and family history. The temperaments were mostly cyclothymic or hyperthymic. Family histories included none (n=3), mood & anxiety (n=2), alcohol (n=2), and bipolar disorder (n=3). OK then, only 3 of the 10 patients had a family history of bipolar disorder. Again, what's the rationale for creating the new category of "bipolar type VI"?

We present our perspective as an alternative to the more commonly held clinical–neurological view that agitation, impulsivity and related mood instability in Alzheimer's and other dementia patients merely represents frontal lobe dysfunction (Senanarong et al., 2004). A more sophisticated view in the literature argues that behavioral–cognitive syndrome in Alzheimer's disease is a prodromal stage, whereas in fronto-temporal dementia the behavioral disorder appears when the cognitive deficit ...
Read more »

  • May 27, 2011
  • 02:00 AM

Abusing Chocolate and Bipolar Diagnoses

by The Neurocritic in The Neurocritic

Is chocolate a legal "social drug" of abuse in the same category as nicotine, caffeine, and alcohol? Do you hang out at chocolate cafes with the purpose of becoming high or intoxicated? No? Have you heard of cancer-related deaths due to chocolate, or driving under the influence of chocolate? And really, how much chocolate is considered "chocolate abuse"?1

A new paper by Maremmani et al. (2011) addressed none of these questions, but asked 562 depressed Italian outpatients about their cigarette, coffee, and chocolate consumption. Why? Actually, it's not clear.

Across all ages and cultures, mankind has always used substances in order to induce pleasurable sensations or desirable psychophysiological states. These substances, notably caffeine, tobacco, alcohol and chocolate, given their widely accepted recreational use, can be labeled ‘social drugs’.

This passage appeared as the Background in the Abstract and as the first two sentences of the Introduction's brief literature review. But we have no explanation of why cigarettes, coffee, and chocolate are assumed to be "social drugs". Are there no solitary smokers, drinkers, and eaters? Look at the large number of singletons staring at laptop screens at any Starbucks. However, cafe culture in Italy or France does allow for smoking, espresso sipping, and chocolate croissant nibbling all at the same time. Also, anti-smoking laws in other countries force smokers to congregate outside to smoke, which often turns into a social activity.

But why look at the consumption of "social drugs" in people who are depressed? Aren't these individuals less inclined to be social?
And aren't they likely to show anhedonia (loss of interest or pleasure) according to DSM-IV criteria?

2) markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day (as indicated by either subjective account or observation made by others)

We're reminded that caffeine improves mood and cognitive performance, increases mental energy, and reduces fatigue; that nicotine increases attention and working memory; and that chocolate can also improve mood and even reduce stress. We're also reminded that psychiatric patients use more drugs than those in the general population:

With regard to caffeine, in hospitalized psychiatric patients the prevalence of tolerance and intoxication is significantly higher than in the healthy population. The highest caffeine intake has been found in patients suffering from eating disorders (Ciapparelli et al., 2010) and schizophrenia (Rihs et al., 1996), the lowest in patients with anxiety disorders and major depression (Ciapparelli et al., 2010; Rihs et al., 1996).

It's well known that the prevalence of smoking among individuals with schizophrenia is quite high (70-80%), higher than for those with other mental illnesses (Winterer, 2010). In fact, smoking is often considered a form of self-medication. So are cigarettes a "social drug" for smokers with schizophrenia?

Finally, we have the case of chocolate, where Maremmani et al. (2011) issue a number of curious pronouncements:

Lastly, the consumption of chocolate has shown interesting forms of linkage with psychiatric conditions. The correlation most often studied is that with depression: it has been observed that the craving for the rewards given by chocolate intensifies when depressive mood is induced... More severe depressive symptoms have been associated with higher chocolate consumption (Rose et al., 2010). 
Self-labeled ‘chocolate addicts’ do not generally seem to suffer from eating disorders, but may constitute a population of psychologically vulnerable people with a high predisposition to depression and anxiety disorders (Dallard et al., 2001). More specifically, a craving for chocolate seems to be unusually high not only in cases of depressed mood, but also in conditions of emotional dysregulation, like anxious and irritable states. The capacity to find comfort in eating chocolate seems to be related to the biological mechanisms of emotional instability,2 so that the depression associated with a craving for chocolate turns out to be an efficient discriminator of hysteroid dysphoria3 and DSM-IV atypical depression (Parker and Crawford, 2007; Schuman et al., 1987).

So of the three "social drugs", chocolate seems like the winner among depressives of any sort, especially those with the greatest emotional dysregulation (i.e., those with bipolar disorder). Then the introductory narrative inserts a non sequitur or three on illegal drugs of abuse (heroin and cocaine) and alcohol use in bipolar disorder, and "cyclothymic traits" in bipolar individuals, heroin addicts, and alcoholics. Furthermore...

This reported bipolar connection, in our opinion, is not just valid at a clinical level. We have stressed the possible role of the bipolar spectrum in the pathogenesis of substance use disorders. In particular, our integrated model provides an explanation for why the bipolar spectrum is the psychic substrate for the development of a substance-resorting attitude...

So let's blur the lines between alcoholics and coffee drinkers, schizophrenic smokers and chocolate-craving dysthymics, severe bipolar I disorder and mild cyclothymia, shall we? 
Then what? The 562 depressed Italian outpatients were initially given one of four DSM-IV-TR diagnoses: 192 patients with a Major Depressive Episode, 212 with Major Depression, Recurrent, 119 with Bipolar Depression, and 39 with Depression NOS (“not otherwise specified”). The participants also filled out the Hypomania Checklist (HCL-32), and according to its dichotomous rating procedure, there were 306 non-bipolar and 256 bipolar depressive patients [vs. 119 bipolar depressives according to DSM-IV-TR]. This illustrates the expansionism of the "bipolar spectrum" project, which is a major goal of the senior author.

Then we have the vague quantification of chocolate use:

The social drug habit was recorded in terms of the use of tobacco, coffee and dark chocolate-based food (chocolate bars, hot chocolate, chocolate-containing ice cream, biscuits or cakes). We classified smoking habits by division into 3 ascending ranks: total non-smokers, past regular users and current regular users and considered one cigarette as a “unit”. As to coffee, we distinguished between regular consumption (at least one coffee a day) and sporadic or no use [one cup was considered a unit].

What was considered a chocolate "unit"? The paper doesn't say.

To reiterate: the goals of the study were to prove that the notion of "bipolar" should be expanded, and that those on the "bipolar spectrum" are more likely to abuse substances of any sort. And did the results support these ideas? Well, 44.5% of the DSM-IV-TR bipolar depressives were current smokers, and 43.4% of the HCL-32 bipolar depressives were current smokers. However, the statistics were significant only in the latter case, because the comparison was between two groups rather than four. Even better are the numbers of cigarettes smoked daily: 10.66 for the DSM bipolars [vs. 8.13 for the other groups combined], and 10.45 for the HCL bipolars [vs. 7.51 for the non-bipolars]. 
The stats were nonsignificant in the first case (p=.33) and highly significant in the second case (p=.0003). In contrast, there were no differences in chocolate consumptio... Read more »

Maremmani I, Perugi G, Rovai L, Maremmani AG, Pacini M, Canonico PL, Carbonato P, Mencacci C, Muscettola G, Pani L.... (2011) Are "social drugs" (tobacco, coffee and chocolate) related to the bipolar spectrum?. Journal of affective disorders. PMID: 21605911  

  • May 19, 2011
  • 03:09 AM

Improving the Physical Health of People With Serious Mental Illness

by The Neurocritic in The Neurocritic

Today is the Mental Health Blog Party sponsored by the American Psychological Association as part of Mental Health Month. A widely neglected part of mental health treatment is encouraging and maintaining good physical health. This is extremely difficult when some of the major drugs prescribed for serious mental illnesses (such as schizophrenia and bipolar disorder) produce substantial weight gain. The "second generation" or atypical antipsychotics can cause obesity and hence diabetes, hypertension, cardiovascular problems, high cholesterol, and stroke.

Yesterday the BBC posted this headline:

Mentally ill have reduced life expectancy, study finds
By Dominic Hughes, Health correspondent, BBC News

People suffering from serious mental illnesses like schizophrenia or bipolar disorder can have a life expectancy 10 to 15 years lower than the UK average. Researchers tracked the lives of more than 30,000 patients through the use of electronic medical records. They found that many were dying early from heart attack, stroke and cancer rather than suicide or violence. Mental health groups say vulnerable people need to be offered better care to prevent premature deaths.

. . . "We need to improve the general health of people suffering from mental disorders by making sure they have access to healthcare of the same standard, quality and range as other people, and by developing effective screening programmes."

The BBC article referred to a paper that was published today in PLoS ONE (Chang et al., 2011). The authors reviewed the electronic database of a major mental health care provider (the South London and Maudsley NHS Foundation Trust). The results were alarming (but not new, unfortunately):

A total of 31,719 eligible people, aged 15 years or older, with SMI [serious mental illness] were analyzed. Among them, 1,370 died during 2007–09. 
Compared to national figures, all disorders were associated with substantially lower life expectancy: 8.0 to 14.6 life years lost for men and 9.8 to 17.5 life years lost for women. Highest reductions were found for men with schizophrenia (14.6 years lost) and women with schizoaffective disorders (17.5 years lost).

Figure 1 (Chang et al., 2011). Annual mortality risk (%) by age groups and diagnoses of mental illness, compared to England and Wales population in 2008.

The figure above illustrates the 2008 population of England and Wales in the red bars for five different age groups: 15-29, 30-44, 45-59, 60-74, and 75+. Those with substance use disorders are shown in maroon, schizoaffective disorder in green, bipolar disorder in purple, schizophrenia in aqua, and depression/recurrent depression in light brown. Mortality risk is increased for all psychiatric diagnoses, and is especially evident in the middle three age groups. Life expectancies were estimated using these data, and the results

confirmed substantially shortened life expectancies at birth for all serious mental disorder groups investigated compared to national norms. Largest reductions were found for men with schizophrenia, women with schizoaffective disorders, and both men and women with substance use disorders.

Why might this be? The authors do not speculate beyond stating that the "underlying causes may be multiple." Certainly, one can imagine that medication-induced weight gain [and increased levels of smoking] among the SMI contributes to lowered life expectancies. To counteract these dismal statistics, a regular part of mental health treatment should include programs that promote better physical health. 
These could include smoking cessation programs, nutrition counseling, and structured exercise classes in addition to standard psychiatric care and substance abuse treatment. A six-month intervention pilot study in Maryland enrolled 63 overweight participants at psychiatric rehabilitation day programs (Daumit et al., 2010):

Results: ... In total, 52 (82%) completed the study; others were discharged from psychiatric centers before completion of the study. Average attendance across all weight management sessions was 70% (87% on days participants attended the center) and 59% for physical activity classes (74% on days participants attended the center). From a baseline mean of 210.9 lbs (s.d. 43.9), average weight loss for 52 participants was 4.5 lb (s.d. 12.8) (P<0.014). On average, participants lost 1.9% of body weight. Mean waist circumference change was 3.1 cm (s.d. 5.6). Participants on average increased the distance on the 6-minute walk test by 8%.

Conclusion: This pilot study documents the feasibility and preliminary efficacy of a behavioral weight-loss intervention in adults with serious mental illness who were attendees at psychiatric rehabilitation centers...

Although a 2% loss of body weight may not seem like much, it's better than a 10% weight gain over the same time period. The medical profession is obligated to provide the means for improved physical health in persons with serious mental illnesses. When physical health is potentially compromised by psychiatric treatments such as atypical antipsychotics, action to improve the situation is even more urgent.

References

Chang, C., Hayes, R., Perera, G., Broadbent, M., Fernandes, A., Lee, W., Hotopf, M., & Stewart, R. (2011). Life Expectancy at Birth for People with Serious Mental Illness and Other Major Disorders from a Secondary M... Read more »

  • May 11, 2011
  • 02:59 PM

Revisiting Depression's Cognitive Downside

by The Neurocritic in The Neurocritic

Depression, by h.koppdelaney

Is depression actually good for you?

Experts now believe that mild to moderate depression may be good for us – and even help us live longer. Rebecca Hardy explains how to reap the benefits

We constantly hear how depression is blighting our lives, but some experts have an interesting, if controversial, theory: depression can be "good for us", or at least a force for good in our lives.

Is this the start of a new Negative Psychology1 movement? Let's all seek out personal tragedy, sadness, insomnia, and a profound sense of failure and hopelessness, because it's good for us!!

Last year, author and blogger Jonah Lehrer had a lengthy (and controversial) essay in the New York Times Magazine on Depression's Upside. The main idea, that depression has cognitive and evolutionary advantages, was largely based on a review paper by Andrews and Thomson (2009). In it, they put forth the analytical rumination hypothesis: depression is an evolved response to complex problems, and focusing on them to the exclusion of everything else is beneficial. In response, The Neurocritic was motivated to write about Depression's Cognitive Downside:

On the contrary, numerous papers have shown that impairments in cognitive processes such as executive control, attention, and memory persist after a depressed person has recovered (Andersson et al., 2010; Baune et al., 2010; Hammar et al., 2009). In actively depressed patients, Baune and colleagues (2010) found impairments in all domains tested: immediate memory, visuospatial construction, language, attention, and delayed memory. These deficits can contribute to lower social and occupational functioning and a diminished quality of life. In addition, depression can be associated with declines in problem solving abilities on neuropsychological tests such as the Wisconsin Card Sorting Test and the Tower of London test.

Now, a new paper by von Helversen et al. (2011) has claimed that depression is good for decision making. 
Lehrer wrote about this study, as support for the analytical rumination hypothesis, in Does Depression Help Us Think Better?

Here’s where things get interesting: depressed patients approximated the optimal strategy [for hiring the best applicant in a simulated job search] much more closely than non-depressed participants did. The main problem with healthy subjects is that they proved lazy, unwilling to search through enough applicants. Those with depression, on the other hand, were much more willing to keep on considering alternatives, which is why they performed far better on the task. While this study comes with many caveats, it remains an interesting demonstration that depression, at least in specific situations, seems to enhance our analytical skills, making us better at focusing on social dilemmas.

Participants in the study were 37 inpatients diagnosed with major depression upon admission to the hospital (10 of whom were omitted "due to technical difficulties with the choice task"). The 27 remaining patients were classified as either "depressed" (n=15) or "recovered" (n=12) based on improved scores on the Patient Health Questionnaire (PHQ-D) between admission and testing, a mean of 6.25 days later -- an incredibly rapid remission, which makes one wonder about the actual severity and why they were admitted in the first place. Only half of the patients, both depressed and recovered, were on antidepressants (none were on other medications), which seems unusual for patients who may have been suicidal. Perhaps the criteria for admission to the psych ward in Germany are different than they are in the U.S. and Canada. The still-depressed patients had been in hospital an average of 4.20 days when they were tested (which was not significantly different from the recovered patients). 
It wasn't completely clear whether any of the patients were already on antidepressants before admission, or whether pharmacological treatment started during hospitalization for those on meds.2 The paper did not state whether any of the depressed patients had another diagnosis, such as an anxiety disorder of any sort (co-morbidity is common).

Mean scores on the Beck Depression Inventory (BDI) were higher in the Depressed group (29.13) than in the Recovered group (16.67) or the Control participants (6.63), who also differed from each other. BDI scores of 14–19 are considered mildly depressed, 20–28 moderately depressed, and 29–63 severely depressed. So patients in the Depressed group scored at the low end of severely depressed, the Recovered participants were mildly depressed, and the Controls (n=27) were not depressed at all.

The task administered to all participants is called the "Secretary Problem":

The sequential decision-making task consisted of playing 30 games of a secretary-type problem. Each game challenged participants to find the best candidate for a job out of a sequence of 40 applicants. The 40 applicants were presented one after another, in a random sequence. After an applicant was presented, participants needed to decide whether they would accept the applicant or not. If they accepted the applicant, the game concluded and the next game started. If they rejected the applicant, the next applicant was presented. Rejected applicants could not be chosen later in that game. Information about the current candidate included their relative ranking compared to the candidates that came before, but not their absolute ranking. Points were awarded based on the absolute ranking of the candidate chosen on each round. If a participant didn't make a choice by the end of the sequence, they were forced to accept the final candidate. 
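The classic optimal strategy for this kind of task is a stopping rule: reject roughly the first n/e applicants to calibrate, then accept the first applicant who beats everyone seen so far. A minimal Python sketch of one game, assuming (for illustration only -- this is not the authors' computational model, and the names are mine) that applicant quality is a strict ranking from 1 (best) to 40 (worst):

```python
import random

def play_secretary(n_applicants=40, search_threshold=15, rng=random):
    """Simulate one game of a secretary-type problem.

    Rejects the first `search_threshold` applicants unconditionally,
    then accepts the first applicant who is the best seen so far --
    using only relative-rank information, as in the task. Returns the
    absolute rank (1 = best) of the applicant finally chosen.
    """
    qualities = list(range(1, n_applicants + 1))
    rng.shuffle(qualities)
    # Best rank observed during the initial rejection phase
    best_seen = min(qualities[:search_threshold], default=n_applicants + 1)
    for quality in qualities[search_threshold:]:
        if quality < best_seen:
            return quality  # first "best so far" applicant: accept
    return qualities[-1]    # forced to accept the final applicant

# With 40 applicants, a threshold near 40/e (about 15) maximizes the
# chance of hiring the single best candidate.
random.seed(1)
results = [play_secretary(search_threshold=15) for _ in range(1000)]
print(sum(r == 1 for r in results) / 1000)  # share of games finding the best
```

The sketch makes the trade-off concrete: raising `search_threshold` lengthens the search and raises the bar for acceptance, at the cost of more often being forced to take the last applicant -- which is the dimension on which the depressed and non-depressed groups reportedly differed.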
So it seems that an indecisive person would be more likely to continue the search for a longer time... Results showed a trend in that direction (p=.08): search length was 23.37 applicants for the Depressed group, 16.87 for Recovered, and 17.96 for Controls. Performance goals for each round (how good a candidate would have to be in order to be chosen) and the relative rank of candidates did not differ between groups. However, the number of points awarded for each game did differ (p=.02): 37.67 for Depressed, 35.50 for Recovered, and 35.17 for Controls. A computational model suggested that the Depressed group had higher internal thresholds for the first and second, but not the third, threshold. A caveat from the authors:

However, although we found that depressed participants had higher thresholds than did nondepressed participants, we did not find significant differences in the self-reported goals of participants. This suggests that differences in behavior may not result from participants’ conscious effort to perform well. Thus, increases in thresholds could be an artifact stemming from greater persistence and the inability to disengage from a task.

What does this mean? That severely depressed inpatients should be given the task of selecting job candidates for Fortune 500 companies, while they are so impaired otherwise that they are unable to work or function socially? Is a very modest performance benefit in a laboratory sequential decision making task worth the pain and suffering of severe depression, along with its concomitant deficits in other cognitive domains?... Read more »
