The Neurocritic

324 posts · 409,851 views

Born in West Virginia in 1980, The Neurocritic embarked upon a roadtrip across America at the age of thirteen with his mother. She abandoned him when they reached San Francisco and The Neurocritic descended into a spiral of drug abuse and prostitution. At fifteen, The Neurocritic's psychiatrist encouraged him to start writing as a form of therapy.


  • January 22, 2016
  • 08:30 AM
  • 733 views

This Neuroimaging Method Has 100% Diagnostic Accuracy (or your money back)

by The Neurocritic in The Neurocritic

doi:10.1371/journal.pone.0129659.g003

Did you know that SPECT imaging can diagnose PTSD with 100% accuracy (Amen et al., 2015)? Not only that, out of a sample of 397 patients from the Amen Clinic in Newport Beach, SPECT was able to distinguish between four different groups with 100% accuracy! That's right, the scans of (1) healthy participants, and patients with (2) classic post-traumatic stress disorder (PTSD), (3) classic traumatic brain injury (TBI), and (4) both disorders..... were all classified with 100% accuracy!

TRACK-TBI investigators, your 3T structural and functional MRI outcome measures are obsolete. NIMH, the hard work of developing biomarkers for mental illness is done, you can shut down now. Except none of this research was funded by you... The finding was #19 in a list of the top 100 stories by Discover Magazine.

How could the Amen Clinics, a for-profit commercial enterprise, accomplish what an army of investigators with billions in federal funding could not?

The authors1 relied on a large database of scans collected from multiple sites over a 20-year period. The total sample included 20,746 individuals who visited one of nine Amen Clinics from 1995 to 2014 for the purposes of psychiatric and/or neurological evaluation (Amen et al., 2015). The first analysis included a smaller, highly selected sample matched on a number of dimensions, including psychiatric comorbidities (Group 1).

You'll notice the percentage of patients with ADHD was remarkably high (58%, matched across the three patient groups). Perhaps that's because... I did not know that. Featuring Johnny Cash ADD.

SPECT uses a radioactive tracer injected 30 minutes before a scan that will assess either the “resting state” or an “on-task” condition (a continuous performance task, in this study). Clearly, SPECT is not the go-to method if you're looking for decent temporal resolution to compare two conditions of an active attention task. The authors used a region of interest (ROI) analysis to measure tracer activity (counts) in specific brain regions.

I wondered about the circularity of the clinical diagnosis (i.e., were the SPECT scans used to aid diagnosis?), particularly since “Diagnoses were made by board certified or eligible psychiatrists, using all of the data available to them, including detailed clinical history, mental status examination and DSM-IV or V criteria...” But we were assured that wasn't the case: “These quantitative ROI metrics were in no way used to aid in the clinical diagnosis of PTSD or TBI.” The rest of the methods (see Footnote 2) were opaque to me, as I know nothing about SPECT.

A second analysis relied on visual readings (VR) of about 30 cortical and subcortical ROIs. “Raters did not have access to detailed clinical information, but did know age, gender, medications, and primary presenting symptoms (ex. depressive symptoms, apathy, etc.).” Hmm...

But the quantitative ROI analysis gave superior results to the clinician VR. So superior, in fact, that the sensitivity/specificity in distinguishing one group from another was 100% (indicated by red boxes below). The VR distinguished patients from controls with 100% accuracy, but was not as good for classifying the different patient groups during the resting state scan — only a measly 86% sensitivity, 81% specificity for TBI vs. PTSD, which is still much better than other studies. (A toy calculation of these quantities appears after this excerpt.) However, results from the massively sized Group 2 were completely unimpressive.3

Why is this so important? PTSD and TBI can show overlapping symptoms in war veterans and civilians alike, and the disorders can co-occur in the same individual. More accurate diagnosis can lead to better treatments. This active area of research is nicely reviewed in the paper, but no major breakthroughs have been reported yet. So the claims of Amen et al. are remarkable. Stunning if true.

But they're not. They can't be. The accuracy of the classifier exceeds the precision of the measurements, so this can't be possible. What is the test-retest reliability of SPECT? What is the concordance across sites? Was there no change in imaging protocol, no improvements or upgrades to the equipment over 20 years? SPECT is sensitive to motion artifact, so how was that handled, especially in patients who purportedly have ADHD? SPECT has been noted for its poor spatial resolution compared to other functional neuroimaging techniques like PET and fMRI. A panel of ... Read more »
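For readers who want to check the arithmetic behind claims like "86% sensitivity, 81% specificity," here is a minimal sketch in Python of how those quantities fall out of a classifier's confusion matrix. The counts below are invented for illustration and are not from Amen et al. (2015).

```python
import numpy as np

# Hypothetical confusion matrix for a two-group classification (TBI vs. PTSD).
# Rows = true diagnosis, columns = predicted diagnosis. These counts are
# made up for illustration; they are NOT taken from Amen et al. (2015).
#                      predicted TBI   predicted PTSD
confusion = np.array([[86,             14],    # true TBI
                      [19,             81]])   # true PTSD

tp, fn = confusion[0, 0], confusion[0, 1]   # true positives, false negatives (TBI as "positive")
fp, tn = confusion[1, 0], confusion[1, 1]   # false positives, true negatives

sensitivity = tp / (tp + fn)   # proportion of true TBI cases correctly identified
specificity = tn / (tn + fp)   # proportion of true PTSD cases correctly identified

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# A claim of 100%/100% means zero off-diagonal entries: every single case
# classified correctly, which is why measurement reliability matters so much.
```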

  • January 8, 2016
  • 07:47 AM
  • 906 views

Opioid Drugs for Mental Anguish: Basic Research and Clinical Trials

by The Neurocritic in The Neurocritic

The prescription opioid crisis of overdosing and overprescribing has reached epic proportions, according to the North American media. Just last week, we learned that 91% of patients who survive opioid overdose are prescribed more opioids! The CDC calls it an epidemic, and notes there's been “a 200% increase in the rate of overdose deaths involving opioid pain relievers and heroin.” A recent paper in the Annual Review of Public Health labels it a “public health crisis” and proposes “interventions to address the epidemic of opioid addiction” (Kolodny et al., 2015).

In the midst of this public and professional outcry, why on earth would anyone recommend opioid drugs as a treatment for severe depression and suicidal ideation??

Let's revisit the questions posed in my previous post: Does the pain of mental anguish rely on the same neural machinery as physical pain? Can we treat these dreaded ailments with the same medications?

The opioid-for-depression proponents would answer both of those questions in the affirmative,1 with some qualifications. First off, the actual medication in question (and its dose) is different from the typically abused opiate/opioid drug. As far as I can tell, no one is clamoring for narcotic analgesics like OxyContin and Vicodin to be used as antidepressants.

In his 2008 paper on the Psychotherapeutic Benefits of Opioid Agonist Therapy, Dr. Peter L. Tenore reviewed the history of the Opium Cure and declared, “Opioids have been used for centuries to treat a variety of psychiatric conditions with much success.” However, these drugs can be highly addictive (obviously), so he issued this caveat at the end of the paper:

It should be noted that opioids do not have FDA approval for the treatment of psychiatric disorders. The intent of this paper was not to suggest that practitioners should prescribe opioids in a manner not approved by the FDA, but rather it was to explore the mechanisms and develop hypotheses that might explain the observation that opioid-dependent psychiatric patients in appropriately certified opioid replacement therapy programs (i.e., methadone treatment programs) stabilize on higher opioid dosages than those without psychiatric diagnoses.

Methadone and especially low-dose buprenorphine are the drugs being tested for their antidepressant efficacy, even in those who have no opioid abuse issues. Buprenorphine is a mixed partial μ/κ agonist with complex actions, including:

  • Antagonist (blocker) of κ-opioid receptors (KORs) that bind dynorphins (endogenous opioids associated with anxiety and dysphoria)
  • Partial agonist at μ-opioid receptors (MORs), producing analgesic effects but with less euphoria and less respiratory depression than full agonists

Basic research in rodents suggests that KORs may be a promising target for potential psychiatric treatments in humans, based on improvements shown in standard behavioral assays such as the forced swim test and the elevated plus maze test (Crowley & Kash, 2015).2 But there's still a long way to go. In addition to the difficulty of modeling mental anguish in animals, the complexity of the dynorphin/KOR system — which can exhibit paradoxical and “convoluted” effects on behavior3 — presents a barrier to clinical translation.

In contrast, a very different approach uses affect modeling in an effort to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). In this view, current models of depression have hindered new breakthroughs because of their focus on animal behaviors, instead of animal emotions.

Panksepp maintains that separation distress and infant versions of psychic pain, excessive sadness, and grief are mediated by the PANIC system, which is soothed by opioids. Chicks, kittens, puppies, and other infant animals emit distress vocalizations when separated from their mothers. Rat pups emit ultrasonic vocalizations and baby monkeys “coo”. These innate, reflexive, and adaptive behaviors are reduced with low doses of morphine.4 Panksepp and colleagues have inferred that very strong and human-like emotions are associated with distress vocalizations.

By way of example, here is my adult cat. He's very affectionate and chatty. He requires a lot of attention and doesn't like to be alone. Does he meow and miss me when I'm on vacation? I imagine he does. Do I think he feels psychic pain and grief while I'm gone? No.

Watt and Panksepp (2009) argue that depression is an evolutionarily conserved mechanism to terminate separation distress, drawing on psychoanalytic concepts like object relations theory as well as the literature on neuropeptides and neuromodulators implicated in major depression.

Nopan Treatment of Acute Suicidality

The research on separation distress in animals helped motivate a clinical trial that was recently published in the American Journal of Psychiatry (Yovell et al., 2015). The initial daily dose of Nopan (0.1 or 0.2 mg sublingual buprenorphine hydrochloride) was relatively low, reaching a maximum dose of 0.8 mg daily by the end of the four-week trial (mean = 0.44 mg). By way of comparison, the ... Read more »

  • December 29, 2015
  • 07:04 AM
  • 1,045 views

Social Pain Revisited: Opioids for Severe Suicidal Ideation

by The Neurocritic in The Neurocritic

Does the pain of mental anguish rely on the same neural machinery as physical pain? Can we treat these dreaded ailments with the same medications? These issues have come to the fore in the field of social/cognitive/affective neuroscience.

As many readers know, Lieberman and Eisenberger (2015) recently published a controversial paper claiming that a brain region called the dorsal anterior cingulate cortex (dACC, shown above) is “selective” for pain.1 This finding fits with their long-time narrative that rejection literally “hurts” — social pain is analogous to physical pain, and both are supported by activity in the same regions of dACC (Eisenberger et al., 2003). Their argument is based on work by Dr. Jaak Panksepp and colleagues, who study separation distress and other affective responses in animals (Panksepp & Yovell, 2014).

Panksepp wrote The Book on Affective Neuroscience in 1998, and coined the term even earlier (Panksepp, 1992). He also wrote a Perspective piece in Science to accompany Eisenberger et al.'s 2003 paper:

We often speak about the loss of a loved one in terms of painful feelings, but it is still not clear to what extent such metaphors reflect what is actually happening in the human brain? Enter Eisenberger and colleagues ... with a bold neuroimaging experiment that seeks to discover whether the metaphor for the psychological pain of social loss is reflected in the neural circuitry of the human brain. Using functional magnetic resonance imaging (fMRI), they show that certain human brain areas that “light up” during physical pain are also activated during emotional pain induced by social exclusion [i.e., exclusion from playing a video game].

But as I've argued for years, Social Pain and Physical Pain Are Not Interchangeable. Whenever I read an article proclaiming that “the brain bases of social pain are similar to those of physical pain”, I am reminded of how phenomenologically DIFFERENT they are. And subsequent work has demonstrated that physical pain and actual social rejection (a recent romantic break-up) do not activate the same regions of dACC (Woo et al., 2014). Furthermore, multivariate activation patterns across the entire brain can discriminate pain and rejection with high accuracy.2 (A toy illustration of this kind of classifier appears after this excerpt.)

Modified from Fig. 3 (Woo et al., 2014). Differences between fMRI pattern-based classifiers for pain and rejection.

Feelings of rejection were elicited in the participants by showing them pictures of their ex-partners (vs. pictures of close friends), and physical pain was elicited by applying painful heat to the forearm (vs. warm heat).

Does this mean there is no overlap between brain systems that can dampen physical and emotional pain (e.g., endogenous opioids)? Of course not; otherwise those suffering from utter despair, unspeakable loneliness, and other forms of psychic turmoil would not self-medicate with mind-altering substances.

Separation Distress: Of Mice and Psychoanalysis

Although Panksepp has worked primarily with rodents and other animals throughout his career, he maintains a keen interest in neuropsychoanalysis, an attempt to merge Freudian psychoanalysis with contemporary neuroscience. Neuropsychoanalysis “seeks to understand the human mind, especially as it relates to first-person experience.” If you think that's a misguided (and impossible) quest, you might be surprised by some of the prominent neuroscientists who have signed on to this agenda (see these posts).

Prof. Panksepp is currently collaborating with Prof. Yoram Yovell, a psychoanalyst and neuroscientist at the Institute for the Study of Affective Neuroscience (ISAN) in Haifa. A recent review paper addresses their approach of affective modeling in animals as a way to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). Their view is that current models of depression, which focus on animal behaviors instead of animal emotions, have hindered new breakthroughs in treatments for depression. It's actually a fascinating and ambitious research program:

We admit that our conceptual position may be only an empirical/ontological approximation, especially when contrasted to affective qualia in humans … but it is at least a workable empirical approach that remains much underutilized. Here we advance the view that such affective modeling can yield new medical treatments more rapidly than simply focusing on behavioral processes in animals. In sum, we propose that the neglect of affect in preclinical psychiatric modeling may be a major reason why no truly new psychiatric medicinal treatments have arisen from behavior-only preclinical modeling so far.

They propose that three key primal emotional systems3 may be critical for understanding depression: SEEKING (enthusiasm-exuberance), PANIC (psychic pain), and PLAY (joyful exuberance). If these constructs sound highly anthropomorphic when applied to rats, it's because they are!! Perhaps you'd rather “reaffirm classical behaviorist dogma” (Panksepp & Yovell, 2014) and stick with more traditional notions like brain reward systems, separation distress, and 50-kHz ultrasonic vocalizations (e.g., during tickling, mating, and play) when studying rodents.

Of interest today is the PANIC system (Panksepp & Yovell, 2014), which “mediates the psychic pain of separation distress (i.e. excessive sadness and grief), which can be counteracted by minimizing PANIC arousals (as with low-dose opioids).” Since low-dose opioids alleviate separation distress in animals (based on reductions in distress vocalizations), why not give them to suicidal humans suffering from psychic pain?

Well... because making strong inferences about the contents of animal minds is deeply problematic (... Read more »
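As a rough illustration of what "multivariate pattern classification" means in this context (this is not the actual Woo et al. pipeline), the sketch below trains a cross-validated linear classifier to tell apart two classes of simulated whole-brain activation patterns standing in for "pain" and "rejection" trials. All dimensions, signal strengths, and noise levels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 60, 500          # invented dimensions
labels = np.repeat([0, 1], n_trials)  # 0 = physical pain, 1 = social rejection

# Simulated activation patterns: each condition gets its own weak spatial
# signature plus noise. Real studies use beta maps from single trials or runs.
signature_pain = rng.normal(0, 1, n_voxels)
signature_rejection = rng.normal(0, 1, n_voxels)
X = np.vstack([rng.normal(signature_pain, 5.0, (n_trials, n_voxels)),
               rng.normal(signature_rejection, 5.0, (n_trials, n_voxels))])

# Cross-validated accuracy of a linear classifier on the two pattern classes.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # well above chance (0.5) if the patterns differ
```

The point of the cross-validation step is that accuracy is always estimated on trials the classifier has never seen, which is what licenses the "high accuracy" claims in studies of this kind.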

  • December 12, 2015
  • 05:47 PM
  • 888 views

This Week in Neuroblunders: Optogenetics Edition

by The Neurocritic in The Neurocritic

Recent technological developments in neuroscience have enabled rapid advances in our knowledge of how neural circuits function in awake behaving animals. Highly targeted and reversible manipulations using light (optogenetics) or drugs have allowed scientists to demonstrate that activating a tiny population of neurons can evoke specific memories or induce insatiable feeding.

But this week we learned these popular and precise brain stimulation and inactivation methods may produce spurious links to behavior!! And that “controlling neurons with light or drugs may affect the brain in more ways than expected”! Who knew that rapid and reversible manipulations of a specific cell population might actually affect (gasp) more than the targeted circuit, suggesting that neural circuits do not operate in isolation??

Apparently, a lot of people already knew this.

Here's the dire Nature News report:

...stimulating one part of the brain to induce certain behaviours might cause other, unrelated parts to fire simultaneously, and so make it seem as if these circuits are also involved in the behaviour.

According to Ölveczky, the experiments suggest that although techniques such as optogenetics may show that a circuit can perform a function, they do not necessarily show that it normally performs that function. “I don’t want to say other studies have been wrong, but there is a danger to overinterpreting,” he says.

But the paper in question (Otchy et al., 2015) was not primarily about that problem. The major theme is shown in the figure above — the difference between acute manipulations using a drug (muscimol) to transiently inactivate a circuit versus the chronic effects of permanent damage (which show remarkable recovery).1 In the songbird example, acute inactivation of the nucleus interface (Nif) vocal control area (and its “off-target” attachments) warped singing, but the “chronic” lesion did not.2

In an accompanying commentary, Dr. Thomas C. Südhof asked:

How should we interpret these experiments? Two opposing hypotheses come to mind. First, that acute manipulations are unreliable and should be discarded in favour of chronic manipulations. Second, that acute manipulations elicit results that truly reflect normal circuit functions, and the lack of changes after chronic manipulations is caused by compensatory plasticity.

But not so fast! said Südhof (2015), who then stated the obvious. “Many chronic manipulations of neural circuits (both permanent genetic changes and physical lesions) do actually produce major behavioural changes.” [as if no one had ever heard of H.M. or Phineas Gage or Leborgne before now.]

The acute/chronic conundrum is nothing new in the world of human neurology. But centuries of crudely observing accidents of nature, with no control over which brain regions are damaged, and no delineation of precise neural mechanisms for behavior, don't count for much in our store of knowledge about acute vs. chronic manipulations of neural circuits.

Let's take a look at a few examples anyway. In his 1876 Lecture on the Prognosis of Cerebral Hæmorrhage, Dr. Julius Althaus discussed recovery of function:

Do patients ever completely recover from an attack of cerebral hæmorrhage? This question used formerly to be unhesitatingly answered in the affirmative. ... The extent to which recovery of function may take place depends–

1. Upon the quantity of blood which has been effused. ...
2. Upon the portion of the brain into which the effusion has taken place. Sensation is more easily re-established than motion; and hæmorrhage into the thalamus opticus seems to give better prospects of recovery than when the blood tears up the corpus striatum. ...

[etc.]

In his 1913 textbook of neurology (Organic and Functional Nervous Diseases), Dr. Moses Allen Starr discussed aspects of paralysis from cortical disease, and the uniqueness of motor representations across individuals: “Every artisan, every musician, every dancer, has a peculiar individual store of motor memories. Some individuals possess a greater variety of them than others. Hence the motor zone on the cortex is of different extent in different persons, each newly acquired set of movements increasing its area.”

In 1983, we could read about Behavioral abnormalities after right hemisphere stroke and then Recovery of behavioral abnormalities after right hemisphere stroke.

More recently, there's been an emphasis on connectome-based approaches for quantifying the effects of focal brain injuries on large-scale network interactions, and how this might predict neuropsychological outcomes. So the trend in human neuroscience is to acknowledge the impact of chronic lesions on distant brain regions, rather than the current contention [in animals, of course] that “acute manipulations are probably more susceptible to off-target effects than are chronic lesions.”

But I digress... Based on two Nature commentaries about the Otchy et al. paper, I was expecting “ah ha, gotcha, optogenetics is a fatally flawed technique.” This Hold Your Horses narrative fits nicely into a recap of neurogaffes in high places. One of the experiments did indeed use an optogenetic manipulation, but the issue wasn't specific to that method.

Ultimately, the neuroblunder for me wasn't the Experimental mismatch in neural circuits (or a failure of optogenetics per se), it was the mismatch between the-problem-as-hyped and a lack of historical context for said problem.

Footnotes

1... Read more »

  • November 30, 2015
  • 01:34 AM
  • 849 views

Carving Up Brain Disorders

by The Neurocritic in The Neurocritic

Neurology and Psychiatry are two distinct specialties within medicine, both of which treat disorders of the brain. It's completely uncontroversial to say that neurologists treat patients with brain disorders like Alzheimer's disease and Parkinson's disease. These two diseases produce distinct patterns of neurodegeneration that are visible on brain scans. For example, Parkinson's disease (PD) is a movement disorder caused by the loss of dopamine neurons in the midbrain.

Fig. 3 (modified from Goldstein et al., 2007). Brain PET scans superimposed on MRI scans. Note decreased dopamine signal in the putamen and substantia nigra (S.N.) bilaterally in the patient.

It's also uncontroversial to say that drugs like L-DOPA and invasive neurosurgical interventions like deep brain stimulation (DBS) are used to treat PD.

On the other hand, some people will balk when you say that psychiatric illnesses like bipolar disorder and depression are brain disorders, and that drugs and DBS (in severe intractable cases) may be used to treat them. You can't always point to clear-cut differences in the MRI or PET scans of psychiatric patients, as you can with PD (which is a particularly obvious example).

The diagnostic methods used in neurology and psychiatry are quite different as well. The standard neurological exam assesses sensory and motor responses (e.g., reflexes) and basic mental status. PD has sharply defined motor symptoms including tremor, rigidity, impaired balance, and slowness of movement. There are definitely cases where the symptoms of PD should be attributed to another disease (most notably Lewy body dementia)1, and other examples where neurological diagnosis is not immediately possible. But by and large, no one questions the existence of a brain disorder.

Things are different in psychiatry. Diagnosis is not based on a physical exam. Psychiatrists and psychologists give clinical interviews based on the Diagnostic and Statistical Manual (DSM-5), a handbook of mental disorders defined by a panel of experts with opinions that are not universally accepted. The update from DSM-IV to DSM-5 was highly controversial (and widely discussed). The causes of mental disorders are not only biological, but often include important social and interpersonal factors. And their manifestations can vary across cultures.

Shortly before the release of DSM-5, the former director of NIMH (Dr. Tom Insel) famously dissed the new manual:

The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

In other words, where are the clinical tests for psychiatric disorders?

For years, NIMH has been working on an alternate classification scheme, the Research Domain Criteria (RDoC) project, which treats mental illnesses as brain disorders that should be studied according to domains of functioning (e.g., negative valence). Dimensional constructs such as acute threat (“fear”) are key, rather than categorical DSM diagnoses. RDoC has been widely discussed on this blog and elsewhere – it's the best thing since sliced bread, it's necessary but very oversold, or it's ill-advised.

What does this have to do with neurology, you might ask? In 2007, Insel called for the merger of neurology and psychiatry:

Just as research during the Decade of the Brain (1990-2000) forged the bridge between the mind and the brain, research in the current decade is helping us to understand mental illnesses as brain disorders. As a result, the distinction between disorders of neurology (e.g., Parkinson's and Alzheimer's diseases) and disorders of psychiatry (e.g., schizophrenia and depression) may turn out to be increasingly subtle. That is, the former may result from focal lesions in the brain, whereas the latter arise from abnormal activity in specific brain circuits in the absence of a detectable lesion. As we become more adept at detecting lesions that lead to abnormal function, it is even possible that the distinction between neurological and psychiatric disorders will vanish, leading to a combined discipline of clinical neuroscience.

Actually, Insel's view dates back to 2005 (Insel & Quirion, 2005)...2

Future training might begin with two post-graduate years of clinical neuroscience shared by the disciplines we now call neurology and psychiatry, followed by two or three years of specialty training in one of several sub-disciplines (ranging from peripheral neuropathies to public sector and transcultural psychiatry). This model recognizes that the clinical neurosciences have matured sufficiently to resemble internal medicine, with core training required prior to specializing.

...and was expressed earlier by Dr. Joseph B. Martin, Dean of Harvard Medical School (Martin, 2002):

Neurology and psychiatry have, for much of the past century, been separated by an artificial wall created by the divergence of their philosophical approaches and research and treatment methods. Scientific advances in recent decades have made it clear that this separation is arbitrary and counterproductive. .... Further progress in understanding brain diseases and behavior demands fuller collaboration and integration of these fields. Leaders in academic medicine and science must work to break down the barriers between disciplines.

Contemporary leaders and observers of academic medicine are not all equally ecstatic about this prospect, however. ... Read more »

Crossley, N., Scott, J., Ellison-Wright, I., & Mechelli, A. (2015) Neuroimaging distinction between neurological and psychiatric disorders. The British Journal of Psychiatry, 207(5), 429-434. DOI: 10.1192/bjp.bp.114.154393  

David, A., & Nicholson, T. (2015) Are neurological and psychiatric disorders different?. The British Journal of Psychiatry, 207(5), 373-374. DOI: 10.1192/bjp.bp.114.158550  

  • November 23, 2015
  • 12:58 AM
  • 1,074 views

Happiness Is a Large Precuneus

by The Neurocritic in The Neurocritic

What is happiness, and how do we find it? There are 93,290 books on happiness at Amazon.com. Happiness is Life's Most Important Skill, an Advantage and a Project and a Hypothesis that we can Stumble On and Hard-Wire in 21 Days. The Pursuit of Happiness is an Unalienable Right granted to all human beings, but it also generates billions of dollars for the self-help industry.

And now the search for happiness is over! Scientists have determined that happiness is located in a small region of your right medial parietal lobe. Positive psychology gurus will have to adapt to the changing landscape or lose their market edge. “My seven practical, actionable principles are guaranteed to increase the size of your precuneus or your money back.”

The structural neural substrate of subjective happiness is the precuneus.

A new paper has reported that happiness is related to the volume of gray matter in a 222.8 mm3 cluster of the right precuneus (Sato et al., 2015). What does this mean? Taking the finding at face value, there was a correlation (not a causal relationship) between precuneus gray matter volume and scores on the Japanese version of the Subjective Happiness Scale.1

Fig. 1 (modified from Sato et al., 2015). Left: Statistical parametric map (p < 0.001, peak-level uncorrected for display purposes). The blue cross indicates the location of the peak voxel. Right: Scatter plot of the adjusted gray matter volume as a function of the subjective happiness score at the peak voxel. [NOTE: Haven't we agreed to not show regression lines through scatter plots based on the single voxel where the effect is the largest?? A small simulation of this problem appears after this excerpt.]

“The search for happiness: Using MRI to find where happiness happens,” said one deceptive headline. Should we accept the claim that one small region of the brain is entirely responsible for generating and maintaining this complex and desirable state of being? NO. Of course not. And the experimental subjects were not actively involved in any sort of task at all.

The study used a static measure of gray matter volume in four brain Regions of Interest (ROIs): left anterior cingulate gyrus, left posterior cingulate gyrus, right precuneus, and left amygdala. These ROIs were based on an fMRI activation study in 26 German men (mean age 33 yrs) who underwent a mood induction procedure (Habel et al., 2005). The German participants viewed pictures of faces with happy expressions and were told to “Look at each face and use it to help you to feel happy.” The brain activity elicited by happy faces was compared to activity elicited by a non-emotional control condition. Eight regions were reported in their Table 1.

Table 1 (modified from Habel et al., 2005).

Only four of those regions were selected as ROIs by Sato et al. (2015). One of these was a tiny 12-voxel region in the paracentral lobule, which was called precuneus by Sato et al. (2015).

Image: John A Beal, PhD. Dept. of Cellular Biology & Anatomy, Louisiana State University Health Sciences Center Shreveport.

Before you say I'm being overly pedantic, we can agree that the selected coordinates are at the border of the precuneus and the paracentral lobule. The more interesting fact is that the sadness induction of Habel et al. (2005) implicated a very large region of the posterior precuneus and surrounding regions (1562 voxels). An area over 100 times larger than the Happy Precuneus. Oops.

But the precuneus contains multitudes, so maybe it's not so tragic. The precuneus is potentially involved in very lofty functions like consciousness and self-awareness and the recollection of autobiographical memories. It's also a functional core of the default-mode network (Utevsky et al., 2014), which is active during daydreaming and mind wandering and unconstrained thinking. But it seems a bit problematic to use hand-picked ROIs from a study of transient and mild “happy” states (in a population of German males) to predict a stable trait of subjective happiness in a culturally distinct group of younger Japanese college students (26 women, 25 men).

Cross-Cultural Notions of Happiness

Isn't “happiness” a social construct (largely defined by Western thought) that varies across cultures?... Read more »

Sato, W., Kochiyama, T., Uono, S., Kubota, Y., Sawada, R., Yoshimura, S., & Toichi, M. (2015) The structural neural substrate of subjective happiness. Scientific Reports, 16891. DOI: 10.1038/srep16891  
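To see why drawing a regression line through the single peak voxel is misleading, here is a small simulation in Python (not based on the Sato et al. data; subject count, voxel count, and all values are invented). Even when gray matter volume is pure noise with respect to the happiness scores, the best voxel out of a few thousand will show an impressive-looking correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

n_subjects, n_voxels = 51, 2000            # 51 participants, invented voxel count
happiness = rng.normal(0, 1, n_subjects)   # simulated happiness scores
volume = rng.normal(0, 1, (n_voxels, n_subjects))  # pure-noise "gray matter volume"

# Correlate every voxel with the behavioral score, then pick the peak voxel.
r = np.array([np.corrcoef(v, happiness)[0, 1] for v in volume])
print(f"correlation at the peak voxel:    {r.max():.2f}")
print(f"median |correlation| across voxels: {np.median(np.abs(r)):.2f}")
# The peak voxel can look convincing purely by selection; a scatter plot drawn
# at that voxel would show a tidy regression line through what is only noise.
```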

  • November 16, 2015
  • 05:50 AM
  • 1,064 views

The Neuroscience of Social Media: An Unofficial History

by The Neurocritic in The Neurocritic

There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.

Meshi, Tamir, and Heekeren / Trends in Cognitive Sciences (2015)

More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later). The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).

Fig. 2 (Meshi et al., 2015). Proposed Brain Networks Involved in Social Media Use. (i) mentalizing network: dorsomedial prefrontal cortex (DMPFC), temporoparietal junction (TPJ), anterior temporal lobe (ATL), inferior frontal gyrus (IFG), and the posterior cingulate cortex/precuneus (PCC); (ii) self-referential network: medial prefrontal cortex (MPFC) and PCC; and (iii) reward network: ventromedial prefrontal cortex (VMPFC), ventral striatum (VS), and ventral tegmental area (VTA).

The article's publication was announced on social media:

The emerging neuroscience of social media. New review in @TrendsCognSci: https://t.co/2JDIeCvJsT pic.twitter.com/2vwv827bdI — CellPressNews (@CellPressNews) November 11, 2015

I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter. I proposed that someone should do a study to examine the neural correlates of Twitter use:

It was bound to happen. Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).

A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

(1) Celebrity Fluff
(2) Social Media Marketing Drivel
(3) Friends on your Following List

... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

The major predicted results are as follows:

Reading the Tweets of your close friends will engage a network of regions involved in self-referential processing of similar others, including the posterior superior temporal sulcus (STS) and adjacent temporo-parietal junction (TPJ), and the ventral medial prefrontal cortex (Mitchell et al., 2006).

Fig. 2A (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.

Figure from Paul Pietsch, Ph.D.

The frontal eye fields are i... Read more »

Meshi, D., Tamir, D., & Heekeren, H. (2015) The Emerging Neuroscience of Social Media. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2015.09.004

  • November 11, 2015
  • 03:29 AM
  • 989 views

Obesity Is Not Like Being "Addicted to Food"

by The Neurocritic in The Neurocritic

Credit: Image courtesy of Aalto University

Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Bailer, 2015).

Fig. 1 (Meule, 2015). Number of scientific publications on food addiction (1990-2014). Web of Science search term “food addiction”.

Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

Fig. 2 (Meule, 2015). History of food addiction research.

Thus, the concept of food addiction predates the documented rise in obesity in the US, which really took off in the late 80s to late 90s (as shown below).1

Prevalence of Obesity in the United States, 1960-2012
  • 1960-62: 12.80%
  • 1971-74: 14.10%
  • 1976-80: 14.50%
  • 1988-89: 22.50%
  • 1999-2000: 30.50%
  • 2007-08: 33.80%
  • 2011-12: 34.90%
Sources: Flegal et al. 1998, 2002, 2010; Ogden et al. 2014

One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2 Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):

...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right, so this must mean they're addicted to food? Well, think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). Conversely, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in a mean weight loss of 26.1 kg, or 57.5 lbs), MOR returned to control values, while the unaltered D2 receptors stayed the same.

In the study, 16 obese women (mean BMI=40.4, age 42.8) had PET scans before and six months after undergoing the standard Gastric Bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI=22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions. The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls. (A toy sketch of this kind of pre/post comparison appears after this excerpt.)... Read more »
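For readers unfamiliar with this within-subject design, here is a minimal sketch of the pre/post comparison: the same 16 women measured before and six months after surgery, analyzed with a paired test. All binding values below are invented; only the direction and rough size of the change echo the reported result, and nothing here reproduces the Karlsson et al. analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented mu-opioid receptor (MOR) availability values (binding potential) for
# 16 obese women scanned pre- and post-surgery. The ~23% post-surgical increase
# mirrors the direction of the reported effect, but the numbers are made up.
pre = rng.normal(2.0, 0.3, 16)
post = pre * rng.normal(1.23, 0.10, 16)

t, p = stats.ttest_rel(post, pre)   # paired test: each woman serves as her own control
print(f"mean change = {100 * (post.mean() / pre.mean() - 1):.0f}%, t = {t:.1f}, p = {p:.3g}")
```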

  • October 29, 2015
  • 06:54 AM
  • 930 views

Ophidianthropy: The Delusion of Being Transformed into a Snake

by The Neurocritic in The Neurocritic

Scene from Sssssss (1973). “When Dr. Stoner needs a new research assistant for his herpetological research, he recruits David Blake from the local college. Oh, and he turns him into a snake for sh*ts and giggles.” Movie Review by Jason Grey.

Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding). THERE ARE NO ACTUAL PICTURES OF SNAKES [an important note for snake phobics].

The first case of ophidianthropy was described by Kattimani et al. (2010):

A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside. ... She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. Her most recent treatment regimen (escitalopram plus quetiapine) was also a failure because the snake delusion persisted. “Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...

Scene from Sssssss (1973).

Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):

A 23 year old, married, Hindu male, with past history of ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ... the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.

Wow.

He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved but others did not. “Long term follow up is not available.”

The discussion of this case is a bit... terrifying:

Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).

Kydd, O.U., Major, A., Minor, C (1991). A really neat, squeaky-clean isolation and characterization of two lycanthropogens from nearly subhuman populations of Homo sapiens. J. Ultratough Molec. Biochem. 101: 3521-3532. [this is obviously a fake citation]

Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

Oh my god, the paper by Davis et al. on the Psychopharmacology of Lycanthropy (and "endogenous lycanthropogens") was published in the April 1, 1992 issue of the Canadian Medical Association Journal. There is no such thing as lupinone and buldogone.

Fig. 1 (Davis et al., 1992... Read more »

  • October 26, 2015
  • 01:47 AM
  • 750 views

On the Long Way Down: The Neurophenomenology of Ketamine

by The Neurocritic in The Neurocritic

Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

Ketamine is a dissociative anesthetic and PCP derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically, it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).

Imaging Phenomenologically Distinct Effects of Ketamine

In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.

ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse. Basically, it's a good method for monitoring CBF over a number of minutes.

ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug. Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect.

The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.

Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.

Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on 10 and 12 data points, when outliers can drive phenomenally large effects (see the simulation after this excerpt). One might like to say...

For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.

...but this clearly requires replication studies with placebo comparisons and larger subject groups.

Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

A recent review of seven placebo-controlled, double-blind, randomized clinical trials of ketamine and other NMDA antagonists concluded (Newport et al., 2015):

The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine's mechanism of action. The fleeting nature of ketamine's therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

The mysterious and paradoxical ways of ketamine continue... So take it in don't hold your breath... Read more »
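Here is a quick simulation of the small-n worry raised above (invented numbers, nothing to do with the Pollak et al. data): with only 10 data points, a single extreme subject can manufacture a large correlation out of otherwise unrelated variables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Nine unremarkable subjects: CBF change and symptom score are unrelated noise.
cbf_change = rng.normal(0, 1, 9)
symptom_score = rng.normal(0, 1, 9)

r_without, _ = stats.pearsonr(cbf_change, symptom_score)

# Add one outlier who happens to be extreme on both measures.
cbf_with = np.append(cbf_change, 4.0)
symptom_with = np.append(symptom_score, 4.0)
r_with, _ = stats.pearsonr(cbf_with, symptom_with)

print(f"r without outlier (n=9):   {r_without:+.2f}")
print(f"r with one outlier (n=10): {r_with:+.2f}")
# The second correlation can easily exceed 0.7 even though nine of the ten
# points carry no relationship at all.
```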

  • September 27, 2015
  • 01:02 AM
  • 706 views

Neurohackers Gone Wild!

by The Neurocritic in The Neurocritic

Scene from Listening, a new neuro science fiction film by writer-director Khalil Sullins.

What are some of the goals of research in human neuroscience?

  • To explain how the mind works.
  • To unravel the mysteries of consciousness and free will.
  • To develop better treatments for mental and neurological illnesses.
  • To allow paralyzed individuals to walk again.

Brain decoding experiments that use fMRI or ECoG (direct recordings of the brain in epilepsy patients) to deduce what a person is looking at or saying or thinking have become increasingly popular as well. They're still quite limited in scope, but any study that can invoke “mind reading” or “brain-to-brain” scenarios will attract the press like moths to a flame....

For example, here's how NeuroNews site Brain Decoder covered the latest “brain-to-brain communication” stunt and the requisite sci fi predictions:

Scientists Connect 2 Brains to Play “20 Questions”

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind. . . . This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, [author Andrea Stocco] said.

Mind reading! Yay! But this isn't what happened. No thoughts were decoded in the making of this paper (Stocco et al., 2015). Instead, stimulation of visual cortex did all the “talking.” Player One looked at an LED that indicated “yes” (13 Hz flashes) or “no” (12 Hz flashes). Steady-state visual evoked potentials (a type of EEG signal very common in BCI research) varied according to flicker rate, and this binary code was transmitted to a second computer, which triggered a magnetic pulse delivered to the visual cortex of Player Two if the answer was yes. The TMS pulse in turn elicited a phosphene (a brief visual percept) that indicated yes (no phosphene indicated a “no” answer). (A toy sketch of this frequency-based decoding appears after this excerpt.)

Eventually, we see some backpedalling in the Brain Decoder article:

Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible."

Well, um, we already have the first half of the equation to some small degree (Naselaris et al. 2015 decoded mental images of remembered scenes)...

But the Big Prize goes to.... the decoders of covert speech, or inner thoughts!! (Martin et al. 2014)

Scientists develop a brain decoder that can hear your inner thoughts
Brain decoder can eavesdrop on your inner voice

Listening to Your Thoughts

The new film Listening starts off with a riff on this work and spins into a dark and dangerous place where no thought is private. Given the preponderance of “hearing” metaphors above, it's fitting that the title is Listening, where fiction (in this case near-future science fiction) is stranger than truth. The hazard of watching a movie that depicts your field of expertise is that you nitpick every little thing (like the scalp EEG sensors that record from individual neurons). This impulse was exacerbated by a setting which is so near-future that it's present day.

From Marilyn Monroe Neurons to Carbon Nanotubes

But there were many things I did like about Listening.1 In particular, I enjoyed the way the plot developed in the second half of the film, especially in the last 30 minutes. On the lighter side was this amusing scene of a pompous professor lecturing on the real-life finding of Marilyn Monroe neurons (Quian Quiroga et al., 2005, 2009).

Caltech Professor: “For example, the subject is asked to think about Marilyn Monroe. My study suggests not only conscious control in the hippocampus and parahippocampal cortex, when the neuron....”

Conversation between two grad students in back of class: “Hey, you hear about the new bioengineering transfer?” ...

Caltech Professor: “Mr. Thorogood, perhaps you can enlighten us all with Ryan's gossip? Or tell us what else we can conclude from this study?”

Ryan the douchy hardware guy: “We can conclude that all neurosurgeons are in love with Marilyn Monroe.”

David the thoughtful software guy: “A single neuron has not only the ability to carry complex code and abstract form but is also able to override sensory input through cognitive effort. It suggests thought is a stronger reality than the world around us.”

Caltech Professor: “Unfortunately, I think you're both correct.”

Ryan and David are grad students with Big Plans. They've set up a garage lab (with stolen computer equipment) to work on their secret EEG decoding project. Ryan the douche lets Jordan the hot bioengineering transfer into their boys' club, much to David's dismay.

Ryan: “She's assigned to Professor Hamomoto's experiment with ATP-powered cell-binding nanotube devices.” [maybe these?]

So she gets to stay in the garage. For the demonstration, Ryan sports an EEG net that looks remarkably like the ones made by EGI (shown below on the right). Ryan reckons they'll put cell phone companies out of business with their mind reading invention, but David realizes they have a long way to go...... Read more »
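To make the "flicker rate as binary code" idea concrete, here is a toy sketch of the decoding step described above, using simulated EEG and invented parameters (sampling rate, trial length, signal amplitude); it is not the Stocco et al. analysis. The answer is read out by comparing spectral power at the 13 Hz ("yes") and 12 Hz ("no") stimulation frequencies.

```python
import numpy as np

def simulate_ssvep(freq_hz, fs=250, seconds=4, rng=None):
    """Toy EEG trace: a small oscillation at the attended flicker frequency plus noise."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(fs * seconds) / fs
    return 0.5 * np.sin(2 * np.pi * freq_hz * t) + rng.normal(0, 1, t.size)

def decode_answer(eeg, fs=250, yes_hz=13, no_hz=12):
    """Return 'yes' if spectral power is larger at 13 Hz than at 12 Hz, else 'no'."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    power_yes = spectrum[np.argmin(np.abs(freqs - yes_hz))]
    power_no = spectrum[np.argmin(np.abs(freqs - no_hz))]
    return "yes" if power_yes > power_no else "no"

# Player One attends the 13 Hz ("yes") LED; the decoded bit would then gate a
# TMS pulse (and hence a phosphene) over Player Two's visual cortex.
eeg = simulate_ssvep(13)
print(decode_answer(eeg))   # expected: "yes"
```

The design choice worth noting is that only a single bit per question crosses between the two brains; everything that looks like "mind reading" in the press coverage is carried by that one frequency comparison.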

  • August 31, 2015
  • 04:31 AM
  • 835 views

Cats on Treadmills (and the plasticity of biological motion perception)

by The Neurocritic in The Neurocritic

Cats on a treadmill. From Treadmill Kittens.

It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately. Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.

from BioMotion Lab 1

Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical, or sensitive, period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG to point light displays of a walking man and to scrambled versions of the walking man (see demo).

from BioMotion Lab

Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).

The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).

The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc, but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results: (1) the neural circuitries associated with the processing of BM can specialize in late childhood or adulthood; that is, as soon as visual input becomes available, it initiates the functional maturation of the BM system. Alternatively, the neural systems for BM might mature independently of vision:
(2) either they are shaped cross-modally, or (3) they mature independent of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:

We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...

Footnote

1 Troje & Westhoff (2006): "The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."

Reference

... Read more »
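For readers unfamiliar with the walking-man vs. scrambled-man contrast used above, here is a minimal sketch of how a scrambled point-light stimulus is typically constructed: each dot keeps its own local trajectory but is relocated to a random position, destroying the global body configuration. The joint data are made up; this is a generic illustration, not the BioMotion Lab or Bottari et al. stimulus code.

```python
# Sketch of the biological-motion vs. scrambled-motion contrast: keep each dot's
# local motion, randomize its mean position. Joint trajectories are invented.
import numpy as np

rng = np.random.default_rng(0)

# Fake "walker": trajectories for n_dots joints over n_frames, shape (frames, dots, 2)
n_frames, n_dots = 120, 14
t = np.linspace(0, 2 * np.pi, n_frames)[:, None]
base_xy = rng.uniform(-1, 1, size=(1, n_dots, 2))            # joint layout (the "body")
sway = 0.1 * np.stack([np.sin(t + rng.uniform(0, np.pi, n_dots)),
                       np.cos(t + rng.uniform(0, np.pi, n_dots))], axis=-1)
walker = base_xy + sway                                       # intact point-light walker

def scramble(walker, rng):
    """Keep each dot's motion, randomize its mean position within the display."""
    motion = walker - walker.mean(axis=0, keepdims=True)      # per-dot local trajectory
    new_positions = rng.uniform(-1, 1, size=(1, walker.shape[1], 2))
    return new_positions + motion

scrambled = scramble(walker, rng)
# Sanity check: local motion is preserved, global configuration is not.
assert np.allclose(walker - walker.mean(0), scrambled - scrambled.mean(0))
print("intact shape:", walker.shape, "scrambled shape:", scrambled.shape)
```

The ERP comparison in the study is then simply walking man (BM) vs. this kind of scrambled version (SBM), with the same dots and the same local motion in both.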

  • August 10, 2015
  • 07:35 AM
  • 998 views

Will machine learning create new diagnostic categories, or just refine the ones we already have?

by The Neurocritic in The Neurocritic

How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2

ILLUSTRATION: V. Altounian and C. Smith / SCIENCE

In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously. Yes, RDoC is ambitious (and has its share of naysayers). But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity. 4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes: how do we classify and diagnose brain disorders (make that neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker (a toy calculation of Criterion 1 appears after the excerpt):

1. Criterion 1: diagnosticity

Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists.
Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability

Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability

Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability

Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be val... Read more »
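As a toy illustration of Criterion 1 (diagnosticity) from the Woo and Wager excerpt above, sensitivity and specificity are just two ratios from a 2x2 confusion table. The counts below are invented purely for illustration.

```python
# Criterion 1 (diagnosticity) in miniature: sensitivity and specificity from a
# 2x2 confusion table. The counts are hypothetical.
def diagnosticity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-table counts."""
    sensitivity = tp / (tp + fn)   # true positive rate: detect signal when it exists
    specificity = tn / (tn + fp)   # true negative rate: negative when there is no signal
    return sensitivity, specificity

# Hypothetical classifier output for 50 patients and 50 controls.
sens, spec = diagnosticity(tp=36, fn=14, tn=43, fp=7)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.72, 0.86
```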

Insel, T., & Cuthbert, B. (2015) Brain disorders? Precisely. Science, 348(6234), 499-500. DOI: 10.1126/science.aab2358  

  • August 1, 2015
  • 08:42 PM
  • 837 views

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning

by The Neurocritic in The Neurocritic

R2D3

R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

You should really head over there right now to view it, because it's very impressive.

Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the “reality check” is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.

  • machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
  • accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
  • fMRI signature = “guilt-selective anterior temporal functional connectivity changes” (seems a bit overly specific and esoteric, no?)
  • vulnerability to major depression = 25 participants with remitted depression vs. 21 never-depressed participants

The authors used a “standard leave-one-subject-out procedure in which the classification is cross-validated iteratively by using a model based on the sample after excluding one subject to independently predict group membership” but they did not test their fMRI signature in completely independent groups of participants.

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?

modified from Fig 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs.
controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.

The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here... In their next post, R2D3 will discuss overfitting:

Ideally, the [decision] tree should perform similarly on both known and unknown data. So this one is less than ideal. [NOTE: the one that's 90% in the top figure] These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3

Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr. Daniel Amen. The work of the investigators discussed here is in ... Read more »
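For readers who want to see what the leave-one-subject-out procedure described above actually looks like, here is a minimal sketch using scikit-learn. Sato et al. used Maximum Entropy LDA (MLDA) on functional connectivity features; that implementation isn't in scikit-learn, so ordinary LDA and random features stand in here purely to illustrate the cross-validation scheme, not the paper's method.

```python
# Leave-one-subject-out (LOSO) cross-validation, sketched with scikit-learn.
# LDA and random features are stand-ins for the paper's MLDA and connectivity data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_remitted, n_controls, n_features = 25, 21, 30     # group sizes from the paper; feature count invented
X = rng.normal(size=(n_remitted + n_controls, n_features))
y = np.array([1] * n_remitted + [0] * n_controls)   # 1 = remitted depression, 0 = never-depressed

# One subject is held out per fold; the model is fit on everyone else.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"LOSO accuracy on random features: {scores.mean():.2f}")
```

Note that this procedure only recycles the same 46 subjects. It says nothing about how the classifier would fare on a completely independent sample, which is exactly the generalization test the post argues is missing, and with enough flexible feature selection it is easy to overfit even under LOSO.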

  • July 15, 2015
  • 04:09 AM
  • 817 views

Can Tetris Reduce Intrusive Memories of a Trauma Film?

by The Neurocritic in The Neurocritic

For some inexplicable reason, you watched the torture gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so then you decide to play Tetris. But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al., 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al., 2009):

...Trauma flashbacks are sensory-perceptual, visuospatial mental images. Visuospatial cognitive tasks selectively compete for resources required to generate mental images. Thus, a visuospatial computer game (e.g. "Tetris") will interfere with flashbacks. Visuospatial tasks post-trauma, performed within the time window for memory consolidation [6 hrs], will reduce subsequent flashbacks. We predicted that playing "Tetris" half an hour after viewing trauma would reduce flashback frequency over 1-week.

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film experience, during the 6 hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies on the process of reconsolidation, when an act of remembering will place the memory in a labile state once again, so it can be modified (James et al., 2015).

The procedure was as follows: 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):

...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road.
This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation). The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (the Marathon mode shown above). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, and then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT). Eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (T/F written statements) and visual (Y/N for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT (as shown below).

modified from Fig. 1 (James et al., 2015). Asterisks indicate a significant difference between groups (**p < .001). Error bars represent +1 SEM.

Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.

modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow is 24 hrs later (when the intervention starts working). The solid lines are the results of a generalized additive model. The size of the bubbles represents the number of participants who reported the indicated number of intrusive memories on that particular day.

But now, you might be asking yourself if the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry. Experiment 2 tried to disentangle this by recruiting four groups of participants (n=18 in each) — the original two groups plus two new ones: Reactivation only and Tetris only. And the results from Exp. 2 demonstrated that both were needed.

... Read more »

  • June 28, 2015
  • 03:05 AM
  • 1,010 views

Who Will Pay for All the New DBS Implants?

by The Neurocritic in The Neurocritic

Recently, Science and Nature had news features on big BRAIN funding for the development of deep brain stimulation technologies. The ultimate aim of this research is to treat and correct malfunctioning neural circuits in psychiatric and neurological disorders. Both pieces raised ethical issues, focused on device manufacturers and potential military applications, respectively.

A different ethical concern, not mentioned in either article, is who will have access to these new devices, and who is going to pay the medical costs once they hit the market. DBS for movement disorders is a test case, because Medicare (U.S.) approved coverage for Parkinson's disease (PD) and essential tremor in 2003. Which is good, given that unilateral surgery costs about $50,000.

Willis et al. (2014) examined Medicare records for 657,000 PD patients and found striking racial disparities. The odds of receiving DBS in white PD patients were five times higher than for African Americans, and 1.8 times higher than for Asians. And living in a neighborhood with high socioeconomic status was associated with 1.4-fold higher odds of receiving DBS. Out-of-pocket costs for Medicare patients receiving DBS are over $2,000 per year, which is quite a lot of money for low-income senior citizens.

Aaron Saenz raised a similar issue regarding the cost of the DEKA prosthetic arm (aka "Luke"):

But if you're not a veteran, neither DARPA project may really help you much. The Luke Arm is slated to cost $100,000+.... That's well beyond the means of most amputees if they do not have the insurance coverage provided by the Veteran's Administration. ... As most amputees are not veterans, I think that the Luke Arm has a good chance of being priced out of a large market share.

The availability of qualified neurosurgeons, even in affluent areas, will be another problem once future indications are FDA-approved (or even trialed). The situation in one Canadian province (British Columbia, with a population of 4.6 million) is instructive. An article in the Vancouver Sun noted that in March 2013, only one neurosurgeon was qualified to perform DBS surgeries for Parkinson's disease (or for dystonia). This resulted in a three year waiting list. Imagine, all these eligible patients with Parkinson's have to endure their current condition (and worse) for years longer, instead of having a vastly improved quality of life.

Funding, doctors needed if brain stimulation surgery to expand in B.C.:

... “But here’s the problem: We already have a waiting list of almost three years, from the time family doctors first put in the referral to the DBS clinic. And I’m the only one in B.C. doing this. So we really aren’t able to do more than 40 cases a year,” [Dr. Christopher Honey] said.

. . .

...The health authority allocates funding of $1.1 million annually, which includes the cost of the $20,000 devices, and $14,000 for each battery replacement. On average, batteries need to be replaced every three years.

. . .

To reduce wait times, the budget would have to increase and a Honey clone would have to be trained and hired.

Back in the U.S., Rossi et al.
(2014) called out Medicare for curbing medical progress:

Devices for DBS have been approved by the FDA for use in treating Parkinson disease, essential tremor, obsessive-compulsive disorder, and dystonia,2 but expanding DBS use to include new indications has proven difficult—specifically because of the high cost of DBS devices and generally because of disincentives for device manufacturers to sponsor studies when disease populations are small and the potential for a return on investment is not clear. In many of these cases, Medicare coverage will determine whether a study will proceed. ... Ultimately, uncertain Medicare coverage coupled with the lack of economic incentives for industry sponsorship could limit investigators’ freedom of inquiry and ability to conduct clinical trials for new uses of DBS therapy.

But the question remains, where is all this health care money supposed to come from?

The device manufacturers aren't off the hook, either, but BRAIN is trying to reel them in. NIH recently sponsored a two-day workshop, BRAIN Initiative Program for Industry Partnerships to Facilitate Early Access Neuromodulation and Recording Devices for Human Clinical Studies [agenda PDF]. The purpose was to:

  • Bring together stakeholders and interested parties to disseminate information on opportunities for research using latest-generation devices for CNS neuromodulation and interfacing with the brain in humans.
  • Describe the proposed NIH framework for facilitating and lowering the cost of new studies using these devices.
  • Discuss regulatory and intellectual property considerations.
  • Solicit recommendations for data coordination and access.

The Program Goals [PDF]:

...we hope to spur human research bridging the “valley of death” that has been a barrier to translating pre-clinical research into therapeutic outcomes. We expect the new framework will allow academic researchers to test innovative ideas for new therapies, or to address scientific unknowns regarding mechanisms of disease or device action, which will facilitate the creation of solid business cases by industry and venture capital for the larger clinical trials required to take these ideas to market.

To advance these goals, NIH is pursuing general agreements (Memoranda of Understanding, MOUs) with device manufacturers to set up a framework for this funding program. In the MOUs, we expect each company to specify the capabilities of their devices, along with information, support and any other concessions they are willing to provide to researchers.

In other words, it's a public/private partnership to advance the goal of having all depressed Americans implanted with the CyberNeuroTron WritBit device by 2035 (just kidding!!).

But seriously... before touting the impending clinical relevance of a study in rodents, basic scientists and bureaucrats alike should listen to patients with the current generation of DBS devices. Participants in the halted BROADEN Trial for refractory depression reported outcomes ranging from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

What do you do with a medical device that causes ... Read more »
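A back-of-envelope calculation using the B.C. figures quoted above shows why the program saturates so quickly. This is a rough illustration under the stated assumptions only (it ignores operating room, surgeon, and clinic staffing costs entirely), not an actual costing model.

```python
# Back-of-envelope arithmetic from the quoted B.C. figures: $1.1M annual budget,
# $20,000 per device, $14,000 per battery replacement (every ~3 years), ~40 new
# cases per year. Illustration only; surgical and staffing costs are ignored.
ANNUAL_BUDGET = 1_100_000
DEVICE_COST = 20_000
BATTERY_COST = 14_000
BATTERY_LIFE_YRS = 3
NEW_CASES_PER_YEAR = 40

new_device_spend = NEW_CASES_PER_YEAR * DEVICE_COST              # $800,000
remaining_for_batteries = ANNUAL_BUDGET - new_device_spend        # $300,000
replacements_per_year = remaining_for_batteries // BATTERY_COST   # ~21 replacements

# Each implanted patient needs ~1/3 of a replacement per year, so the leftover
# budget only services a pool of roughly this many existing patients:
supported_patient_pool = replacements_per_year * BATTERY_LIFE_YRS  # ~63 patients

print(f"New devices: ${new_device_spend:,}")
print(f"Battery replacements affordable per year: {replacements_per_year}")
print(f"Implanted patients the replacement budget can sustain: ~{supported_patient_pool}")
```

Since the clinic adds roughly 40 new patients a year, the replacement burden quickly outgrows what the fixed budget can cover, which is consistent with Dr. Honey's point that wait times can't shrink unless the budget grows.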

  • June 21, 2015
  • 06:28 AM
  • 1,010 views

The Future of Depression Treatment

by The Neurocritic in The Neurocritic

2014

Jessica is depressed again. After six straight weeks of overtime, her boss blandly praised her teamwork at the product launch party. And the following week she was passed over for a promotion in favor of Jason, her junior co-worker. "It's always that way, I'll never get ahead..." She arrives at her therapist's office late, looking stressed, disheveled, and dejected. The same old feelings of worthlessness and despair prompted her to resume her medication and CBT routine.

"You deserve to be recognized for your work," said Dr. Harrison. "The things you're telling yourself right now are cognitive distortions: the black and white thinking, the overgeneralization, the self-blame, jumping to conclusions..."

"I guess so," muttered Jessica, looking down.

"And you need a vacation!"

. . .

A brilliant suggestion, Dr. Harrison. As we all know, taking time off to relax and recharge after a stressful time will do wonders for our mental health. And building up a reserve of happy memories to draw upon during darker times is a cornerstone of positive psychology.

Jessica and her husband Michael take a week-long vacation in Hawaii, creating new episodic memories that involve snorkeling, parasailing, luaus, and mai tais on the beach. Jessica ultimately decides to quit her job and sell jewelry on Etsy.

2015

Michael is depressed after losing his job. His self-esteem has plummeted, and he feels useless. But he's too proud to ask for help. "Depression is something that happens to other people (like my wife), but not to me." He grows increasingly angry and starts drinking too much.

Jessica finally convinces him to see Dr. Harrison's colleague. Dr. Roberts is a psychiatrist with a Ph.D. in neuroscience. She's adopted a translational approach and tries to incorporate the latest preclinical research into her practice. She's intrigued by the latest finding from Tonegawa's lab, which suggests that the reactivation of a happy memory is more effective in alleviating depression than experiencing a similar event in the present.

Recalling happier memories can reverse depression, said the MIT press release. So instead of telling Michael to take time off and travel and practice mindfulness and live in the present, she tells him to recall his fondest memory from last year's vacation in Hawaii. It doesn't work.

Michael goes to see Dr. Harrison, who prescribes bupropion and venlafaxine. Four weeks later, he feels much better, and starts a popular website that repudiates positive psychology. Seligman and Zimbardo are secretly chagrined.

. . .

Happy Hippocampus

photo credit: S. Ramirez

Artificially reactivating positive [sexual] memories [in male mice] could offer an alternative to traditional antidepressants (read: makes them struggle more when you hold them by the tail after 10 days of confinement).1

Not as upbeat as the press release, eh?

The findings ... offer a possible explanation for the success of psychotherapies in which depression patients are encouraged to recall pleasant experiences. They also suggest new ways to treat depression by manipulating the brain cells where memories are stored...

“Once you identify specific sites in the memory circuit which are not functioning well, or whose boosting will bring a beneficial consequence, there is a possibility of inventing new medical technology where the improvement will be targeted to the specific part of the circuit, rather than administering a drug and letting that drug function everywhere in the brain,” says Susumu Tonegawa, ...
senior author of the paper.

Although this type of intervention is not yet possible in humans, “This type of analysis gives information as to where to target specific disorders,” Tonegawa adds.

Before considering what the mice might actually experience when their happy memory cells are activated with light, let's all marvel at what was accomplished here.

Ramirez et al. (2015) studied mice that were genetically engineered to allow blue light to activate a specific set of granule cells in the dentate gyrus subfield of the hippocampus. These neurons are critical for the formation of new memories and are considered “engram cells” that undergo physical changes and store discrete memories (Liu et al., 2015). When a cue reactivates the same set of neurons, the episodic memory is retrieved. In this study, the engram cells were part of a larger circuit that included the amygdala and the nucleus accumbens, regions important for processing emotion, motivation, and reward.

Ramirez, Liu, Tonegawa and colleagues have repeatedly demonstrated their masterful manipulation of mouse memories: activating fear memories, implanting false memories, and changing the valence of memories. These experiments are technically challenging and far outside my areas of expertise (greater detail in the Appendix below). In brief, the authors were able to label discrete sets of dentate gyrus cells while they were naturally activated during an interval of positive, neutral, or negative treatment. Then some groups of animals were stressed for 10 days, and others remained in their home cages.

... Read more »

Liu, X., Ramirez, S., Redondo, R., & Tonegawa, S. (2014) Identification and Manipulation of Memory Engram Cells. Cold Spring Harbor Symposia on Quantitative Biology, 59-65. DOI: 10.1101/sqb.2014.79.024901  

Ramirez, S., Liu, X., MacDonald, C., Moffa, A., Zhou, J., Redondo, R., & Tonegawa, S. (2015) Activating positive memory engrams suppresses depression-like behaviour. Nature, 522(7556), 335-339. DOI: 10.1038/nature14514  

Timmins, L., & Lombard, M. (2005) When “Real” Seems Mediated: Inverse Presence. Presence: Teleoperators and Virtual Environments, 14(4), 492-500. DOI: 10.1162/105474605774785307  

  • June 7, 2015
  • 01:12 PM
  • 681 views

Use of Anti-Inflammatories Associated with Threefold Increase in Homicides

by The Neurocritic in The Neurocritic

Scene from Elephant, a fictional film by Gus Van Sant

Regular use of over-the-counter pain relievers like aspirin, ibuprofen, naproxen, and acetaminophen was associated with three times the risk of committing a homicide in a new Finnish study (Tiihonen et al., 2015). The association between NSAID use and murderous acts was far greater than the risk posed by antidepressants.

Clearly, drug companies are pushing dangerous, toxic chemicals and we should ban the substances that are causing school massacres — Advil and Aleve and Tylenol are evil!!

Wait..... what?

Tiihonen and colleagues wanted to test the hypothesis that antidepressant treatment is associated with an increased risk of committing a homicide. Because, you know, the Scientology-backed Citizens Commission on Human Rights of Colorado thinks so (and their blog is cited in the paper!!):

After a high-profile homicide case, there is often discussion in the media on whether or not the killing was caused or facilitated by a psychotropic medication. Antidepressants have especially been blamed by non-scientific organizations for a large number of senseless acts of violence, e.g., 13 school shootings in the last decade in the U.S. and Finland [1].

The authors reviewed a database of all homicides investigated by the police in Finland between 2003 and 2011. A total of 959 offenders were included in the analysis. Each offender was matched to 10 controls selected from the Population Information System. Then the authors checked purchases in the Finnish Prescription Register. A participant was considered a "user" if they had a current purchase in the system.1

The main drug classes examined were antidepressants, benzodiazepines, and antipsychotics. The primary outcome measure was risk of offending for current use vs. no use of those drugs (with significance set to p<0.016 to correct for multiple comparisons). Seven other drug classes were examined as secondary outcome measures (with α adjusted to .005): opioid analgesics, non-opioid analgesics (e.g., NSAIDs), antiepileptics, lithium, stimulants, meds for addictive disorders, and non-benzo anxiolytics.

Lo and behold, current use of antidepressants in the adult offender population was associated with a 31% greater risk of committing a homicide, but this did not reach significance (p=0.022). On the other hand, benzodiazepine use was associated with a 45% greater risk (p<.001), while antipsychotics were not associated with greater risk of offending (p=0.54).

Most dangerous of all were pain relievers. Current use of opioid analgesics (like Oxycontin and Vicodin) was associated with 92% greater risk. Non-opioid analgesics were even worse: individuals taking these meds were at 206% greater risk of offending — that's a threefold increase. 2 Taken in the context of this surprising result, the anti-psych-med faction doth complain too much about antidepressants.

Furthermore, analysis of young offenders (25 yrs or less) revealed that none of the medications were associated with greater risk of committing a homicide (benzos and opioids were p=.07 and .04 respectively). To repeat: In Finland at least, there was no association between antidepressant use and the risk of becoming a school shooter.

What are we to make of the provocative NSAIDs?
More study is needed:

The surprisingly high risk associated with opioid and non-opioid analgesics deserves further attention in the treatment of pain among individuals with criminal history.

Drug-related murders in oxycodone abusers don't come as a great surprise, but aspirin-related violence is hard to explain...3

Footnotes

1 Having a purchase doesn't mean the individual was actually taking the drug before/during the time of the offense, however.

2 RR = 3.06; 95% CI: 1.78-5.24, p<0.001 for Advil, Tylenol, and the like. And the population-adjusted odds ratios (OR) weren't substantially different, although this wasn't reported for NSAIDs:

The analysis based on case-control design showed an adjusted OR of 1.30 (95% CI: 0.97-1.75) as the risk of homicide for the current use of an antidepressant, 2.52 (95% CI: 1.90-3.35) for benzodiazepines, 0.62 (95% CI: 0.41-0.93) for antipsychotics, and 2.16 (95% CI: 1.41-3.30) for opioid analgesics.

3 P.S. Just to be clear here, correlation ≠ causation. Disregarding the anomalous nature of the finding in the first place, it could be that murderers have more headaches and muscle pain, so they take more anti-inflammatories (rather than ibuprofen "causing" violence). But if the anti-med faction uses these results to argue that "antidepressants cause school shootings" then explain how ibuprofen raises the risk threefold...

Reference

Tiihonen, J., Lehti, M., Aaltonen, M., Kivivuori, J., Kautiainen, H., J. Virta, L., Hoti, F., Tanskanen, A., & Korhonen, P. (2015). Psychotropic drugs and homicide: A prospective cohort study from Finland. World Psychiatry, 14 (2), 245-247. DOI: 10.1002/wps.20220
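For readers unfamiliar with the RR and CI figures in footnote 2, the published estimates come from the authors' own statistical models and the raw counts aren't given in the post, so the numbers below are invented simply to show how a crude risk ratio and its 95% confidence interval are computed from a 2x2 table.

```python
# Crude risk ratio with a log-normal 95% CI, computed from a 2x2 table of
# exposed vs. unexposed counts. The counts are hypothetical; the paper's RR of
# 3.06 (1.78-5.24) comes from its own adjusted models.
from math import exp, log, sqrt

def risk_ratio_ci(exposed_cases, exposed_total, unexposed_cases, unexposed_total, z=1.96):
    """Return (RR, lower, upper) using the standard log(RR) standard error."""
    p1 = exposed_cases / exposed_total
    p0 = unexposed_cases / unexposed_total
    rr = p1 / p0
    se_log_rr = sqrt(1 / exposed_cases - 1 / exposed_total +
                     1 / unexposed_cases - 1 / unexposed_total)
    lo, hi = exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 30 of 200 NSAID users vs. 50 of 1,000 non-users were offenders.
rr, lo, hi = risk_ratio_ci(30, 200, 50, 1000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # RR = 3.00, roughly 1.96-4.60
```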

... Read more »

Tiihonen, J., Lehti, M., Aaltonen, M., Kivivuori, J., Kautiainen, H., J. Virta, L., Hoti, F., Tanskanen, A., & Korhonen, P. (2015) Psychotropic drugs and homicide: A prospective cohort study from Finland. World Psychiatry, 14(2), 245-247. DOI: 10.1002/wps.20220  

  • May 31, 2015
  • 09:33 PM
  • 844 views

Capgras for Cats and Canaries

by The Neurocritic in The Neurocritic

Capgras syndrome is the delusion that a familiar person has been replaced by a nearly identical duplicate. The imposter is usually a loved one or a person otherwise close to the patient. Originally thought to be a manifestation of schizophrenia and other psychotic illnesses, the syndrome is most often seen in individuals with dementia (Josephs, 2007). It can also result from acquired damage to a secondary (dorsal) face recognition system important for connecting the received images with an affective tone (Ellis & Young, 1990).1 Because of this, the delusion crosses the border between psychiatry and neurology.

The porous etiology of Capgras syndrome raises the question of how phenomenologically similar delusional belief systems can be constructed from such different underlying neural malfunctions. This is not a problem for Freudian types, who promote psychodynamic explanations (e.g., psychic conflict, regression, etc.). For example, Koritar and Steiner (1988) maintain that “Capgras' Syndrome represents a nonspecific symptom of regression to an early developmental stage characterized by archaic modes of thought, resulting from a relative activation of primitive brain centres.”

The psychodynamic view was nicely dismissed by de Pauw (1994), who states:

While often ill-founded and convoluted, these formulations have, until recently, dominated many theoretical approaches to the phenomenon. Generally post hoc and teleological in nature, they postulate motives that are not introspectable and defence mechanisms that cannot be observed, measured or refuted. While psychosocial factors can and often do play a part in the development, content and course of the Capgras delusion in individual patients it remains to be proven that such factors are necessary and sufficient to account for delusional misidentification in general and the Capgras delusion in particular.

Canary Capgras

Although psychodynamic explanations were sometimes applied 2 to cases of Capgras syndrome for animals,3 other clinicians report that the delusional misidentification of pets can be ameliorated by pharmacological treatment of the underlying psychotic disorder. Rösler et al. (2001) presented the case of “a socially isolated woman who felt her canary was replaced by a duplicate”:

Mrs. G., a 67-year-old woman, was admitted for the first time to a psychiatric hospital for late paraphrenia. ... She had been a widow for 11 years, had no children, and lived on her own with very few social contacts. Furthermore, she suffered from concerns that her canary was alone at home. She was delighted with the suggestion that the bird be transferred to the ward. However, during the first two days she repeatedly asserted that the canary in the cage was not her canary and reported that the bird looked exactly like her canary, but was in fact a duplicate. There were otherwise no misidentifications of persons or objects.

Earlier, Somerfield (1999) had reported a case of parrot Capgras, also in an elderly woman with a late-onset delusional disorder:

I would like to report an unusual case of a 91-year-old woman with a 10-year history of late paraphrenia (LP) and episodes of Capgras syndrome involving her parrot. She was a widow of 22 years, nulliparous, with profound deafness and a fiercely independent character. The psychotic symptoms were usually well controlled by haloperidol 0.5 mg orally.
However, she was periodically non-compliant with medication, resulting in deterioration of her mental state, refusal of food and her barricading herself in her room to stop her parrot being stolen. At times she accused others of “swapping” the parrot and said the bird was an identical imposter. There was no misidentification of people or objects. Her symptoms would attenuate rapidly with reinstatement of haloperidol.

Both of these patients believed their beloved pet birds had been replaced by impostors, but neither of them misidentified any human beings. Clearly, this form of Capgras syndrome is different from what can happen after acquired damage to the affective face identification system (Ellis & Young, 1990). Is there an isolated case of sudden onset Capgras for animals that does not encompass person identification as well? I couldn't find one.

A Common Explanation?

Despite these differences, Ellis and Lewis (2001) suggested that “It seems parsimonious to seek a common explanation for the delusion, regardless of its aetiology.” I'm not so sure. If that's true, then haloperidol should effectively treat all instances of Capgras syndrome, including those that arise after a stroke. And there's evidence suggesting that antipsychotics would be ineffective in such patients.

Are there systematic differences in the symptoms shown by Capgras patients with varying etiologies? Josephs (2007) reviewed 47 patient records and found no major differences between the delusions in patients with neurodegenerative vs. non-neurodegenerative disorders. In all 47 cases, the delusion involved a spouse, child, or other relative. [There were no cases involving animals or objects.]

The factors that differed were age of onset (older in dementia patients) and other reported symptoms (e.g., visual hallucinations 4 in all patients with Lewy body dementia, LBD). In this series, 81% of patients had a neurodegenerative disease, and only 4% had schizophrenia [perhaps the Capgras delusion was under-reported in the context of wide-ranging delusions?]. Other cases were due to methamphetamine abuse (4%) or sudden onset brain injury, e.g. hemorrhage (11%).

Interestingly, Josephs puts forth dopamine dysfunction as a unifying theme, in line with Ellis and Lewis's general suggestion of a common explanation. The pathology in dementia with Lewy bodies includes degeneration of neurons containing dopamine and acetylcholine. The cognitive/behavioral symptoms of LBD overlap with those seen in Parkinson's dementia, which also involves degeneration of dopaminergic neurons. But dopamine-blocking antipsychotics like haloperidol should not be used in treating LBD. So from a circuit perspective, using “dopamine dysregulation” as a parsimonious explanation isn't really an explanation. And this conception doesn't fit with the neuropsychological model (shown at the bottom of the page).

I'm not a fan of parsimony in matters of brain function and dysfunction. We don't know why one person thinks her canary has been replaced by an impostor, another thinks her husband has been replaced by a woman, while a third is convinced there are six copies of his wife floating around.5 I don't expect there to be a unifying explanation. The ... Read more »

Ellis, H., & Young, A. (1990) Accounting for delusional misidentifications. The British Journal of Psychiatry, 157(2), 239-248. DOI: 10.1192/bjp.157.2.239  

Rösler, A., Holder, G., & Seifritz, E. (2001) Canary Capgras. The Journal of Neuropsychiatry and Clinical Neurosciences, 13(3), 429-429. DOI: 10.1176/jnp.13.3.429  

  • May 16, 2015
  • 02:00 PM
  • 714 views

Shooting the Phantom Head (perceptual delusional bicephaly)

by The Neurocritic in The Neurocritic

I have two heads
Where's the man, he's late
--Throwing Muses, Devil's Roof

Medical journals are enlivened by case reports of bizarre and unusual syndromes.

Although somatic delusions are relatively common in schizophrenia, reports of hallucinations and delusions of bicephaly are rare. For a patient to attempt to remove a perceived second head by shooting and to survive the experience for more than two years may well be unique, and merits presentation.
--David Ames, British Journal of Psychiatry (1984)

In 1984, Dr. David Ames of Royal Melbourne Hospital published a truly bizarre case report about a 39 year old man hospitalized with a self-inflicted gunshot wound through the left frontal lobe (Ames, 1984). The man was driven to this desperate act by the delusion of having a second head on his shoulder. The interloping head belonged to his wife's gynecologist.

In an even more macabre twist, his wife had died in a car accident two years earlier..... and the poor man had been driving at the time!

Surprisingly, the man survived a bullet through his skull (in true Phineas Gage fashion). After waking from surgery to remove the bullet fragments, the patient was interviewed:

He described a second head on his shoulder. He believed that the head belonged to his wife's gynaecologist, and described previously having felt that his wife was having an affair with this gynaecologist, prior to her death. He described being able to see the second head when he went to bed at night, and stated that it had been trying to dominate his normal head. He also stated that he was hearing voices, including the voice of his wife's gynaecologist from the second head, as well as the voices of Jesus and Abraham around him, conversing with each other. All the voices were confirming that he had two heads...

I'm two headed
one free one sticky
--Throwing Muses, Devil's Roof

“The other head kept trying to dominate my normal head, and I would not let it. It kept trying to say to me I would lose, and I said bull-shit ... and decided to shoot my other head off.”

A gun was not his first choice, however... he originally wanted to use an ax.

He stated that he fired six shots, the first at the second head, which he then decided was hanging by a thread, and then another one through the roof of his mouth. He then fired four more shots, one of which appeared to have gone through the roof of his mouth and three of which missed. He said that he felt good at that stage, and that the other head was not felt any more. Then he passed out. Prior to shooting himself, he had considered using an axe to remove the phantom head.

Not surprisingly, the patient was diagnosed with schizophrenia and given antipsychotics.

He was seen regularly in psychiatric out-patients following this operation and by March, stated that the second head was dead, that he was taking his chlorpromazine regularly, and that he had no worries.

[This was Australia, after all.]

Unfortunately, the man died two years later from a Streptococcus pneumoniae infection in his brain. Ames (1984) concluded his lively and bizarre case report by naming the singular syndrome “perceptual delusional bicephaly”:

This case illustrates an interesting phenomenon of perceptual delusional bicephaly; the delusion caused the patient to attempt to remove the second head by shooting.
It is notable that following his head injury and treatment with chlorpromazine, the initial symptoms resolved, although he was left with the problems of social disinhibition and poor volition, typical of patients with frontal lobe injuries.

As far as I know, this specific delusion has not yet been depicted in a horror film (or in an episode of Perception or Black Box).

Reference

Ames, D. (1984). Self shooting of a phantom head. The British Journal of Psychiatry, 145(2), 193-194. DOI: 10.1192/bjp.145.2.193

... Read more »

Ames, D. (1984) Self shooting of a phantom head. The British Journal of Psychiatry, 145(2), 193-194. DOI: 10.1192/bjp.145.2.193  
