The Neurocritic

330 posts · 420,609 views

Born in West Virginia in 1980, The Neurocritic embarked upon a roadtrip across America at the age of thirteen with his mother. She abandoned him when they reached San Francisco and The Neurocritic descended into a spiral of drug abuse and prostitution. At fifteen, The Neurocritic's psychiatrist encouraged him to start writing as a form of therapy.



  • April 14, 2016
  • 07:02 AM
  • 734 views

Don't Lose Your Head Over tDCS

by The Neurocritic in The Neurocritic

Recent studies of transcranial electrical stimulation in human cadaver heads showed a 90% loss of current when delivered through the skin (Buzsáki, 2016 CNS meeting).

Siren Song
By Margaret Atwood

This is the one song everyone
would like to learn: the song
that is irresistible:

the song that forces men
to leap overboard in squadrons
even though they see the beached skulls

the song nobody knows
because anyone who has heard it
is dead, and the others can't remember.

Better living through electricity. The lure of superior performance, improved memory, and higher IQ without all the hard work. Or at least, in a much shorter amount of time.

Transcranial direct current stimulation (tDCS), hailed as a “non-invasive”1 way to alter brain activity,2 has been hot for years now. In fact, peak tDCS is already behind us, with a glut of DIY brain stimulation articles in places like Fortune, CBC, Life Hacker, New Statesman, Wall Street Journal, Wired, Slate, Medical Daily, Mosaic, The Economist, Nature, IEEE Spectrum, and The Daily Dot.

Simply apply a weak electrical current to your head via a pair of saline-soaked sponges connected to a 9 volt battery. Current flows between the positive anode, or stimulating electrode (in blue below), and the negative cathode (in red below). Low levels of electrical stimulation travel through the scalp and skull to a region of cortex underneath. Modeling studies suggest that the electric field generated by tDCS in humans is about 1 mV/mm (Neuling et al., 2012). The method doesn't directly induce spiking (the firing of action potentials), but it's thought to alter neuronal excitability. By facilitating neuroplastic changes during cognitive training, tDCS may improve learning, memory, mental arithmetic, and target detection.

Modified from Fig. 1b (Dayan et al., 2013). Bipolar tDCS electrode configuration, with one electrode over left dorsolateral prefrontal cortex and a reference electrode over the contralateral supraorbital region.

And there you have it.
High tech performance enhancement for less than $40. Or a siren song for wannabe brain hackers?

In Symposium Session 7 of the Cognitive Neuroscience Society meeting last week, Dr. György Buzsáki threw a bit of cold water on non-invasive transcranial electrical stimulation (TES) methods, which include tDCS and transcranial alternating current stimulation (tACS).

My understanding of his remarks: Studies of TES in human cadaver heads showed there's a 90% loss of current when delivered through the skin (which is obviously the case in living humans) vs. through the skull. This implies that a current of at least 5 mA on the scalp would be necessary to generate a 1 mV/mm electric field in the human brain. Based on his personal experience, Dr. Buzsáki reported that 4 mA was hard to tolerate even with anesthetized skin. For comparison, 2 mA is the maximum current recommended by an international panel of experts.

Others in the audience had similar interpretations:

“90% loss of electrical current between skin and scalp -György Buzsáki #CNS2016” — CNS News (@CogNeuroNews) April 5, 2016

“90% charge loss skin-skull using TES. Need 5 mA to affect spiking, FDA approved 2mA, Buzsáki first hand 4mA hard to tolerate. #CNS2016” — Renee M. Symonds (@DatabaseDragons) April 5, 2016

This revelation was in the context of work on focused beam stimulation, which is designed to improve the spatial selectivity of non-invasive TES (Voroslakos et al., 2015):

We recorded TES-generated field potentials in human cadavers and anesthetized rats. Stimulation was applied by placing Ag/AgCl EEG electrodes over the external surface of the skull. ... We also measured the shunting effect of the skin during transcutaneous stimulation. In additi... Read more »
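The arithmetic behind the shunting argument can be sketched in a few lines. This is a toy back-of-envelope model, not anything from the talk: the 90% skin loss and the "5 mA at the scalp yields about 1 mV/mm" calibration come from the numbers quoted in the post, and the linear scaling is my simplifying assumption.

```python
def field_in_brain(scalp_current_ma, skin_loss=0.90, field_per_effective_ma=2.0):
    """Rough estimate of intracranial field (mV/mm) for a given scalp current.

    skin_loss: fraction of current shunted away by the skin (~90% per the
    cadaver work discussed above).
    field_per_effective_ma: mV/mm produced per mA that actually penetrates
    the skull; set to 2.0 so that 5 mA at the scalp gives ~1 mV/mm, matching
    the figures quoted in the post (an assumption for illustration only).
    """
    effective_ma = scalp_current_ma * (1.0 - skin_loss)
    return effective_ma * field_per_effective_ma

print(field_in_brain(5.0))  # ~1.0 mV/mm, the level cited as needed
print(field_in_brain(2.0))  # ~0.4 mV/mm at the 2 mA safety ceiling
```

On this toy model, the 2 mA maximum recommended for tDCS delivers well under half the field strength Buzsáki suggests is needed to matter.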

  • March 31, 2016
  • 04:46 AM
  • 622 views

Sleep Doctoring: Fatigue Amnesia in Physicians

by The Neurocritic in The Neurocritic

New in the journal Cortex: four shocking cases of practicing medicine while exhausted (Dharia & Zeman, 2016). The authors called this newly discovered syndrome “fatigue amnesia.” Why this is any different from countless other examples of not remembering things you did while exhausted — I do not know. Except amnesia for performing a complex medical procedure is a lot more disturbing than forgetting you did the dishes the night before.

Here are the cases in brief:

Case 1: A consultant geriatrician, while working as a house officer, treated a patient with chest pain and severe pulmonary oedema in the middle of the night. She made an entry in the notes, demonstrating successful initial memory acquisition. She does not remember going to bed that night. On the ward round the following morning the patient was pointed out to her, but she had no recollection of seeing the patient or writing the note.

Case 2: A senior house officer, now a consultant neurologist, went to bed in the early hours after a busy shift. She was woken soon afterwards to manage a patient with cardiac arrest. The resuscitation was complex and included an intracardiac adrenaline injection. She documented events in the medical notes immediately, demonstrating successful initial memory acquisition. She returned to bed. She was told on the morning ward round that the patient was well and had his breakfast following the cardiac arrest. She was startled by this information, as she had no recollection of the previous night's events.

Case 3: A consultant microbiologist who was working on a night shift as a house officer clerked in a patient at 11:00 pm and continued to work thereafter throughout the night. On the morning ward round when the patient was pointed out to her she had no recollection of seeing or managing him.

Case 4: A paediatrician reported memory loss for a complex decision made and instructions given over the phone. While working as a registrar, he went to bed in the early hours of the morning when on call. He was woken by a call about a complex patient. He went to the ward soon afterwards to find that the trolley was laid out for Swan-Ganz catheterisation. Although he was assured that he had done so, he did not remember giving instructions to prepare the trolley.

The incidents were not due to alcohol or drugs. Long hours and sleep deprivation were to blame. And fortunately, the amnesic episodes were isolated and did not recur in any of the doctors. Dharia & Zeman (2016) suggested that:

While the resulting memory gaps can reasonably be described as resulting from a ‘transient amnesic state', the evidence from the medical notes suggests that this phenomenon reflects a novel form of accelerated long-term forgetting (Elliott, Isaac, & Muhlert, 2014), whereby a memory for events is acquired normally but then decays more rapidly than usual.

By tomorrow, I will have forgotten that I wrote this...

Reference
Dharia, S., & Zeman, A. (2016). Fatigue amnesia. Cortex. DOI: 10.1016/j.cortex.2016.03.001

... Read more »

Dharia, S., & Zeman, A. (2016). Fatigue amnesia. Cortex. DOI: 10.1016/j.cortex.2016.03.001

  • March 27, 2016
  • 09:03 PM
  • 742 views

Everybody Loves Dopamine

by The Neurocritic in The Neurocritic

Dopamine is love. Dopamine is reward. Dopamine is addiction.

Neuroscientists have a love/hate relationship with how this monoamine neurotransmitter is portrayed in the popular press.

wwlp.com

thestranger.com
[The claim of vagus nerve-stimulating headphones is worth a post in its own right.]

observer.com

“You can fold your laundry, but you can’t fold your dopamine.”
- James Cole Abrams, M.A. (in Contemplative Psychotherapy)

The word dopamine has become a shorthand for positive reinforcement, whether it's from fantasy baseball or a TV show. But did you know that a subset of dopamine (DA) neurons originating in the ventral tegmental area (VTA) of the midbrain respond to obnoxious stimuli (like footshocks) and regulate aversive learning?

Sometimes the press coverage of a snappy dopamine paper can be positive and (mostly) accurate, as was the case with a recent paper on risk aversion in rats (Zalocusky et al., 2016). This study showed that rats who like to “gamble” on getting a larger sucrose reward have a weaker neural response after “losing.” In this case, losing means choosing the risky lever, which dispenses a low amount of sucrose 75% of the time (but a high amount 25% of the time), and getting a tiny reward. The gambling rats will continue to choose the risky lever after losing. Other rats are risk-averse, and will choose the “safe” lever with a constant reward after losing.

This paper was a technical tour de force with 14 multi-panel figures.1 For starters, cells in the nucleus accumbens (a VTA target) expressing the D2 receptor (NAc D2R+ cells) were modified to express a calcium indicator that allowed the imaging of neural activity (via fiber photometry). Activity in NAc D2R+ cells was greater after loss, and during the decision phase of post-loss trials. And these two types of signals were dissociable.2 Then optogenetic methods were used to activate NAc D2R+ cells on post-loss trials in the risky rats. This manipulation caused them to choose the safer option.

- click to enlarge -

Noted science writer Ed Yong wrote an excellent piece about these findings in The Atlantic (Scientists Can Now Watch the Brain Evaluate Risk).

Now there's a boatload of data on the role of dopamine in reinforcement learning and computational models of reward prediction error (Schultz et al., 1997), and discussion about potential weaknesses in the DA and RPE model. So while a very impressive addition to the growing pantheon of laser-controlled rodents, the results of Zalocusky et al. (2016) aren't massively surprising.

More surprising are two recent papers in the highly sought-after population of humans implanted with electrodes for seizure monitoring or treatment of Parkinson's disease. I'll leave you with quotes from these papers as food for thought.

1. Stenner et al. (2015). No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients.

Signals after outcome onset were correlated with RPE regressors in all subjects. However, further analysis revealed that these signals were better explained as outcome valence rather than RPE signals, with gamble gains and losses differing in the power of beta oscillations and in evoked response amplitudes. Taken together, our results do not support the idea that postsynaptic potentials in the Nacc represent a RPE that unifies outcome magnitude and prior value expectation. The n... Read more »
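The reward prediction error idea attributed to dopamine (Schultz et al., 1997) is easy to state concretely: the error is the outcome minus what was expected, and expectations are updated by a fraction of that error. Below is a minimal, illustrative Rescorla-Wagner-style sketch; the learning rate and reward values are made up for illustration and are not fit to any of the papers discussed.

```python
def rpe_updates(rewards, alpha=0.5):
    """Track the learned value of a cue across trials; return per-trial RPEs.

    alpha is the learning rate (an arbitrary choice here). On each trial the
    prediction error is the received reward minus the current expectation,
    and the expectation moves a fraction alpha toward the outcome.
    """
    value = 0.0
    deltas = []
    for r in rewards:
        delta = r - value      # prediction error: outcome minus expectation
        value += alpha * delta # value learning
        deltas.append(delta)
    return deltas

# A reliably rewarded cue: the RPE shrinks as the reward becomes predicted,
# mirroring the classic finding that dopamine responses transfer away from
# fully expected rewards.
print(rpe_updates([1, 1, 1, 1]))  # → [1.0, 0.5, 0.25, 0.125]
```

The Stenner et al. result quoted below is interesting precisely because it asks whether human accumbens signals track this signed error term or merely outcome valence (win vs. loss).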

  • March 20, 2016
  • 08:06 AM
  • 858 views

A Detached Sense of Self Associated with Altered Neural Responses to Mirror Touch

by The Neurocritic in The Neurocritic

Our bodily sense of self contributes to our personal feelings of awareness as a conscious being. How we see our bodies and move through space and feel touched by loved ones are integral parts of our identity. What happens when this sense of self breaks down?

One form of dissolution is Depersonalization Disorder (DPD).1 Individuals with DPD feel estranged or disconnected from themselves, as if their bodies belong to someone else, and “they” are merely a detached observer. Or the self feels absent entirely. Other symptoms of depersonalization include emotional blunting, out-of-body experiences, and autoscopy.

Autoscopy for dummies - Antonin De Bemels (cc licence)

Transient symptoms of depersonalization can occur due to stress, anxiety, sleep deprivation, or drugs such as ketamine (a dissociative anesthetic) and hallucinogens (e.g., LSD, psilocybin). These experiences are much more common than the official diagnosis of DPD, which occurs in only 1-2% of the population.

Research by Olaf Blanke and colleagues (reviewed in Blanke et al., 2015) has tied bodily self-consciousness to the integration of multi-sensory signals in fronto-parietal and temporo-parietal regions of the brain.

from Neuron 88(1):145-66.

The fragmentation or loss of an embodied self raises philosophically profound questions. Although the idea of “mind uploading” is preposterous in my view (whether via whole brain emulation or cryonics), proponents must seriously ask whether the uploaded consciousness will in any way resemble the living person from whom it arose.2 “Minds are not disembodied logical reasoning devices” (according to Andy Clark). And...

Increasing evidence suggests that the basic foundations of the self lie in the brain systems that represent the body (Lenggenhager et al., 2012).

Lenggenhager et al. asked whether the loss of sensorimotor function alters body ownership and the sense of self. Persons with spinal cord injuries scored higher on Cambridge Depersonalization Scale (CDS) items such as “I have to touch myself to make sure that I have a body or a real existence.” This suggests that disconnecting the brain from somatosensory input can change phenomenological aspects of self-consciousness.

Source: misswhite.blogcu.com

The Stranger in the Mirror

Patients with depersonalization not only feel a change in perception concerning the outside world, but they also have clear-cut changes concerning their own body. ... The patient sees his face in the mirror changed, rigid and distorted. His own voice seems strange and unfamiliar to him. ... It is in this respect especially remarkable that the estrangement concerning the outside world is often an estrangement in the optic sphere (Schilder, 1935, p. 139).

Depersonalization can involve perceptual distortions of bodily experience in different sensory modalities (e.g., vision, hearing, touch, and pain). Recent research has examined interactions between visual and somatosensory representations of self in the tactile mirroring paradigm (also called visual remapping of touch). Here, the participant views images of a person being touched (or not) while they themselves are touched. Tactile perception is enhanced by simultaneously receiving and observing the same stimulation, especially when the image is of oneself.

Are the symptoms of depersonalization associated with reduced or absent responses in the tactile mirroring paradigm? If so, at what stage of processing (early or late) does this occur? A new study recorded EEG to look at somatosensory evoked potential (SEP) responses to tactile stimuli during mirroring (Adler et al., 2016). The participants scored high (n=14) or low (n=13) on the CDS.

One SEP of interest was the P45, which occurs shortly (25-50 msec) after tactile stimulation.
Although the spatial resolution of EEG does not allow firm conclusions about the neural generators, we know from invasive studies in epilepsy patients and animals that P45 originates in the primary somatosensory cortex (S1). When the participants viewed the other-face, P45 did not differ on touch vs. no-touch trials. But the later N80 component was enhanced for touch vs. no-touch, and the enhancement was similar for low and high depersonalization (DP) participants.... Read more »

  • March 7, 2016
  • 06:26 AM
  • 793 views

Writing-Induced Fugue State

by The Neurocritic in The Neurocritic

Who is this, wandering around the crowded street, afraid of everything, trusting no one? “There must be something wrong, somewhere.” But maybe I’m safer since I look disheveled. Who are these people? Where is this place? Did I write that? When did that happen? I don’t remember. I can’t stop writing. I can’t stop walking, either, which is a problem because it’s hard to write and walk at the same time.

In the early 1940s, Austrian psychiatrist Dr. Erwin Stengel wrote a pair of papers on fugue states, a type of dissociative disorder involving loss of personal identity and aimless wandering (Stengel, 1941):

THE peculiar condition designated “fugue state,” of which the main symptom is compulsive wandering, has puzzled psychiatrists since it was first described. Nothing is known of the aetiology of this well-defined condition. Fugue states occur in epileptics, hysterics, and certain psychopaths. Bleuler has described their occurrence in schizophrenia, and they have been recorded in cases of general paralysis and of altered personality due to brain tumour. ... Kraepelin recognized that it was impossible to distinguish between the states of compulsive wandering associated with various mental disorders. Janet tried to distinguish between hysterical and epileptic fugues by pointing out that short fugues are more likely to be epileptic than hysterical.

He was disturbed by inaccurate use of the term, which was widespread (Stengel, 1943):

...the following conditions have been described as fugues: States of wandering, in accordance with the classical conception; states of double personality; all kinds of transitory abnormal behaviour of functional origin; hysterical loss of consciousness and of memory; twilight states; confusional states of hysterical nature; delirious states in schizophrenia. The tendency to call transient states of altered consciousness fugues, irrespective of the behaviour of the patient, is obvious. This is a most unsatisfactory state of affairs.

Stengel presented dozens of cases in these papers and was obsessed with finding common etiological factors, no matter what the underlying medical condition (e.g., epilepsy, “hysteria”, schizophrenia):

The intimate similarity of fugue states associated with different mental disorders suggests that there must be aetiological factors common to all. However, no attempt has been made hitherto to ascertain such factors. I have been engaged in investigations concerning this problem for more than eight years...

...and (Stengel, 1943):

Clinical studies carried out over many years have convinced me that there is no justification in differentiating between hysterical and epileptic wandering states, as the behaviour of the patients and the majority of the etiological factors are fundamentally the same in all fugues with the impulse to wander (Stengel, 1939, 1941).

Since Stengel was trained as a psychoanalyst and considered Freud a mentor, you might guess the common etiology:

This was a disturbance of the environment of child life. A serious disturbance in the child-parent relationship, usually of such a nature that the relationship to one or both parents was either completely lacking or only partially developed, had occurred in nearly every case.

Beyond the mommy/daddy issues, symptoms of severe depression (suicide attempts, failure to eat, lack of hygiene) and/or mania (elation, hypersexuality) were commonplace. Here's one especially tragic example:

CASE 9. — M. E —, female, born 1906. The patient was normal until her twenty-first year. At that time she suddenly became unstable and wanted to live apart from her mother, with whom she had been happy hitherto. She went to Paris, where she found employment as a secretary, but after some months she returned home again. When she was 22 she experienced for the first time an urge to wander, which reappeared subsequently two or three times every year. For no adequate reason, sometimes after an insignificant quarrel, she left home and wandered about for some days. During these states she was not fully conscious, slept little, and neglected herself. When normal consciousness returned, after three or four days, she found herself in the country far away from home. These states were followed by profound depression, lasting for several weeks, when the patient indulged in self-reproaches, ate very little, lost weight, and could not work. ... The patient was a typical daydreamer. In her daydreams a fantasy of a man disappointed in love committing suicide often appeared. (Her father had committed suicide.) ... The patient, who was of unusual intelligence, suffered very much from her abnormal states, which appeared at intervals of four to five months, and were always followed by melancholic depression. In one of these depressions she committed suicide by poisoning.

Period Fugue

Stengel (1941) asserted that the majority of his female patients started their wandering premenstrually, but his definition of what this means was kind of loose (and meaningless): “usually appear before menstruation”, “usually just before menstruation”, “usually commences shortly before her menstrual period”, “at the onset of menstruation”, “about the time of menstruation”.

He had no explanation for this, other than the implication that it's an unstable lady thing. One particularly fun case (Case 14) was a young woman with a previous bout of encephalitis lethargica. But it was determined that her menstrual period and an Oedipus complex drove her to wander, not her illness.

The report for Case 35 (Miss May S. M—, aged 18, member of the women's military service) was accompanied by a four-page excerpt from her diary, which is illuminating for what it tells us about bipolar disorder (but fugue, not so much):

“1940. 12.1: Had a drink, sang all the way home. —13.1: The matinee went off well. Feeling so horribly sad, a terribly empty feeling, felt like crying my heart out. Home is like the end of the world. —21.1: Tried to commit suicide. Instead wrote to G. telling him to give me some ideas how to get to America. Feeling just frightful, feel dead. —27.1: No feelings at all. —30.1: Have a mad desire to go really common, lipstick, scarlet nails and with as little clothes as possible.”

Modern conceptions of fugue states (including dissociative amnesia) focus on trauma, memory systems, and underlying neurobiological causes, instead of dysfunctional child-parent relationships (MacDonald & MacDonald, 2009).

Who is this? How did I end up here? You mean there’s a world outside... Read more »

  • January 22, 2016
  • 08:30 AM
  • 781 views

This Neuroimaging Method Has 100% Diagnostic Accuracy (or your money back)

by The Neurocritic in The Neurocritic

doi:10.1371/journal.pone.0129659.g003

Did you know that SPECT imaging can diagnose PTSD with 100% accuracy (Amen et al., 2015)? Not only that, out of a sample of 397 patients from the Amen Clinic in Newport Beach, SPECT was able to distinguish between four different groups with 100% accuracy! That's right, the scans of (1) healthy participants, and patients with (2) classic post-traumatic stress disorder (PTSD), (3) classic traumatic brain injury (TBI), and (4) both disorders...

...were all classified with 100% accuracy!

TRACK-TBI investigators, your 3T structural and functional MRI outcome measures are obsolete. NIMH, the hard work of developing biomarkers for mental illness is done, you can shut down now. Except none of this research was funded by you...

The finding was #19 in a list of the top 100 stories by Discover Magazine. How could the Amen Clinics, a for-profit commercial enterprise, accomplish what an army of investigators with billions in federal funding could not?

The authors1 relied on a large database of scans collected from multiple sites over a 20-year period. The total sample included 20,746 individuals who visited one of nine Amen Clinics from 1995-2014 for the purposes of psychiatric and/or neurological evaluation (Amen et al., 2015). The first analysis included a smaller, highly selected sample matched on a number of dimensions, including psychiatric comorbidities (Group 1).

- click on image for larger view -

You'll notice the percentage of patients with ADHD was remarkably high (58%, matched across the three patient groups). Perhaps that's because... I did not know that. Featuring Johnny Cash ADD.

SPECT uses a radioactive tracer injected 30 minutes before a scan that will assess either the “resting state” or an “on-task” condition (a continuous performance task, in this study). Clearly, SPECT is not the go-to method if you're looking for decent temporal resolution to compare two conditions of an active attention task. The authors used a region of interest (ROI) analysis to measure tracer activity (counts) in specific brain regions.

I wondered about the circularity of the clinical diagnosis (i.e., were the SPECT scans used to aid diagnosis), particularly since “Diagnoses were made by board certified or eligible psychiatrists, using all of the data available to them, including detailed clinical history, mental status examination and DSM-IV or V criteria...” But we were assured that wasn't the case: “These quantitative ROI metrics were in no way used to aid in the clinical diagnosis of PTSD or TBI.” The rest of the methods (see Footnote 2) were opaque to me, as I know nothing about SPECT.

A second analysis relied on visual readings (VR) of about 30 cortical and subcortical ROIs. “Raters did not have access to detailed clinical information, but did know age, gender, medications, and primary presenting symptoms (ex. depressive symptoms, apathy, etc.).” Hmm...

But the quantitative ROI analysis gave superior results to the clinician VR. So superior, in fact, that the sensitivity/specificity in distinguishing one group from another was 100% (indicated by red boxes below). The VR distinguished patients from controls with 100% accuracy, but was not as good for classifying the different patient groups during the resting state scan — only a measly 86% sensitivity and 81% specificity for TBI vs. PTSD, which is still much better than other studies. However, results from the massively sized Group 2 were completely unimpressive.3

- click on image for larger view, you'll want to see this -

Why is this so important? PTSD and TBI can show overlapping symptoms in war veterans and civilians alike, and the disorders can co-occur in the same individual. More accurate diagnosis can lead to better treatments. This active area of research is nicely reviewed in the paper, but no major breakthroughs have been reported yet.

So the claims of Amen et al. are remarkable. Stunning if true. But they're not. They can't be. The accuracy of the classifier exceeds the precision of the measurements. What is the test-retest reliability of SPECT? What is the concordance across sites? Was there no change in imaging protocol, no improvements or upgrades to the equipment over 20 years? SPECT is sensitive to motion artifact, so how was that handled, especially in patients who purportedly have ADHD?

SPECT has been noted for its poor spatial resolution compared to other functional neuroimaging techniques like PET and fMRI. A panel of ... Read more »
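For readers who want the definitions behind those percentages: sensitivity and specificity come straight from a 2×2 confusion matrix. A minimal sketch follows; the patient counts are invented purely for illustration, chosen to reproduce the 86%/81% TBI-vs-PTSD figures quoted in the post, and are not taken from the paper.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard definitions from a 2x2 confusion matrix.

    sensitivity = TP / (TP + FN): fraction of true positives detected.
    specificity = TN / (TN + FP): fraction of true negatives correctly ruled out.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical cohort: 100 TBI patients (the "positive" class) and
# 100 PTSD patients, with counts picked to match the quoted figures.
sens, spec = sensitivity_specificity(tp=86, fn=14, tn=81, fp=19)
print(sens, spec)  # → 0.86 0.81
```

Claiming 100% on both measures means zero false negatives and zero false positives across every comparison, which is exactly why the result strains belief given the measurement noise discussed above.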

  • January 8, 2016
  • 07:47 AM
  • 960 views

Opioid Drugs for Mental Anguish: Basic Research and Clinical Trials

by The Neurocritic in The Neurocritic

The prescription opioid crisis of overdosing and overprescribing has reached epic proportions, according to the North American media. Just last week, we learned that 91% of patients who survive an opioid overdose are prescribed more opioids! The CDC calls it an epidemic, and notes there's been “a 200% increase in the rate of overdose deaths involving opioid pain relievers and heroin.” A recent paper in the Annual Review of Public Health labels it a “public health crisis” and proposes “interventions to address the epidemic of opioid addiction” (Kolodny et al., 2015).

In the midst of this public and professional outcry, why on earth would anyone recommend opioid drugs as a treatment for severe depression and suicidal ideation?

Let's revisit the questions posed in my previous post: Does the pain of mental anguish rely on the same neural machinery as physical pain? Can we treat these dreaded ailments with the same medications?

The opioid-for-depression proponents would answer both of those questions in the affirmative,1 with some qualifications. First off, the actual medication in question (and its dose) is different from the typically abused opiate/opioid drug. As far as I can tell, no one is clamoring for narcotic analgesics like OxyContin and Vicodin to be used as antidepressants.

In his 2008 paper on the Psychotherapeutic Benefits of Opioid Agonist Therapy, Dr. Peter L. Tenore reviewed the history of the Opium Cure and declared, “Opioids have been used for centuries to treat a variety of psychiatric conditions with much success.” However, these drugs can be highly addictive (obviously), so he issued this caveat at the end of the paper:

It should be noted that opioids do not have FDA approval for the treatment of psychiatric disorders. The intent of this paper was not to suggest that practitioners should prescribe opioids in a manner not approved by the FDA, but rather it was to explore the mechanisms and develop hypotheses that might explain the observation that opioid-dependent psychiatric patients in appropriately certified opioid replacement therapy programs (i.e., methadone treatment programs) stabilize on higher opioid dosages than those without psychiatric diagnoses.

Methadone and especially low-dose buprenorphine are the drugs being tested for their antidepressant efficacy, even in those who have no opioid abuse issues. Buprenorphine is a mixed partial μ/κ agonist with complex actions, including:

  • Antagonist (blocker) of κ-opioid receptors (KORs) that bind dynorphins (endogenous opioids associated with anxiety and dysphoria)
  • Partial agonist at μ-opioid receptors (MORs), producing analgesic effects but with less euphoria and less respiratory depression than full agonists

Basic research in rodents suggests that KORs may be a promising target for potential psychiatric treatments in humans, based on improvements shown in standard behavioral assays such as the forced swim test and the elevated maze test (Crowley & Kash, 2015).2 But there's still a long way to go. In addition to the difficulty of modeling mental anguish in animals, the complexity of the dynorphin/KOR system — which can exhibit paradoxical and “convoluted” effects on behavior3 — presents a barrier to clinical translation.

In contrast, a very different approach uses affect modeling in an effort to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). In this view, current models of depression have hindered new breakthroughs because of their focus on animal behaviors, instead of animal emotions. Panksepp maintains that separation distress and infant versions of psychic pain, excessive sadness, and grief are mediated by the PANIC system, which is soothed by opioids. Chicks, kittens, puppies, and other infant animals emit distress vocalizations when separated from their mothers. Rat pups emit ultrasonic vocalizations and baby monkeys “coo”. These innate, reflexive, and adaptive behaviors are reduced with low doses of morphine.4 Panksepp and colleagues have inferred that very strong and human-like emotions are associated with distress vocalizations.

By way of example, here is my adult cat. He's very affectionate and chatty. He requires a lot of attention and doesn't like to be alone. Does he meow and miss me when I'm on vacation? I imagine he does. Do I think he feels psychic pain and grief while I'm gone? No.

Watt and Panksepp (2009) argue that depression is an evolutionarily conserved mechanism to terminate separation distress, drawing on psychoanalytic concepts like object relations theory as well as the literature on neuropeptides and neuromodulators implicated in major depression.

Nopan Treatment of Acute Suicidality

The research on separation distress in animals helped motivate a clinical trial that was recently published in the American Journal of Psychiatry (Yovell et al., 2015). The initial daily dose of Nopan (0.1 or 0.2 mg sublingual buprenorphine hydrochloride) was relatively low, reaching a maximum dose of 0.8 mg daily by the end of the four-week trial (mean = 0.44 mg). By way of comparison, the ... Read more »

  • December 29, 2015
  • 07:04 AM
  • 1,109 views

Social Pain Revisited: Opioids for Severe Suicidal Ideation

by The Neurocritic in The Neurocritic

Does the pain of mental anguish rely on the same neural machinery as physical pain? Can we treat these dreaded ailments with the same medications? These issues have come to the fore in the field of social/cognitive/affective neuroscience.

As many readers know, Lieberman and Eisenberger (2015) recently published a controversial paper claiming that a brain region called the dorsal anterior cingulate cortex (dACC, shown above) is “selective” for pain.1 This finding fits with their long-time narrative that rejection literally “hurts” — social pain is analogous to physical pain, and both are supported by activity in the same regions of dACC (Eisenberger et al., 2003). Their argument is based on work by Dr. Jaak Panksepp and colleagues, who study separation distress and other affective responses in animals (Panksepp & Yovell, 2014).

Panksepp wrote The Book on Affective Neuroscience in 1998, and coined the term even earlier (Panksepp, 1992). He also wrote a Perspective piece in Science to accompany Eisenberger et al.'s 2003 paper:

We often speak about the loss of a loved one in terms of painful feelings, but it is still not clear to what extent such metaphors reflect what is actually happening in the human brain. Enter Eisenberger and colleagues ... with a bold neuroimaging experiment that seeks to discover whether the metaphor for the psychological pain of social loss is reflected in the neural circuitry of the human brain. Using functional magnetic resonance imaging (fMRI), they show that certain human brain areas that “light up” during physical pain are also activated during emotional pain induced by social exclusion [i.e., exclusion from playing a video game].

But as I've argued for years, Social Pain and Physical Pain Are Not Interchangeable. Whenever I read an article proclaiming that “the brain bases of social pain are similar to those of physical pain”, I am reminded of how phenomenologically DIFFERENT they are. 
And subsequent work has demonstrated that physical pain and actual social rejection (a recent romantic break-up) do not activate the same regions of dACC (Woo et al., 2014). Furthermore, multivariate activation patterns across the entire brain can discriminate pain and rejection with high accuracy.2

Modified from Fig. 3 (Woo et al., 2014). Differences between fMRI pattern-based classifiers for pain and rejection.

Feelings of rejection were elicited in the participants by showing them pictures of their ex-partners (vs. pictures of close friends), and physical pain was elicited by applying painful heat to the forearm (vs. warm heat).

Does this mean there is no overlap between brain systems that can dampen physical and emotional pain (e.g., endogenous opioids)? Of course not; otherwise those suffering from utter despair, unspeakable loneliness, and other forms of psychic turmoil would not self-medicate with mind-altering substances.

Separation Distress: Of Mice and Psychoanalysis

Although Panksepp has worked primarily with rodents and other animals throughout his career, he maintains a keen interest in neuropsychoanalysis, an attempt to merge Freudian psychoanalysis with contemporary neuroscience. Neuropsychoanalysis “seeks to understand the human mind, especially as it relates to first-person experience.” If you think that's a misguided (and impossible) quest, you might be surprised by some of the prominent neuroscientists who have signed on to this agenda (see these posts).

Prof. Panksepp is currently collaborating with Prof. Yoram Yovell, a psychoanalyst and neuroscientist at the Institute for the Study of Affective Neuroscience (ISAN) in Haifa. A recent review paper addresses their approach of affective modeling in animals as a way to accelerate drug development in neuropsychiatry (Panksepp & Yovell, 2014). 
Their view is that current models of depression, which focus on animal behaviors instead of animal emotions, have hindered new breakthroughs in treatments for depression. It’s actually a fascinating and ambitious research program:

We admit that our conceptual position may be only an empirical/ontological approximation, especially when contrasted to affective qualia in humans … but it is at least a workable empirical approach that remains much underutilized. Here we advance the view that such affective modeling can yield new medical treatments more rapidly than simply focusing on behavioral processes in animals. In sum, we propose that the neglect of affect in preclinical psychiatric modeling may be a major reason why no truly new psychiatric medicinal treatments have arisen from behavior-only preclinical modeling so far.

They propose that three key primal emotional systems3 may be critical for understanding depression: SEEKING (enthusiasm-exuberance), PANIC (psychic pain), and PLAY (joyful exuberance). If these constructs sound highly anthropomorphic when applied to rats, it's because they are!! Perhaps you'd rather “reaffirm classical behaviorist dogma” (Panksepp & Yovell, 2014) and stick with more traditional notions like brain reward systems, separation distress, and 50-kHz ultrasonic vocalizations (e.g., during tickling, mating, and play) when studying rodents.

Of interest today is the PANIC system (Panksepp & Yovell, 2014), which “mediates the psychic pain of separation distress (i.e. excessive sadness and grief), which can be counteracted by minimizing PANIC arousals (as with low-dose opioids).” Since low-dose opioids alleviate separation distress in animals (based on reductions in distress vocalizations), why not give them to suicidal humans suffering from psychic pain?

Well... because making strong inferences about the contents of animal minds is deeply problematic (... Read more »
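The whole-brain pattern classification mentioned above (Woo et al., 2014) is, at bottom, supervised learning on voxel patterns with held-out testing. Here is a deliberately minimal sketch of that general logic on synthetic data; it is not the Woo et al. pipeline, and the trial counts, voxel counts, and signal strength are all made up for illustration:

```python
# Toy sketch of multivoxel pattern analysis (MVPA), NOT the Woo et al.
# (2014) pipeline: two conditions share most "voxels" but differ in a
# weak distributed pattern, and a cross-validated classifier can still
# tell them apart. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_voxels = 40, 500

# Weak distributed signal added to condition 1 ("rejection") trials
signal = rng.standard_normal(n_voxels) * 0.5
X = rng.standard_normal((n_trials, n_voxels))   # noise "activation" patterns
y = np.array([0, 1] * (n_trials // 2))          # 0 = "pain", 1 = "rejection"
X[y == 1] += signal

# Leave-one-out cross-validation with a nearest-centroid classifier
correct = 0
for i in range(n_trials):
    train = np.arange(n_trials) != i
    c0 = X[train & (y == 0)].mean(axis=0)       # centroid of class 0
    c1 = X[train & (y == 1)].mean(axis=0)       # centroid of class 1
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += (pred == y[i])

print(f"cross-validated accuracy: {correct / n_trials:.0%}")
```

The point of the sketch: because the classifier pools information across all voxels, it can discriminate conditions whose average activations overlap almost everywhere, which is exactly why pattern-based analyses can separate pain from rejection even where univariate dACC activity cannot.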

  • December 12, 2015
  • 05:47 PM
  • 935 views

This Week in Neuroblunders: Optogenetics Edition

by The Neurocritic in The Neurocritic

Recent technological developments in neuroscience have enabled rapid advances in our knowledge of how neural circuits function in awake behaving animals. Highly targeted and reversible manipulations using light (optogenetics) or drugs have allowed scientists to demonstrate that activating a tiny population of neurons can evoke specific memories or induce insatiable feeding.

But this week we learned these popular and precise brain stimulation and inactivation methods may produce spurious links to behavior!! And that “controlling neurons with light or drugs may affect the brain in more ways than expected”! Who knew that rapid and reversible manipulations of a specific cell population might actually affect (gasp) more than the targeted circuit, suggesting that neural circuits do not operate in isolation??

Apparently, a lot of people already knew this.

Here's the dire Nature News report:

...stimulating one part of the brain to induce certain behaviours might cause other, unrelated parts to fire simultaneously, and so make it seem as if these circuits are also involved in the behaviour.

According to Ölveczky, the experiments suggest that although techniques such as optogenetics may show that a circuit can perform a function, they do not necessarily show that it normally performs that function. “I don’t want to say other studies have been wrong, but there is a danger to overinterpreting,” he says.

But the paper in question (Otchy et al., 2015) was not primarily about that problem. The major theme is shown in the figure above — the difference between acute manipulations using a drug (muscimol) to transiently inactivate a circuit versus the chronic effects of permanent damage (which show remarkable recovery).1 In the songbird example, acute inactivation of the nucleus interface (Nif) vocal control area (and its “off-target” attachments) warped singing, but the “chronic” lesion did not.2

In an accompanying commentary, Dr. Thomas C. Südhof asked:

How should we interpret these experiments? Two opposing hypotheses come to mind. First, that acute manipulations are unreliable and should be discarded in favour of chronic manipulations. Second, that acute manipulations elicit results that truly reflect normal circuit functions, and the lack of changes after chronic manipulations is caused by compensatory plasticity.

But not so fast! said Südhof (2015), who then stated the obvious. “Many chronic manipulations of neural circuits (both permanent genetic changes and physical lesions) do actually produce major behavioural changes.” [as if no one had ever heard of H.M. or Phineas Gage or Leborgne before now.]

The acute/chronic conundrum is nothing new in the world of human neurology. But centuries of crudely observing accidents of nature, with no control over which brain regions are damaged, and no delineation of precise neural mechanisms for behavior, don't count for much in our store of knowledge about acute vs. chronic manipulations of neural circuits.

Let's take a look at a few examples anyway. In his 1876 Lecture on the Prognosis of Cerebral Hæmorrhage, Dr. Julius Althaus discussed recovery of function:

Do patients ever completely recover from an attack of cerebral hæmorrhage?

This question used formerly to be unhesitatingly answered in the affirmative.

. . .

The extent to which recovery of function may take place depends–

1. Upon the quantity of blood which has been effused.  ...

2. Upon the portion of the brain into which the effusion has taken place. Sensation is more easily re-established than motion; and hæmorrhage into the thalamus opticus seems to give better prospects of recovery than when the blood tears up the corpus striatum.  ...

[etc.]

In his 1913 textbook of neurology (Organic and Functional Nervous Diseases), Dr. Moses Allen Starr discussed aspects of paralysis from cortical disease, and the uniqueness of motor representations across individuals: “Every artisan, every musician, every dancer, has a peculiar individual store of motor memories. Some individuals possess a greater variety of them than others. Hence the motor zone on the cortex is of different extent in different persons, each newly acquired set of movements increasing its area.”

In 1983, we could read about Behavioral abnormalities after right hemisphere stroke and then Recovery of behavioral abnormalities after right hemisphere stroke.

More recently, there's been an emphasis on connectome-based approaches for quantifying the effects of focal brain injuries on large-scale network interactions, and how this might predict neuropsychological outcomes. So the trend in human neuroscience is to acknowledge the impact of chronic lesions on distant brain regions, rather than the current contention [in animals, of course] that “acute manipulations are probably more susceptible to off-target effects than are chronic lesions.”

But I digress... Based on two Nature commentaries about the Otchy et al. paper, I was expecting “ah ha, gotcha, optogenetics is a fatally flawed technique.” This Hold Your Horses narrative fits nicely into a recap of neurogaffes in high places. One of the experiments did indeed use an optogenetic manipulation, but the issue wasn't specific to that method.

Ultimately, the neuroblunder for me wasn't the Experimental mismatch in neural circuits (or a failure of optogenetics per se), it was the mismatch between the-problem-as-hyped and a lack of historical context for said problem.

Footnotes
1... Read more »

  • November 30, 2015
  • 01:34 AM
  • 881 views

Carving Up Brain Disorders

by The Neurocritic in The Neurocritic

Neurology and Psychiatry are two distinct specialties within medicine, both of which treat disorders of the brain. It's completely uncontroversial to say that neurologists treat patients with brain disorders like Alzheimer's disease and Parkinson's disease. These two diseases produce distinct patterns of neurodegeneration that are visible on brain scans. For example, Parkinson's disease (PD) is a movement disorder caused by the loss of dopamine neurons in the midbrain.

Fig. 3 (modified from Goldstein et al., 2007). Brain PET scans superimposed on MRI scans. Note decreased dopamine signal in the putamen and substantia nigra (S.N.) bilaterally in the patient.

It's also uncontroversial to say that drugs like L-DOPA and invasive neurosurgical interventions like deep brain stimulation (DBS) are used to treat PD.

On the other hand, some people will balk when you say that psychiatric illnesses like bipolar disorder and depression are brain disorders, and that drugs and DBS (in severe intractable cases) may be used to treat them. You can't always point to clear-cut differences in the MRI or PET scans of psychiatric patients, as you can with PD (which is a particularly obvious example).

The diagnostic methods used in neurology and psychiatry are quite different as well. The standard neurological exam assesses sensory and motor responses (e.g., reflexes) and basic mental status. PD has sharply defined motor symptoms including tremor, rigidity, impaired balance, and slowness of movement. There are definitely cases where the symptoms of PD should be attributed to another disease (most notably Lewy body dementia)1, and other examples where neurological diagnosis is not immediately possible. But by and large, no one questions the existence of a brain disorder.

Things are different in psychiatry. Diagnosis is not based on a physical exam. 
Psychiatrists and psychologists give clinical interviews based on the Diagnostic and Statistical Manual (DSM-5), a handbook of mental disorders defined by a panel of experts with opinions that are not universally accepted. The update from DSM-IV to DSM-5 was highly controversial (and widely discussed). The causes of mental disorders are not only biological, but often include important social and interpersonal factors. And their manifestations can vary across cultures.

Shortly before the release of DSM-5, the former director of NIMH (Dr. Tom Insel) famously dissed the new manual:

The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

In other words, where are the clinical tests for psychiatric disorders?

For years, NIMH has been working on an alternate classification scheme, the Research Domain Criteria (RDoC) project, which treats mental illnesses as brain disorders that should be studied according to domains of functioning (e.g., negative valence). Dimensional constructs such as acute threat (“fear”) are key, rather than categorical DSM diagnosis. RDoC has been widely discussed on this blog and elsewhere – it's the best thing since sliced bread, it's necessary but very oversold, or it's ill-advised.

What does this have to do with neurology, you might ask? In 2007, Insel called for the merger of neurology and psychiatry:

Just as research during the Decade of the Brain (1990-2000) forged the bridge between the mind and the brain, research in the current decade is helping us to understand mental illnesses as brain disorders. 
As a result, the distinction between disorders of neurology (e.g., Parkinson's and Alzheimer's diseases) and disorders of psychiatry (e.g., schizophrenia and depression) may turn out to be increasingly subtle. That is, the former may result from focal lesions in the brain, whereas the latter arise from abnormal activity in specific brain circuits in the absence of a detectable lesion. As we become more adept at detecting lesions that lead to abnormal function, it is even possible that the distinction between neurological and psychiatric disorders will vanish, leading to a combined discipline of clinical neuroscience.

Actually, Insel's view dates back to 2005 (Insel & Quirion, 2005)...2

Future training might begin with two post-graduate years of clinical neuroscience shared by the disciplines we now call neurology and psychiatry, followed by two or three years of specialty training in one of several sub-disciplines (ranging from peripheral neuropathies to public sector and transcultural psychiatry). This model recognizes that the clinical neurosciences have matured sufficiently to resemble internal medicine, with core training required prior to specializing.

...and was expressed earlier by Dr. Joseph B. Martin, Dean of Harvard Medical School (Martin, 2002):

Neurology and psychiatry have, for much of the past century, been separated by an artificial wall created by the divergence of their philosophical approaches and research and treatment methods. Scientific advances in recent decades have made it clear that this separation is arbitrary and counterproductive. .... Further progress in understanding brain diseases and behavior demands fuller collaboration and integration of these fields. Leaders in academic medicine and science must work to break down the barriers between disciplines.

Contemporary leaders and observers of academic medicine are not all equally ecstatic about this prospect, however. ... Read more »

Crossley, N., Scott, J., Ellison-Wright, I., & Mechelli, A. (2015) Neuroimaging distinction between neurological and psychiatric disorders. The British Journal of Psychiatry, 207(5), 429-434. DOI: 10.1192/bjp.bp.114.154393  

David, A., & Nicholson, T. (2015) Are neurological and psychiatric disorders different?. The British Journal of Psychiatry, 207(5), 373-374. DOI: 10.1192/bjp.bp.114.158550  

  • November 23, 2015
  • 12:58 AM
  • 1,107 views

Happiness Is a Large Precuneus

by The Neurocritic in The Neurocritic

What is happiness, and how do we find it? There are 93,290 books on happiness at Amazon.com. Happiness is Life's Most Important Skill, an Advantage and a Project and a Hypothesis that we can Stumble On and Hard-Wire in 21 Days. The Pursuit of Happiness is an Unalienable Right granted to all human beings, but it also generates billions of dollars for the self-help industry.

And now the search for happiness is over! Scientists have determined that happiness is located in a small region of your right medial parietal lobe. Positive psychology gurus will have to adapt to the changing landscape or lose their market edge. “My seven practical, actionable principles are guaranteed to increase the size of your precuneus or your money back.”

The structural neural substrate of subjective happiness is the precuneus.

A new paper has reported that happiness is related to the volume of gray matter in a 222.8 mm3 cluster of the right precuneus (Sato et al., 2015). What does this mean? Taking the finding at face value, there was a correlation (not a causal relationship) between precuneus gray matter volume and scores on the Japanese version of the Subjective Happiness Scale.1

Fig. 1 (modified from Sato et al., 2015). Left: Statistical parametric map (p < 0.001, peak-level uncorrected for display purposes). The blue cross indicates the location of the peak voxel. Right: Scatter plot of the adjusted gray matter volume as a function of the subjective happiness score at the peak voxel. [NOTE: Haven't we agreed to not show regression lines through scatter plots based on the single voxel where the effect is the largest??]

“The search for happiness: Using MRI to find where happiness happens,” said one deceptive headline. Should we accept the claim that one small region of the brain is entirely responsible for generating and maintaining this complex and desirable state of being? NO. Of course not. And the experimental subjects were not actively involved in any sort of task at all. 
The study used a static measure of gray matter volume in four brain Regions of Interest (ROIs): left anterior cingulate gyrus, left posterior cingulate gyrus, right precuneus, and left amygdala. These ROIs were based on an fMRI activation study in 26 German men (mean age 33 yrs) who underwent a mood induction procedure (Habel et al., 2005). The German participants viewed pictures of faces with happy expressions and were told to “Look at each face and use it to help you to feel happy.” The brain activity elicited by happy faces was compared to activity elicited by a non-emotional control condition. Eight regions were reported in their Table 1.

Table 1 (modified from Habel et al., 2005).

Only four of those regions were selected as ROIs by Sato et al. (2015). One of these was a tiny 12-voxel region in the paracentral lobule, which was called precuneus by Sato et al. (2015).

Image: John A Beal, PhD. Dept. of Cellular Biology & Anatomy, Louisiana State University Health Sciences Center Shreveport.

Before you say I'm being overly pedantic, we can agree that the selected coordinates are at the border of the precuneus and the paracentral lobule. The more interesting fact is that the sadness induction of Habel et al. (2005) implicated a very large region of the posterior precuneus and surrounding regions (1562 voxels). An area over 100 times larger than the Happy Precuneus.

Oops. But the precuneus contains multitudes, so maybe it's not so tragic. The precuneus is potentially involved in very lofty functions like consciousness and self-awareness and the recollection of autobiographical memories. It's also a functional core of the default-mode network (Utevsky et al., 2014), which is active during daydreaming and mind wandering and unconstrained thinking. 
But it seems a bit problematic to use hand-picked ROIs from a study of transient and mild “happy” states (in a population of German males) to predict a stable trait of subjective happiness in a culturally distinct group of younger Japanese college students (26 women, 25 men).

Cross-Cultural Notions of Happiness

Isn't “happiness” a social construct (largely defined by Western thought) that varies across cultures?... Read more »
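The complaint above about drawing a regression line through the peak voxel deserves a number. If you scan many voxels for the strongest brain-behavior correlation and then plot only the winner, you get an impressive-looking scatter plot even from pure noise. A quick simulation (all numbers invented; only the sample size of 51 echoes the study, and the voxel count is an arbitrary stand-in):

```python
# Selection-bias simulation: correlate a fake "happiness" score with
# thousands of pure-noise "gray matter" voxels, then look at the peak.
# The true correlation everywhere is zero, yet the peak voxel's r is
# sizeable -- which is why peak-voxel scatter plots overstate effects.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 51, 10_000   # 51 subjects, as in Sato et al.

happiness = rng.standard_normal(n_subjects)            # fake SHS scores
volumes = rng.standard_normal((n_voxels, n_subjects))  # pure-noise "volumes"

# Pearson r of every voxel with the happiness score, vectorized:
# z-score both sides, then average the products across subjects.
zh = (happiness - happiness.mean()) / happiness.std()
zv = (volumes - volumes.mean(axis=1, keepdims=True)) / volumes.std(axis=1, keepdims=True)
r = zv @ zh / n_subjects

print(f"true effect everywhere: r = 0; observed peak |r| = {np.abs(r).max():.2f}")
```

With a null effect and ten thousand voxels, the peak correlation routinely lands well above 0.4, purely from the max operation. Independent data, or statistics that account for the search across voxels, are needed before a peak-voxel scatter plot means anything.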

Sato, W., Kochiyama, T., Uono, S., Kubota, Y., Sawada, R., Yoshimura, S., & Toichi, M. (2015) The structural neural substrate of subjective happiness. Scientific Reports, 16891. DOI: 10.1038/srep16891  

  • November 16, 2015
  • 05:50 AM
  • 1,099 views

The Neuroscience of Social Media: An Unofficial History

by The Neurocritic in The Neurocritic

There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.

Meshi, Tamir, and Heekeren / Trends in Cognitive Sciences (2015)

More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later).

The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).

Fig. 2 (Meshi et al., 2015). Proposed Brain Networks Involved in Social Media Use. (i) mentalizing network: dorsomedial prefrontal cortex (DMPFC), temporoparietal junction (TPJ), anterior temporal lobe (ATL), inferior frontal gyrus (IFG), and the posterior cingulate cortex/precuneus (PCC); (ii) self-referential network: medial prefrontal cortex (MPFC) and PCC; and (iii) reward network: ventromedial prefrontal cortex (VMPFC), ventral striatum (VS), and ventral tegmental area (VTA).

The article's publication was announced on social media:

The emerging neuroscience of social media. New review in @TrendsCognSci: https://t.co/2JDIeCvJsT pic.twitter.com/2vwv827bdI — CellPressNews (@CellPressNews) November 11, 2015

I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter. I proposed that someone should do a study to examine the neural correlates of Twitter use:

It was bound to happen. 
Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).

A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

(1) Celebrity Fluff
(2) Social Media Marketing Drivel
(3) Friends on your Following List

... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

The major predicted results are as follows:

Reading the Tweets of your close friends will engage a network of regions involved in self-referential processing of similar others, including the posterior superior temporal sulcus (STS) and adjacent temporo-parietal junction (TPJ), and the ventral medial prefrontal cortex (Mitchell et al., 2006).

Fig. 2A (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.

Figure from Paul Pietsch, Ph.D. The frontal eye fields are i... Read more »

Meshi, D., Tamir, D.I., & Heekeren, H.R. (2015) The Emerging Neuroscience of Social Media. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2015.09.004  

  • November 11, 2015
  • 03:29 AM
  • 1,025 views

Obesity Is Not Like Being "Addicted to Food"

by The Neurocritic in The Neurocritic

Credit: Image courtesy of Aalto University

Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Bailer, 2015).

Fig. 1 (Meule, 2015). Number of scientific publications on food addiction (1990-2014). Web of Science search term “food addiction”.

Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

Fig. 2 (Meule, 2015). History of food addiction research.

Thus, the concept of food addiction predates the documented rise in obesity in the US, which really took off in the late 80s to late 90s (as shown below).1

Prevalence of Obesity in the United States, 1960-2012

  1960-62      12.80%
  1971-74      14.10%
  1976-80      14.50%
  1988-89      22.50%
  1999-2000    30.50%
  2007-08      33.80%
  2011-12      34.90%

Sources: Flegal et al. 1998, 2002, 2010; Ogden et al. 2014

One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2 Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):

...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. 
Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right, so this must mean they're addicted to food? Well think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). Conversely, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in a mean weight loss of 26.1 kg, or 57.5 lbs), MOR returned to control values, while the unaltered D2 receptors stayed the same.

In the study, 16 obese women (mean BMI = 40.4, age 42.8) had PET scans before and six months after undergoing the standard gastric bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI = 22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions. 
The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls.... Read more »

  • October 29, 2015
  • 06:54 AM
  • 983 views

Ophidianthropy: The Delusion of Being Transformed into a Snake

by The Neurocritic in The Neurocritic

Scene from Sssssss (1973).

“When Dr. Stoner needs a new research assistant for his herpetological research, he recruits David Blake from the local college. Oh, and he turns him into a snake for sh*ts and giggles.”
Movie Review by Jason Grey

Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding). THERE ARE NO ACTUAL PICTURES OF SNAKES [an important note for snake phobics].

The first case of ophidianthropy was described by Kattimani et al. (2010):

A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside. ... She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. 
Her most recent treatment regimen (escitalopram plus quetiapine) was also a failure because the snake delusion persisted. “Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...

Scene from Sssssss (1973).

Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):

A 23 year old, married, Hindu male, with past history of ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ...
the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.

Wow.

He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved but others did not. “Long term follow up is not available.”

The discussion of this case is a bit... terrifying:

Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).

Kydd, O.U., Major, A., Minor, C (1991). A really neat, squeaky-clean isolation and characterization of two lycanthropogens from nearly subhuman populations of Homo sapiens. J. Ultratough Molec. Biochem. 101: 3521-3532. [this is obviously a fake citation]

Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

Oh my god, the paper by Davis et al. on the Psychopharmacology of Lycanthropy (and "endogenous lycanthropogens") was published in the April 1, 1992 issue of the Canadian Medical Association Journal. There are no such compounds as lupinone and buldogone.

Fig. 1 (Davis et al., 1992... Read more »

  • October 26, 2015
  • 01:47 AM
  • 789 views

On the Long Way Down: The Neurophenomenology of Ketamine

by The Neurocritic in The Neurocritic

Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

Ketamine is a dissociative anesthetic and PCP derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically, it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).

Imaging Phenomenologically Distinct Effects of Ketamine

In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine 2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.

ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse.
Basically, it's a good method for monitoring CBF over a number of minutes. ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug. Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect.

The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.

Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.

Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on only 10 and 12 data points, where outliers can drive phenomenally large effects. One might like to say...

For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.
...but this clearly requires replication studies with placebo comparisons and larger subject groups.

Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

A recent review of seven placebo-controlled, double-blind, randomized clinical trials of ketamine and other NMDA antagonists concluded (Newport et al., 2015):

The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action. The fleeting nature of ketamine’s therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

The mysterious and paradoxical ways of ketamine continue... So take it in, don't hold your breath... Read more »
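To make the weight-based doses above concrete, here is a minimal sketch converting mg/kg into absolute amounts; the 70 kg body weight is an assumed example, not a figure from either study.

```python
# Convert the weight-based ketamine bolus doses mentioned above (mg/kg)
# into absolute amounts for a hypothetical 70 kg participant.
DOSES_MG_PER_KG = {
    "low-dose bolus (Pollak et al.)": 0.13,
    "high-dose bolus (Pollak et al.)": 0.26,
    "typical antidepressant dose": 0.50,
}

def absolute_dose_mg(dose_mg_per_kg: float, weight_kg: float) -> float:
    """Absolute dose in mg for a given body weight."""
    return dose_mg_per_kg * weight_kg

for label, dose in DOSES_MG_PER_KG.items():
    print(f"{label}: {absolute_dose_mg(dose, 70):.1f} mg")
```

So even the "high" experimental bolus here is roughly half the absolute amount of a typical antidepressant infusion for the same body weight.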

  • September 27, 2015
  • 01:02 AM
  • 746 views

Neurohackers Gone Wild!

by The Neurocritic in The Neurocritic

Scene from Listening, a new neuro science fiction film by writer-director Khalil Sullins.

What are some of the goals of research in human neuroscience?
To explain how the mind works.
To unravel the mysteries of consciousness and free will.
To develop better treatments for mental and neurological illnesses.
To allow paralyzed individuals to walk again.

Brain decoding experiments that use fMRI or ECoG (direct recordings of the brain in epilepsy patients) to deduce what a person is looking at or saying or thinking have become increasingly popular as well. They're still quite limited in scope, but any study that can invoke “mind reading” or “brain-to-brain” scenarios will attract the press like moths to a flame...

For example, here's how NeuroNews site Brain Decoder covered the latest “brain-to-brain communication” stunt and the requisite sci fi predictions:

Scientists Connect 2 Brains to Play “20 Questions”

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind. . . . This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, [author Andrea Stocco] said.

Mind reading! Yay! But this isn't what happened. No thoughts were decoded in the making of this paper (Stocco et al., 2015). Instead, stimulation of visual cortex did all the “talking.” Player One looked at an LED that indicated “yes” (13 Hz flashes) or “no” (12 Hz flashes). Steady-state visual evoked potentials (a type of EEG signal very common in BCI research) varied according to flicker rate, and this binary code was transmitted to a second computer, which triggered a magnetic pulse delivered to the visual cortex of Player Two if the answer was yes.
The TMS pulse in turn elicited a phosphene (a brief visual percept) that indicated yes (no phosphene indicated a “no” answer).

Eventually, we see some backpedalling in the Brain Decoder article:

Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible."

Well, um, we already have the first half of the equation to some small degree (Naselaris et al. 2015 decoded mental images of remembered scenes)...

But the Big Prize goes to... the decoders of covert speech, or inner thoughts!! (Martin et al. 2014)

Scientists develop a brain decoder that can hear your inner thoughts
Brain decoder can eavesdrop on your inner voice

Listening to Your Thoughts

The new film Listening starts off with a riff on this work and spins into a dark and dangerous place where no thought is private. Given the preponderance of “hearing” metaphors above, it's fitting that the title is Listening, where fiction (in this case near-future science fiction) is stranger than truth. The hazard of watching a movie that depicts your field of expertise is that you nitpick every little thing (like the scalp EEG sensors that record from individual neurons). This impulse was exacerbated by a setting that is so near-future that it's present day.

From Marilyn Monroe Neurons to Carbon Nanotubes

But there were many things I did like about Listening.1 In particular, I enjoyed the way the plot developed in the second half of the film, especially in the last 30 minutes. On the lighter side was this amusing scene of a pompous professor lecturing on the real-life finding of Marilyn Monroe neurons (Quian Quiroga et al., 2005, 2009).

Caltech Professor: “For example, the subject is asked to think about Marilyn Monroe.
My study suggests not only conscious control in the hippocampus and parahippocampal cortex, when the neuron....”

Conversation between two grad students in back of class: “Hey, you hear about the new bioengineering transfer?” ...

Caltech Professor: “Mr. Thorogood, perhaps you can enlighten us all with Ryan's gossip? Or tell us what else we can conclude from this study?”

Ryan the douchy hardware guy: “We can conclude that all neurosurgeons are in love with Marilyn Monroe.”

David the thoughtful software guy: “A single neuron has not only the ability to carry complex code and abstract form but is also able to override sensory input through cognitive effort. It suggests thought is a stronger reality than the world around us.”

Caltech Professor: “Unfortunately, I think you're both correct.”

Ryan and David are grad students with Big Plans. They've set up a garage lab (with stolen computer equipment) to work on their secret EEG decoding project. Ryan the douche lets Jordan the hot bioengineering transfer into their boys' club, much to David's dismay.

Ryan: “She's assigned to Professor Hamomoto's experiment with ATP-powered cell-binding nanotube devices.” [maybe these?]

So she gets to stay in the garage. For the demonstration, Ryan sports an EEG net that looks remarkably like the ones made by EGI (shown below on the right). Ryan reckons they'll put cell phone companies out of business with their mind reading invention, but David realizes they have a long way to go...... Read more »
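The SSVEP trick behind the “20 Questions” stunt is conceptually simple: decide which flicker frequency dominates the EEG spectrum. Here is a minimal sketch with simulated data; it is not the paper's actual signal-processing pipeline, and the sampling rate, trial length, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the SSVEP "yes/no" decoding idea: classify whether a
# (simulated) EEG trace is dominated by a 13 Hz ("yes") or 12 Hz ("no")
# flicker response by comparing spectral power at the two frequencies.
FS = 250          # sampling rate in Hz (assumed)
DURATION = 4.0    # seconds of data per answer (assumed)
YES_HZ, NO_HZ = 13.0, 12.0

def simulate_ssvep(freq_hz: float, noise: float = 1.0) -> np.ndarray:
    """A sine wave at the flicker frequency buried in Gaussian noise."""
    t = np.arange(0, DURATION, 1.0 / FS)
    rng = np.random.default_rng(0)
    return np.sin(2 * np.pi * freq_hz * t) + noise * rng.standard_normal(t.size)

def classify_answer(eeg: np.ndarray) -> str:
    """Return 'yes' or 'no' based on which flicker frequency has more power."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / FS)
    power_yes = spectrum[np.argmin(np.abs(freqs - YES_HZ))]
    power_no = spectrum[np.argmin(np.abs(freqs - NO_HZ))]
    return "yes" if power_yes > power_no else "no"

print(classify_answer(simulate_ssvep(YES_HZ)))  # prints "yes"
print(classify_answer(simulate_ssvep(NO_HZ)))   # prints "no"
```

With 4 seconds of data the frequency resolution is 0.25 Hz, comfortably enough to separate 12 from 13 Hz, which is why a one-bit "answer" is easy while decoding an actual thought is not.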

  • August 31, 2015
  • 04:31 AM
  • 880 views

Cats on Treadmills (and the plasticity of biological motion perception)

by The Neurocritic in The Neurocritic

Cats on a treadmill. From Treadmill Kittens.

It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately. Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015).

Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.

from BioMotion Lab 1

Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical (or sensitive) period early in life depends on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs).
Mean age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG to point light displays of a walking man and to scrambled versions of the walking man (see demo).

from BioMotion Lab

Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).

The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).

The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc, but that difference wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results: (1) the neural circuitry associated with the processing of BM can specialize in late childhood or adulthood, i.e., visual input initiates the functional maturation of the BM system as soon as it becomes available; (2) the neural systems for BM are shaped cross-modally; or (3) they mature independently of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then?
Here the authors wave their hands and arms a bit:

We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...

Footnote

1 Troje & Westhoff (2006): "The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."

Reference... Read more »
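The ERP comparison at the heart of the study is, at bottom, trial averaging: time-lock the EEG to each stimulus onset and average across trials, so stimulus-locked activity survives while noise cancels. A minimal sketch with simulated data; the sampling rate, epoch length, component timing, and noise level are illustrative assumptions, not the study's parameters.

```python
import numpy as np

FS = 500        # sampling rate in Hz (assumed)
EPOCH_S = 0.5   # epoch length after stimulus onset, in seconds (assumed)
N_TRIALS = 100

rng = np.random.default_rng(42)
n_samp = int(FS * EPOCH_S)
t = np.arange(n_samp) / FS

# Simulated single trials: a negative component peaking ~0.25 s ("N1"-like),
# buried in noise that is large relative to the signal on any one trial.
component = -2.0 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
trials = component + 5.0 * rng.standard_normal((N_TRIALS, n_samp))

# Averaging across trials recovers the stimulus-locked component.
erp = trials.mean(axis=0)

peak_time = t[np.argmin(erp)]
print(f"ERP peak latency: {peak_time:.3f} s")  # close to the simulated 0.25 s
```

Averaging 100 trials shrinks the noise by a factor of 10 (sqrt of the trial count), which is why a component invisible in single trials shows up clearly in the group ERPs.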

  • August 10, 2015
  • 07:35 AM
  • 1,036 views

Will machine learning create new diagnostic categories, or just refine the ones we already have?

by The Neurocritic in The Neurocritic

How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2

ILLUSTRATION: V. Altounian and C. Smith / SCIENCE

In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously. Yes, RDoC is ambitious (and has its share of naysayers).
But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity. 4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes, How do we classify and diagnose brain disorders (er, neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:

1. Criterion 1: diagnosticity. Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists. Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability. Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability. Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability. Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be val... Read more »
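Criterion 1 is easy to make concrete: sensitivity and specificity fall straight out of a confusion matrix. A minimal sketch with made-up labels (1 = patient, 0 = control), not data from any study discussed here:

```python
# Sensitivity and specificity from predicted vs. true labels, using
# made-up binary labels purely to illustrate Woo & Wager's Criterion 1.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical diagnoses
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]   # hypothetical classifier output
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.75
```

Note that a single "accuracy" number hides the trade-off between the two, which is why the primer asks for both.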

Insel, T., & Cuthbert, B. (2015) Brain disorders? Precisely. Science, 348(6234), 499-500. DOI: 10.1126/science.aab2358  

  • August 1, 2015
  • 08:42 PM
  • 862 views

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning

by The Neurocritic in The Neurocritic

R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

You should really head over there right now to view it, because it's very impressive.

Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the “reality check” is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.

machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
fMRI signature = “guilt-selective anterior temporal functional connectivity changes” (seems a bit overly specific and esoteric, no?)
vulnerability to major depression = 25 participants with remitted depression vs.
21 never-depressed participants

The authors used a “standard leave-one-subject-out procedure in which the classification is cross-validated iteratively by using a model based on the sample after excluding one subject to independently predict group membership,” but they did not test their fMRI signature in completely independent groups of participants. Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group, who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?

modified from Fig 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs. controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.

The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and had been in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e.
“new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than a past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here... In their next post, R2D3 will discuss overfitting:

Ideally, the [decision] tree should perform similarly on both known and unknown data. So this one is less than ideal. [NOTE: the one that's 90% in the top figure] These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3

Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr. Daniel Amen.
The work of the investigators discussed here is in ... Read more »
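The leave-one-subject-out procedure criticized above is easy to sketch: each subject in turn is held out, the model is fit on the rest, and the held-out subject is predicted. A toy version with a nearest-centroid classifier and made-up 2-D "features" (this is not the MLDA pipeline or any real data); note that even done correctly, this still never tests the model on a truly independent sample.

```python
import numpy as np

# Toy leave-one-subject-out cross-validation with a nearest-centroid
# classifier. Features and labels are fabricated for illustration only.
rng = np.random.default_rng(1)
n_per_group = 20
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_group, 2)),   # "controls"
    rng.normal(loc=1.5, scale=1.0, size=(n_per_group, 2)),   # "patients"
])
y = np.array([0] * n_per_group + [1] * n_per_group)

def predict_nearest_centroid(X_train, y_train, x_test):
    """Assign x_test to the group with the nearest training-set centroid."""
    centroids = [X_train[y_train == g].mean(axis=0) for g in (0, 1)]
    dists = [np.linalg.norm(x_test - c) for c in centroids]
    return int(np.argmin(dists))

correct = 0
for i in range(len(y)):                       # hold out one subject at a time
    mask = np.arange(len(y)) != i
    pred = predict_nearest_centroid(X[mask], y[mask], X[i])
    correct += (pred == y[i])

print(f"leave-one-out accuracy: {correct / len(y):.2f}")
```

With n=40 the accuracy estimate bounces around by several percentage points depending on the random draw, which is one reason small-n classification results like those above should be treated as preliminary until tested on an independent sample.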

  • July 15, 2015
  • 04:09 AM
  • 849 views

Can Tetris Reduce Intrusive Memories of a Trauma Film?

by The Neurocritic in The Neurocritic

For some inexplicable reason, you watched the torture gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so then you decide to play Tetris. But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al, 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al, 2009):

...Trauma flashbacks are sensory-perceptual, visuospatial mental images. Visuospatial cognitive tasks selectively compete for resources required to generate mental images. Thus, a visuospatial computer game (e.g. "Tetris") will interfere with flashbacks. Visuospatial tasks post-trauma, performed within the time window for memory consolidation [6 hrs], will reduce subsequent flashbacks.
We predicted that playing "Tetris" half an hour after viewing trauma would reduce flashback frequency over 1-week.

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film, during the 6 hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real-life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies instead on the process of reconsolidation: an act of remembering places the memory in a labile state once again, so it can be modified (James et al., 2015).

The procedure was as follows. 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):

...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road. This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation).
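As a minimal sketch of that Day 1 step (the group labels and function names below are my own, not the paper's), the random assignment of 52 participants to the two conditions amounts to a shuffle and a split:

```python
import random

def randomize(participant_ids, seed=0):
    """Randomly split participants into an experimental and a control group."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the sketch
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "reactivation_plus_tetris": ids[:half],  # experimental group
        "no_task_control": ids[half:],           # control group
    }

groups = randomize(range(52))
```

In the actual study the randomization would of course be done by the experimenters' own allocation procedure; this just illustrates the two-arm structure.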
The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (in Marathon mode). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT): eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (true/false written statements) and visual (yes/no for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT.

[modified from Fig. 1 (James et al., 2015). Asterisks indicate a significant difference between groups (**p < .001). Error bars represent +1 SEM.]

Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.

[modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow marks 24 hrs later, when the intervention starts working. The solid lines are the results of a generalized additive model. The size of each bubble represents the number of participants who reported the indicated number of intrusive memories on that day.]

But now, you might be asking yourself whether the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry.
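The bubble sizes in a frequency scatter plot of this kind simply count how many participants reported each intrusion frequency on each day. A hedged sketch with made-up diary data (the numbers below are illustrative only, not the paper's):

```python
from collections import Counter

# diary[participant] = intrusion counts for Day 0 through Day 7 (invented data)
diary = {
    "p1": [5, 4, 2, 1, 0, 0, 0, 0],
    "p2": [4, 4, 3, 1, 1, 0, 0, 0],
    "p3": [6, 5, 2, 2, 1, 1, 0, 0],
}

def bubble_sizes(diary, day):
    """For one day, map each reported intrusion count to the number of
    participants who reported it — i.e., the bubble sizes for that day."""
    return Counter(counts[day] for counts in diary.values())

# On Day 3 of this toy dataset, two participants reported 1 intrusion
# and one participant reported 2:
sizes = bubble_sizes(diary, 3)  # Counter({1: 2, 2: 1})
```

One point per (day, count) pair, scaled by these tallies, reproduces the "snowman" look; the smooth trend lines in the figure come from a generalized additive model fit on top.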
Experiment 2 tried to disentangle this by recruiting four groups of participants (n = 18 in each): the original two groups plus two new ones, Reactivation only and Tetris only. And the results from Experiment 2 demonstrated that both components were needed.
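Experiment 2's four cells form a 2×2 factorial design (Reactivation × Tetris). As a sketch of how one might tabulate mean intrusion counts per cell — with invented numbers chosen purely to mirror the reported pattern, not the actual data — only the cell combining both components comes out lowest:

```python
from statistics import mean

# Invented weekly intrusion totals per cell (abbreviated; the study had n = 18 per cell)
cells = {
    ("reactivation", "tetris"):       [1, 2, 1, 0, 2, 1],
    ("reactivation", "no_tetris"):    [4, 5, 3, 4, 6, 5],
    ("no_reactivation", "tetris"):    [5, 4, 6, 5, 4, 5],
    ("no_reactivation", "no_tetris"): [5, 6, 4, 5, 6, 5],
}

cell_means = {cell: mean(vals) for cell, vals in cells.items()}

# In this toy data, only the reactivation + Tetris cell shows a reduction,
# consistent with the claim that both components are needed.
lowest = min(cell_means, key=cell_means.get)
```

The signature of "both are needed" is an interaction: neither factor alone lowers the mean, but their combination does.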