The Neurocritic

318 posts · 391,875 views

Born in West Virginia in 1980, The Neurocritic embarked upon a road trip across America at the age of thirteen with his mother. She abandoned him when they reached San Francisco, and The Neurocritic descended into a spiral of drug abuse and prostitution. At fifteen, he was encouraged by his psychiatrist to start writing as a form of therapy.


  • November 16, 2015
  • 05:50 AM
  • 937 views

The Neuroscience of Social Media: An Unofficial History

by The Neurocritic in The Neurocritic

There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.

Meshi, Tamir, and Heekeren / Trends in Cognitive Sciences (2015)

More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later).

The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).

Fig. 2 (Meshi et al., 2015). Proposed Brain Networks Involved in Social Media Use. (i) mentalizing network: dorsomedial prefrontal cortex (DMPFC), temporoparietal junction (TPJ), anterior temporal lobe (ATL), inferior frontal gyrus (IFG), and the posterior cingulate cortex/precuneus (PCC); (ii) self-referential network: medial prefrontal cortex (MPFC) and PCC; and (iii) reward network: ventromedial prefrontal cortex (VMPFC), ventral striatum (VS), and ventral tegmental area (VTA).

The article's publication was announced on social media:

The emerging neuroscience of social media. New review in @TrendsCognSci: https://t.co/2JDIeCvJsT pic.twitter.com/2vwv827bdI
— CellPressNews (@CellPressNews) November 11, 2015

I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter. I proposed that someone should do a study to examine the neural correlates of Twitter use:

It was bound to happen. Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).

A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

(1) Celebrity Fluff
(2) Social Media Marketing Drivel
(3) Friends on your Following List

... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

The major predicted results are as follows:

Reading the Tweets of your close friends will engage a network of regions involved in self-referential processing of similar others, including the posterior superior temporal sulcus (STS) and adjacent temporo-parietal junction (TPJ), and the ventral medial prefrontal cortex (Mitchell et al., 2006).

Fig. 2A (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.

Figure from Paul Pietsch, Ph.D.

The frontal eye fields are i... Read more »

Meshi D, Tamir DI, Heekeren HR. (2015) The Emerging Neuroscience of Social Media. Trends in Cognitive Sciences. doi: 10.1016/j.tics.2015.09.004

  • November 11, 2015
  • 03:29 AM
  • 883 views

Obesity Is Not Like Being "Addicted to Food"

by The Neurocritic in The Neurocritic

Credit: Image courtesy of Aalto University

Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Bailer, 2015).

Fig. 1 (Meule, 2015). Number of scientific publications on food addiction (1990-2014). Web of Science search term “food addiction”.

Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

Fig. 2 (Meule, 2015). History of food addiction research.

Thus, the concept of food addiction predates the documented rise in obesity in the US, which really took off in the late 80s to late 90s (as shown below).1

Prevalence of Obesity in the United States, 1960-2012
  1960-62      12.8%
  1971-74      14.1%
  1976-80      14.5%
  1988-89      22.5%
  1999-2000    30.5%
  2007-08      33.8%
  2011-12      34.9%
Sources: Flegal et al. 1998, 2002, 2010; Ogden et al. 2014

One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2

Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):

...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right, so this must mean they're addicted to food? Well think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). Conversely, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in a mean weight loss of 26.1 kg, or 57.5 lbs), MOR returned to control values, while the unaltered D2 receptors stayed the same.

In the study, 16 obese women (mean BMI=40.4, age 42.8) had PET scans before and six months after undergoing the standard Gastric Bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI=22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions.
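A quick aside on what “availability” means here, since the term carries the argument: in receptor PET, availability is usually quantified as the nondisplaceable binding potential. Karlsson et al.'s exact outcome measure isn't reproduced in this excerpt, so take the standard consensus definition (Innis et al., 2007) as background rather than their specific pipeline:

    BP_{ND} = \frac{f_{ND} \cdot B_{avail}}{K_D}

where B_avail is the density of receptors available to bind the tracer, K_D is the tracer's equilibrium dissociation constant, and f_ND is the free fraction of tracer in nondisplaceable tissue. One consequence worth keeping in mind: lower MOR binding can reflect fewer receptors, greater occupancy by endogenous opioids, or both; the measure alone doesn't distinguish them.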
The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls.... Read more »

  • October 29, 2015
  • 06:54 AM
  • 834 views

Ophidianthropy: The Delusion of Being Transformed into a Snake

by The Neurocritic in The Neurocritic

Scene from Sssssss (1973).

“When Dr. Stoner needs a new research assistant for his herpetological research, he recruits David Blake from the local college. Oh, and he turns him into a snake for sh*ts and giggles.”
Movie Review by Jason Grey

Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding).

THERE ARE NO ACTUAL PICTURES OF SNAKES [an important note for snake phobics].

The first case of ophidianthropy was described by Kattimani et al. (2010):

A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside. ... She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. Her most recent treatment regime (escitalopram plus quetiapine) was also a failure because the snake delusion persisted. “Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...

Scene from Sssssss (1973).

Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):

A 23 year old, married, Hindu male, with past history of ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ... the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.

Wow.

He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved but others did not. “Long term follow up is not available.”

The discussion of this case is a bit... terrifying:

Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).

Kydd, O.U., Major, A., Minor, C (1991). A really neat, squeaky-clean isolation and characterization of two lycanthropogens from nearly subhuman populations of Homo sapiens. J. Ultratough Molec. Biochem. 101: 3521-3532. [this is obviously a fake citation]

Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

Oh my god. The paper by Davis et al. on the Psychopharmacology of Lycanthropy (and "endogenous lycanthropogens") was published in the April 1, 1992 issue of the Canadian Medical Association Journal. There is no such thing as lupinone and buldogone.

Fig. 1 (Davis et al., 1992... Read more »

  • October 26, 2015
  • 01:47 AM
  • 666 views

On the Long Way Down: The Neurophenomenology of Ketamine

by The Neurocritic in The Neurocritic

Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because, like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

Ketamine is a dissociative anesthetic and PCP derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically, it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).

Imaging Phenomenologically Distinct Effects of Ketamine

In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.

ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse. Basically, it's a good method for monitoring CBF over a number of minutes.

ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug.

Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect. The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.

Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.

Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on 10 and 12 data points, when outliers can drive phenomenally large effects.
One might like to say...

For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.

...but this clearly requires replication studies with placebo comparisons and larger subject groups.

Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

A recent review of seven placebo-controlled, double-blind, randomized clinical trials of ketamine and other NMDA antagonists concluded (Newport et al., 2015):

The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action. The fleeting nature of ketamine’s therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

The mysterious and paradoxical ways of ketamine continue...

So take it in, don't hold your breath... Read more »
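A methods aside, for the curious: the quantification step behind the ASL technique described above converts the tiny control-label signal difference into CBF. In the single-compartment model of the ASL consensus paper (Alsop et al., 2015), it looks roughly like this; Pollak et al.'s exact sequence and model aren't given in the excerpt, so treat this as illustrative background:

    CBF = \frac{6000 \cdot \lambda \cdot (SI_{control} - SI_{label}) \cdot e^{PLD/T_{1,blood}}}
               {2 \cdot \alpha \cdot T_{1,blood} \cdot SI_{PD} \cdot (1 - e^{-\tau/T_{1,blood}})}
          \quad [\text{mL}/100\,\text{g}/\text{min}]

where λ is the blood-brain partition coefficient, PLD the post-labeling delay, α the labeling efficiency, τ the label duration, and SI_PD a proton-density-weighted reference image. The control-label difference is on the order of 1% of the raw signal, which is why ASL suits slow processes like a drug infusion (minutes) rather than second-to-second events.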

  • September 27, 2015
  • 01:02 AM
  • 627 views

Neurohackers Gone Wild!

by The Neurocritic in The Neurocritic

Scene from Listening, a new neuro science fiction film by writer-director Khalil Sullins.

What are some of the goals of research in human neuroscience?

  • To explain how the mind works.
  • To unravel the mysteries of consciousness and free will.
  • To develop better treatments for mental and neurological illnesses.
  • To allow paralyzed individuals to walk again.

Brain decoding experiments that use fMRI or ECoG (direct recordings of the brain in epilepsy patients) to deduce what a person is looking at or saying or thinking have become increasingly popular as well. They're still quite limited in scope, but any study that can invoke “mind reading” or “brain-to-brain” scenarios will attract the press like moths to a flame....

For example, here's how NeuroNews site Brain Decoder covered the latest “brain-to-brain communication” stunt and the requisite sci fi predictions:

Scientists Connect 2 Brains to Play “20 Questions”

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind. . . . This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, [author Andrea Stocco] said.

Mind reading! Yay! But this isn't what happened. No thoughts were decoded in the making of this paper (Stocco et al., 2015).

Instead, stimulation of visual cortex did all the “talking.” Player One looked at an LED that indicated “yes” (13 Hz flashes) or “no” (12 Hz flashes). Steady-state visual evoked potentials (a type of EEG signal very common in BCI research) varied according to flicker rate, and this binary code was transmitted to a second computer, which triggered a magnetic pulse delivered to the visual cortex of Player Two if the answer was yes. The TMS pulse in turn elicited a phosphene (a brief visual percept) that indicated yes (no phosphene indicated a “no” answer).

Eventually, we see some backpedalling in the Brain Decoder article:

Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible."

Well, um, we already have the first half of the equation to some small degree (Naselaris et al. 2015 decoded mental images of remembered scenes)...

But the Big Prize goes to.... the decoders of covert speech, or inner thoughts!! (Martin et al. 2014)

Scientists develop a brain decoder that can hear your inner thoughts
Brain decoder can eavesdrop on your inner voice

Listening to Your Thoughts

The new film Listening starts off with a riff on this work and spins into a dark and dangerous place where no thought is private. Given the preponderance of “hearing” metaphors above, it's fitting that the title is Listening, where fiction (in this case near-future science fiction) is stranger than truth. The hazard of watching a movie that depicts your field of expertise is that you nitpick every little thing (like the scalp EEG sensors that record from individual neurons).
This impulse was exacerbated by a setting so near-future that it's practically present day.

From Marilyn Monroe Neurons to Carbon Nanotubes

But there were many things I did like about Listening.1 In particular, I enjoyed the way the plot developed in the second half of the film, especially in the last 30 minutes. On the lighter side was this amusing scene of a pompous professor lecturing on the real-life finding of Marilyn Monroe neurons (Quian Quiroga et al., 2005, 2009).

Caltech Professor: “For example, the subject is asked to think about Marilyn Monroe. My study suggests not only conscious control in the hippocampus and parahippocampal cortex, when the neuron....”

Conversation between two grad students in back of class: “Hey, you hear about the new bioengineering transfer?” ...

Caltech Professor: “Mr. Thorogood, perhaps you can enlighten us all with Ryan's gossip? Or tell us what else we can conclude from this study?”

Ryan the douchy hardware guy: “We can conclude that all neurosurgeons are in love with Marilyn Monroe.”

David the thoughtful software guy: “A single neuron has not only the ability to carry complex code and abstract form but is also able to override sensory input through cognitive effort. It suggests thought is a stronger reality than the world around us.”

Caltech Professor: “Unfortunately, I think you're both correct.”

Ryan and David are grad students with Big Plans. They've set up a garage lab (with stolen computer equipment) to work on their secret EEG decoding project. Ryan the douche lets Jordan the hot bioengineering transfer into their boys' club, much to David's dismay.

Ryan: “She's assigned to Professor Hamomoto's experiment with ATP-powered cell-binding nanotube devices.” [maybe these?]

So she gets to stay in the garage. For the demonstration, Ryan sports an EEG net that looks remarkably like the ones made by EGI (shown below on the right). Ryan reckons they'll put cell phone companies out of business with their mind reading invention, but David realizes they have a long way to go...... Read more »
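To make the “talking” concrete: the entire interpersonal channel in Stocco et al. reduces to classifying which flicker frequency dominates a stretch of occipital EEG. Here's a minimal sketch of that frequency-tagging step on synthetic data (my illustration, not the authors' code):

    import numpy as np

    def decode_ssvep(eeg, fs, freqs=(12.0, 13.0)):
        """Return whichever candidate flicker frequency has more spectral power."""
        spectrum = np.abs(np.fft.rfft(eeg))
        bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        power = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
        return freqs[int(np.argmax(power))]

    # Simulate 4 s of EEG containing a 13 Hz ("yes") steady-state response in noise
    fs = 250.0
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 13.0 * t) + 2.0 * rng.standard_normal(t.size)

    answer = "yes" if decode_ssvep(eeg, fs) == 13.0 else "no"
    print(answer)  # "yes" -> trigger a TMS pulse over visual cortex (phosphene)

Everything downstream of that binary decision is ordinary engineering, which is why “mind reading” oversells it: one bit per question is transmitted, not a thought.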

  • August 31, 2015
  • 04:31 AM
  • 748 views

Cats on Treadmills (and the plasticity of biological motion perception)

by The Neurocritic in The Neurocritic

Cats on a treadmill. From Treadmill Kittens.

It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately.

Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.

from BioMotion Lab1

Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical, or sensitive, period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG to point light displays of a walking man and to scrambled versions of the walking man (see demo).

from BioMotion Lab

Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).

The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).

The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results: (1) the neural circuitries associated with the processing of BM can specialize in late childhood or adulthood; that is, visual input initiates the functional maturation of the BM system as soon as it becomes available. Alternatively, the neural systems for BM might mature independently of vision: (2) either they are shaped cross-modally, or (3) they mature independent of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:

We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...

Footnote

1 Troje & Westhoff (2006): "The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."

Reference... Read more »

  • August 10, 2015
  • 07:35 AM
  • 910 views

Will machine learning create new diagnostic categories, or just refine the ones we already have?

by The Neurocritic in The Neurocritic

How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like?2

ILLUSTRATION: V. Altounian and C. Smith / SCIENCE

In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort.3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously. Yes, RDoC is ambitious (and has its share of naysayers). But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity.4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes: How do we classify and diagnose brain disorders (make that neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories.5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:

1. Criterion 1: diagnosticity. Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists.
Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability. Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability. Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability. Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be val... Read more »
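To put numbers on Criterion 1 (these counts are hypothetical, not from Woo & Wager): sensitivity and specificity fall directly out of a classifier's confusion matrix, and the follow-on quantity that matters for screening, positive predictive value, depends heavily on prevalence.

    # Hypothetical confusion matrix for a diagnostic classifier
    tp, fn = 45, 15   # patients: detected vs. missed
    fp, tn = 10, 80   # controls: false alarms vs. correctly cleared

    sensitivity = tp / (tp + fn)   # P(test+ | disorder)  = 0.75
    specificity = tn / (tn + fp)   # P(test- | healthy)  ~= 0.89

    # At 5% prevalence, most positive calls are false positives
    prev = 0.05
    ppv = sensitivity * prev / (sensitivity * prev + (1 - specificity) * (1 - prev))
    print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, PPV {ppv:.2f}")
    # -> PPV ~ 0.26: roughly 3 of every 4 positives are wrong in this scenario

The same arithmetic underlies Dorothy Bishop's point, cited in the companion post below, about why an “85% accurate” classifier isn't clinically useful for screening.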

Insel, T., & Cuthbert, B. (2015) Brain disorders? Precisely. Science, 348(6234), 499-500. DOI: 10.1126/science.aab2358  

  • August 1, 2015
  • 08:42 PM
  • 783 views

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning

by The Neurocritic in The Neurocritic

R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

You should really head over there right now to view it, because it's very impressive.

Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the “reality check” is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.

machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
fMRI signature = “guilt-selective anterior temporal functional connectivity changes” (seems a bit overly specific and esoteric, no?)
vulnerability to major depression = 25 participants with remitted depression vs. 21 never-depressed participants

The authors used a “standard leave-one-subject-out procedure in which the classification is cross-validated iteratively by using a model based on the sample after excluding one subject to independently predict group membership” but they did not test their fMRI signature in completely independent groups of participants.

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?

Modified from Fig 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs. controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.

The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than a past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here... In their next post, R2D3 will discuss overfitting:

Ideally, the [decision] tree should perform similarly on both known and unknown data. So this one is less than ideal. [NOTE: the one that's 90% in the top figure] These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3

Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr. Daniel Amen. The work of the investigators discussed here is in ... Read more »
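For the curious, the “standard leave-one-subject-out procedure” amounts to the following, sketched here with scikit-learn on synthetic data (an ordinary linear discriminant stands in for Sato et al.'s MLDA, and random numbers for their connectivity features):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(42)
    # 46 "subjects": 25 remitted (slightly shifted features) and 21 controls
    X = np.vstack([rng.normal(0.3, 1.0, size=(25, 20)),
                   rng.normal(0.0, 1.0, size=(21, 20))])
    y = np.array([1] * 25 + [0] * 21)

    # Each fold trains on 45 subjects and predicts the single held-out subject
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print(f"leave-one-subject-out accuracy: {scores.mean():.1%}")

The catch raised above: every fold still recycles the same 46 people. A classifier can score well under leave-one-out cross-validation and still fall apart on a new sample from a different scanner or population, which is why the missing independent test set matters (and why any feature selection performed outside the cross-validation loop would inflate the estimate further).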

  • July 15, 2015
  • 04:09 AM
  • 767 views

Can Tetris Reduce Intrusive Memories of a Trauma Film?

by The Neurocritic in The Neurocritic

For some inexplicable reason, you watched the torture gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so then you decide to play Tetris. But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al, 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al, 2009):

...Trauma flashbacks are sensory-perceptual, visuospatial mental images. Visuospatial cognitive tasks selectively compete for resources required to generate mental images. Thus, a visuospatial computer game (e.g. "Tetris") will interfere with flashbacks. Visuospatial tasks post-trauma, performed within the time window for memory consolidation [6 hrs], will reduce subsequent flashbacks. We predicted that playing "Tetris" half an hour after viewing trauma would reduce flashback frequency over 1-week.

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film experience, during the 6 hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies on the process of reconsolidation, when an act of remembering will place the memory in a labile state once again, so it can be modified (James et al., 2015).

The procedure was as follows: 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):

...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road. This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation). The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (the Marathon mode shown above). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, and then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT). Eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (T/F written statements) and visual (Y/N for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT (as shown below).

Modified from Fig. 1 (James et al., 2015). Asterisks indicate a significant difference between groups (**p < .001). Error bars represent +1 SEM.

Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.

Modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow is 24 hrs later (when the intervention starts working). The solid lines are the results of a generalized additive model. The size of the bubbles represents the number of participants who reported the indicated number of intrusive memories on that particular day.

But now, you might be asking yourself if the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry. Experiment 2 tried to disentangle this by recruiting four groups of participants (n=18 in each) — the original two groups plus two new ones: Reactivation only and Tetris only. And the results from Exp. 2 demonstrated that both were needed.... Read more »

  • June 28, 2015
  • 03:05 AM
  • 919 views

Who Will Pay for All the New DBS Implants?

by The Neurocritic in The Neurocritic

Recently, Science and Nature had news features on big BRAIN funding for the development of deep brain stimulation technologies. The ultimate aim of this research is to treat and correct malfunctioning neural circuits in psychiatric and neurological disorders. Both pieces raised ethical issues, focused on device manufacturers and potential military applications, respectively.

A different ethical concern, not mentioned in either article, is who will have access to these new devices, and who is going to pay the medical costs once they hit the market. DBS for movement disorders is a test case, because Medicare (U.S.) approved coverage for Parkinson's disease (PD) and essential tremor in 2003. Which is good, given that unilateral surgery costs about $50,000.

Willis et al. (2014) examined Medicare records for 657,000 PD patients and found striking racial disparities. The odds of receiving DBS in white PD patients were five times higher than for African Americans, and 1.8 times higher than for Asians. And living in a neighborhood with high socioeconomic status was associated with 1.4-fold higher odds of receiving DBS. Out-of-pocket costs for Medicare patients receiving DBS are over $2,000 per year, which is quite a lot of money for low-income senior citizens.

Aaron Saenz raised a similar issue regarding the cost of the DEKA prosthetic arm (aka "Luke"):

But if you're not a veteran, neither DARPA project may really help you much. The Luke Arm is slated to cost $100,000+.... That's well beyond the means of most amputees if they do not have the insurance coverage provided by the Veteran's Administration. ... As most amputees are not veterans, I think that the Luke Arm has a good chance of being priced out of a large market share.

The availability of qualified neurosurgeons, even in affluent areas, will be another problem once future indications are FDA-approved (or even trialed). The situation in one Canadian province (British Columbia, with a population of 4.6 million) is instructive. An article in the Vancouver Sun noted that in March 2013, only one neurosurgeon was qualified to perform DBS surgeries for Parkinson's disease (or for dystonia). This resulted in a three year waiting list. Imagine, all these eligible patients with Parkinson's have to endure their current condition (and worse) for years longer, instead of having a vastly improved quality of life.

Funding, doctors needed if brain stimulation surgery to expand in B.C.:

... “But here’s the problem: We already have a waiting list of almost three years, from the time family doctors first put in the referral to the DBS clinic. And I’m the only one in B.C. doing this. So we really aren’t able to do more than 40 cases a year,” [Dr. Christopher Honey] said.

. . .

...The health authority allocates funding of $1.1 million annually, which includes the cost of the $20,000 devices, and $14,000 for each battery replacement. On average, batteries need to be replaced every three years.

. . .

To reduce wait times, the budget would have to increase and a Honey clone would have to be trained and hired.

Back in the U.S., Rossi et al. (2014) called out Medicare for curbing medical progress:

Devices for DBS have been approved by the FDA for use in treating Parkinson disease, essential tremor, obsessive-compulsive disorder, and dystonia,2 but expanding DBS use to include new indications has proven difficult—specifically because of the high cost of DBS devices and generally because of disincentives for device manufacturers to sponsor studies when disease populations are small and the potential for a return on investment is not clear. In many of these cases, Medicare coverage will determine whether a study will proceed. ... Ultimately, uncertain Medicare coverage coupled with the lack of economic incentives for industry sponsorship could limit investigators’ freedom of inquiry and ability to conduct clinical trials for new uses of DBS therapy.

But the question remains, where is all this health care money supposed to come from?

The device manufacturers aren't off the hook, either, but BRAIN is trying to reel them in. NIH recently sponsored a two-day workshop, BRAIN Initiative Program for Industry Partnerships to Facilitate Early Access Neuromodulation and Recording Devices for Human Clinical Studies [agenda PDF]. The purpose was to:

  • Bring together stakeholders and interested parties to disseminate information on opportunities for research using latest-generation devices for CNS neuromodulation and interfacing with the brain in humans.
  • Describe the proposed NIH framework for facilitating and lowering the cost of new studies using these devices.
  • Discuss regulatory and intellectual property considerations.
  • Solicit recommendations for data coordination and access.

The Program Goals [PDF]:

...we hope to spur human research bridging the “valley of death” that has been a barrier to translating pre-clinical research into therapeutic outcomes. We expect the new framework will allow academic researchers to test innovative ideas for new therapies, or to address scientific unknowns regarding mechanisms of disease or device action, which will facilitate the creation of solid business cases by industry and venture capital for the larger clinical trials required to take these ideas to market.

To advance these goals, NIH is pursuing general agreements (Memoranda of Understanding, MOUs) with device manufacturers to set up a framework for this funding program. In the MOUs, we expect each company to specify the capabilities of their devices, along with information, support and any other concessions they are willing to provide to researchers.

In other words, it's a public/private partnership to advance the goal of having all depressed Americans implanted with the CyberNeuroTron WritBit device by 2035 (just kidding!!).

But seriously... before touting the impending clinical relevance of a study in rodents, basic scientists and bureaucrats alike should listen to patients with the current generation of DBS devices. Participants in the halted BROADEN Trial for refractory depression reported outcomes ranging from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

What do you do with a medical device that causes ... Read more »
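The Vancouver Sun figures quoted above permit a back-of-envelope check on why capacity is stuck at 40 cases a year. A rough sketch (my arithmetic; it deliberately ignores OR time, staffing, and follow-up costs, which the article doesn't itemize):

    # Figures from the Vancouver Sun article quoted above (CAD)
    budget = 1_100_000    # annual program funding, devices and batteries included
    device = 20_000       # cost per implanted DBS system
    battery = 14_000      # cost per battery replacement
    interval = 3          # years between battery replacements, on average
    new_cases = 40        # surgeries per year

    devices_cost = new_cases * device              # $800,000 on new implants
    remaining = budget - devices_cost              # $300,000 left for batteries
    replacements_per_year = remaining // battery   # ~21 replacements
    supported_patients = replacements_per_year * interval

    print(devices_cost, remaining, replacements_per_year, supported_patients)
    # After 40 new implants, the leftover budget sustains batteries for only
    # ~63 ongoing patients -- and the patient pool grows by 40 every year.

On those numbers the hardware budget alone is nearly saturated, which is consistent with Dr. Honey's point that expansion requires both more money and a second surgeon.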

  • June 21, 2015
  • 06:28 AM
  • 880 views

The Future of Depression Treatment

by The Neurocritic in The Neurocritic

2014

Jessica is depressed again. After six straight weeks of overtime, her boss blandly praised her teamwork at the product launch party. And the following week she was passed over for a promotion in favor of Jason, her junior co-worker. "It's always that way, I'll never get ahead..." She arrives at her therapist's office late, looking stressed, disheveled, and dejected. The same old feelings of worthlessness and despair prompted her to resume her medication and CBT routine.

"You deserve to be recognized for your work," said Dr. Harrison. "The things you're telling yourself right now are cognitive distortions: the black and white thinking, the overgeneralization, the self-blame, jumping to conclusions..."

"I guess so," muttered Jessica, looking down.

"And you need a vacation!"

. . .

A brilliant suggestion, Dr. Harrison. As we all know, taking time off to relax and recharge after a stressful time will do wonders for our mental health. And building up a reserve of happy memories to draw upon during darker times is a cornerstone of positive psychology.

Jessica and her husband Michael take a week-long vacation in Hawaii, creating new episodic memories that involve snorkeling, parasailing, luaus, and mai tais on the beach. Jessica ultimately decides to quit her job and sell jewelry on Etsy.

2015

Michael is depressed after losing his job. His self-esteem has plummeted, and he feels useless. But he's too proud to ask for help. "Depression is something that happens to other people (like my wife), but not to me." He grows increasingly angry and starts drinking too much.

Jessica finally convinces him to see Dr. Harrison's colleague. Dr. Roberts is a psychiatrist with a Ph.D. in neuroscience. She's adopted a translational approach and tries to incorporate the latest preclinical research into her practice. She's intrigued by the latest finding from Tonegawa's lab, which suggests that the reactivation of a happy memory is more effective in alleviating depression than experiencing a similar event in the present.

Recalling happier memories can reverse depression, said the MIT press release. So instead of telling Michael to take time off and travel and practice mindfulness and live in the present, she tells him to recall his fondest memory from last year's vacation in Hawaii.

It doesn't work.

Michael goes to see Dr. Harrison, who prescribes bupropion and venlafaxine. Four weeks later, he feels much better, and starts a popular website that repudiates positive psychology. Seligman and Zimbardo are secretly chagrined.

. . .

Happy Hippocampus

photo credit: S. Ramirez

Artificially reactivating positive [sexual] memories [in male mice] could offer an alternative to traditional antidepressants (make that: makes them struggle more when you hold them by the tail after 10 days of confinement).1

Not as upbeat as the press release, eh?

The findings ... offer a possible explanation for the success of psychotherapies in which depression patients are encouraged to recall pleasant experiences. They also suggest new ways to treat depression by manipulating the brain cells where memories are stored...

“Once you identify specific sites in the memory circuit which are not functioning well, or whose boosting will bring a beneficial consequence, there is a possibility of inventing new medical technology where the improvement will be targeted to the specific part of the circuit, rather than administering a drug and letting that drug function everywhere in the brain,” says Susumu Tonegawa, ... senior author of the paper.

Although this type of intervention is not yet possible in humans, “This type of analysis gives information as to where to target specific disorders,” Tonegawa adds.

Before considering what the mice might actually experience when their happy memory cells are activated with light, let's all marvel at what was accomplished here.

Ramirez et al. (2015) studied mice that were genetically engineered to allow blue light to activate a specific set of granule cells in the dentate gyrus subfield of the hippocampus. These neurons are critical for the formation of new memories and are considered “engram cells” that undergo physical changes and store discrete memories (Liu et al., 2015). When a cue reactivates the same set of neurons, the episodic memory is retrieved. In this study, the engram cells were part of a larger circuit that included the amygdala and the nucleus accumbens, regions important for processing emotion, motivation, and reward.

Ramirez, Liu, Tonegawa and colleagues have repeatedly demonstrated their masterful manipulation of mouse memories: activating fear memories, implanting false memories, and changing the valence of memories. These experiments are technically challenging and far outside my areas of expertise (greater detail in the Appendix below). In brief, the authors were able to label discrete sets of dentate gyrus cells while they were naturally activated during an interval of positive, neutral, or negative treatment. Then some groups of animals were stressed for 10 days, and others remained in their home cages. ... Read more »

Liu, X., Ramirez, S., Redondo, R., & Tonegawa, S. (2014) Identification and Manipulation of Memory Engram Cells. Cold Spring Harbor Symposia on Quantitative Biology, 79, 59-65. DOI: 10.1101/sqb.2014.79.024901  

Ramirez, S., Liu, X., MacDonald, C., Moffa, A., Zhou, J., Redondo, R., & Tonegawa, S. (2015) Activating positive memory engrams suppresses depression-like behaviour. Nature, 522(7556), 335-339. DOI: 10.1038/nature14514  

Timmins, L., & Lombard, M. (2005) When “Real” Seems Mediated: Inverse Presence. Presence: Teleoperators and Virtual Environments, 14(4), 492-500. DOI: 10.1162/105474605774785307  

  • June 7, 2015
  • 01:12 PM
  • 650 views

Use of Anti-Inflammatories Associated with Threefold Increase in Homicides

by The Neurocritic in The Neurocritic

Scene from Elephant, a fictional film by Gus Van Sant

Regular use of over-the-counter pain relievers like aspirin, ibuprofen, naproxen, and acetaminophen was associated with three times the risk of committing a homicide in a new Finnish study (Tiihonen et al., 2015). The association between NSAID use and murderous acts was far greater than the risk posed by antidepressants.

Clearly, drug companies are pushing dangerous, toxic chemicals and we should ban the substances that are causing school massacres — Advil and Aleve and Tylenol are evil!!

Wait..... what?

Tiihonen and colleagues wanted to test the hypothesis that antidepressant treatment is associated with an increased risk of committing a homicide. Because, you know, the Scientology-backed Citizens Commission on Human Rights of Colorado thinks so (and their blog is cited in the paper!!):

After a high-profile homicide case, there is often discussion in the media on whether or not the killing was caused or facilitated by a psychotropic medication. Antidepressants have especially been blamed by non-scientific organizations for a large number of senseless acts of violence, e.g., 13 school shootings in the last decade in the U.S. and Finland [1].

The authors reviewed a database of all homicides investigated by the police in Finland between 2003 and 2011. A total of 959 offenders were included in the analysis. Each offender was matched to 10 controls selected from the Population Information System. Then the authors checked purchases in the Finnish Prescription Register. A participant was considered a "user" if they had a current purchase in the system.1

The main drug classes examined were antidepressants, benzodiazepines, and antipsychotics. The primary outcome measure was risk of offending for current use vs. no use of those drugs (with significance set to p<0.016 to correct for multiple comparisons). Seven other drug classes were examined as secondary outcome measures (with α adjusted to .005): opioid analgesics, non-opioid analgesics (e.g., NSAIDs), antiepileptics, lithium, stimulants, meds for addictive disorders, and non-benzo anxiolytics.

Lo and behold, current use of antidepressants in the adult offender population was associated with a 31% greater risk of committing a homicide, but this did not reach significance (p=0.022). On the other hand, benzodiazepine use was associated with a 45% greater risk (p<.001), while antipsychotics were not associated with greater risk of offending (p=0.54).

Most dangerous of all were pain relievers. Current use of opioid analgesics (like Oxycontin and Vicodin) was associated with 92% greater risk. Non-opioid analgesics were even worse: individuals taking these meds were at 206% greater risk of offending — that's a threefold increase.2

Taken in the context of this surprising result, the anti-psych-med faction doth complain too much about antidepressants.

Furthermore, analysis of young offenders (25 yrs or less) revealed that none of the medications were associated with greater risk of committing a homicide (benzos and opioids were p=.07 and .04 respectively). To repeat: in Finland at least, there was no association between antidepressant use and the risk of becoming a school shooter.

What are we to make of the provocative NSAIDs? More study is needed:

The surprisingly high risk associated with opioid and non-opioid analgesics deserves further attention in the treatment of pain among individuals with criminal history.

Drug-related murders in oxycodone abusers don't come as a great surprise, but aspirin-related violence is hard to explain...3

Footnotes

1 Having a purchase doesn't mean the individual was actually taking the drug before/during the time of the offense, however.

2 RR = 3.06; 95% CI: 1.78-5.24, p<0.001 for Advil, Tylenol, and the like. And the population-adjusted odds ratios (OR) weren't substantially different, although this wasn't reported for NSAIDs:

The analysis based on case-control design showed an adjusted OR of 1.30 (95% CI: 0.97-1.75) as the risk of homicide for the current use of an antidepressant, 2.52 (95% CI: 1.90-3.35) for benzodiazepines, 0.62 (95% CI: 0.41-0.93) for antipsychotics, and 2.16 (95% CI: 1.41-3.30) for opioid analgesics.

3 P.S. Just to be clear here, correlation ≠ causation. Disregarding the anomalous nature of the finding in the first place, it could be that murderers have more headaches and muscle pain, so they take more anti-inflammatories (rather than ibuprofen "causing" violence). But if the anti-med faction uses these results to argue that "antidepressants cause school shootings" then explain how ibuprofen raises the risk threefold...

Reference

Tiihonen, J., Lehti, M., Aaltonen, M., Kivivuori, J., Kautiainen, H., Virta, L. J., Hoti, F., Tanskanen, A., & Korhonen, P. (2015). Psychotropic drugs and homicide: A prospective cohort study from Finland. World Psychiatry, 14(2), 245-247. DOI: 10.1002/wps.20220
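For the record, the threefold figure is a relative risk, with a confidence interval computed on the log scale. A minimal Python sketch of that arithmetic, using made-up exposure counts (the paper's raw cohort counts aren't reproduced here, so the numbers below are purely illustrative):

    import math

    # Hypothetical 2x2 counts, for illustration only (not from Tiihonen et al., 2015):
    a, n1 = 30, 100    # offenders / total among current non-opioid analgesic users
    b, n2 = 98, 1000   # offenders / total among non-users

    rr = (a / n1) / (b / n2)                  # relative risk
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # standard error of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)   # lower 95% bound
    hi = math.exp(math.log(rr) + 1.96 * se)   # upper 95% bound
    print(f"RR = {rr:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")

With these toy counts the point estimate happens to land near 3, but the width of the interval is what tells you how noisy a figure like "206% greater risk" can be.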

... Read more »

Tiihonen, J., Lehti, M., Aaltonen, M., Kivivuori, J., Kautiainen, H., Virta, L. J., Hoti, F., Tanskanen, A., & Korhonen, P. (2015) Psychotropic drugs and homicide: A prospective cohort study from Finland. World Psychiatry, 14(2), 245-247. DOI: 10.1002/wps.20220  

  • May 31, 2015
  • 09:33 PM
  • 769 views

Capgras for Cats and Canaries

by The Neurocritic in The Neurocritic

Capgras syndrome is the delusion that a familiar person has been replaced by a nearly identical duplicate. The imposter is usually a loved one or a person otherwise close to the patient.

Originally thought to be a manifestation of schizophrenia and other psychotic illnesses, the syndrome is most often seen in individuals with dementia (Josephs, 2007). It can also result from acquired damage to a secondary (dorsal) face recognition system important for connecting the received images with an affective tone (Ellis & Young, 1990).1 Because of this, the delusion crosses the border between psychiatry and neurology.

The porous etiology of Capgras syndrome raises the question of how phenomenologically similar delusional belief systems can be constructed from such different underlying neural malfunctions. This is not a problem for Freudian types, who promote psychodynamic explanations (e.g., psychic conflict, regression, etc.). For example, Koritar and Steiner (1988) maintain that “Capgras' Syndrome represents a nonspecific symptom of regression to an early developmental stage characterized by archaic modes of thought, resulting from a relative activation of primitive brain centres.”

The psychodynamic view was nicely dismissed by de Pauw (1994), who states:

While often ill-founded and convoluted, these formulations have, until recently, dominated many theoretical approaches to the phenomenon. Generally post hoc and teleological in nature, they postulate motives that are not introspectable and defence mechanisms that cannot be observed, measured or refuted. While psychosocial factors can and often do play a part in the development, content and course of the Capgras delusion in individual patients it remains to be proven that such factors are necessary and sufficient to account for delusional misidentification in general and the Capgras delusion in particular.

Canary Capgras

Although psychodynamic explanations were sometimes applied2 to cases of Capgras syndrome for animals,3 other clinicians report that the delusional misidentification of pets can be ameliorated by pharmacological treatment of the underlying psychotic disorder. Rösler et al. (2001) presented the case of “a socially isolated woman who felt her canary was replaced by a duplicate”:

Mrs. G., a 67-year-old woman, was admitted for the first time to a psychiatric hospital for late paraphrenia. ... She had been a widow for 11 years, had no children, and lived on her own with very few social contacts. Furthermore, she suffered from concerns that her canary was alone at home. She was delighted with the suggestion that the bird be transferred to the ward. However, during the first two days she repeatedly asserted that the canary in the cage was not her canary and reported that the bird looked exactly like her canary, but was in fact a duplicate. There were otherwise no misidentifications of persons or objects.

Earlier, Somerfield (1999) had reported a case of parrot Capgras, also in an elderly woman with a late-onset delusional disorder:

I would like to report an unusual case of a 91-year-old woman with a 10-year history of late paraphrenia (LP) and episodes of Capgras syndrome involving her parrot. She was a widow of 22 years, nulliparous, with profound deafness and a fiercely independent character. The psychotic symptoms were usually well controlled by haloperidol 0.5 mg orally. However, she was periodically non-compliant with medication, resulting in deterioration of her mental state, refusal of food and her barricading herself in her room to stop her parrot being stolen. At times she accused others of “swapping” the parrot and said the bird was an identical imposter. There was no misidentification of people or objects. Her symptoms would attenuate rapidly with reinstatement of haloperidol.

Both of these patients believed their beloved pet birds had been replaced by impostors, but neither of them misidentified any human beings. Clearly, this form of Capgras syndrome is different from what can happen after acquired damage to the affective face identification system (Ellis & Young, 1990). Is there an isolated case of sudden onset Capgras for animals that does not encompass person identification as well? I couldn't find one.

A Common Explanation?

Despite these differences, Ellis and Lewis (2001) suggested that “It seems parsimonious to seek a common explanation for the delusion, regardless of its aetiology.” I'm not so sure. If that's true, then haloperidol should effectively treat all instances of Capgras syndrome, including those that arise after a stroke. And there's evidence suggesting that antipsychotics would be ineffective in such patients.

Are there systematic differences in the symptoms shown by Capgras patients with varying etiologies? Josephs (2007) reviewed 47 patient records and found no major differences between the delusions in patients with neurodegenerative vs. non-neurodegenerative disorders. In all 47 cases, the delusion involved a spouse, child, or other relative. {There were no cases involving animals or objects.}

The factors that differed were age of onset (older in dementia patients) and other reported symptoms (e.g., visual hallucinations4 in all patients with Lewy body dementia, LBD). In this series, 81% of patients had a neurodegenerative disease, and only 4% had schizophrenia [perhaps the Capgras delusion was under-reported in the context of wide-ranging delusions?]. Other cases were due to methamphetamine abuse (4%) or sudden onset brain injury, e.g. hemorrhage (11%).

Interestingly, Josephs puts forth dopamine dysfunction as a unifying theme, in line with Ellis and Lewis's general suggestion of a common explanation. The pathology in dementia with Lewy bodies includes degeneration of neurons containing dopamine and acetylcholine. The cognitive/behavioral symptoms of LBD overlap with those seen in Parkinson's dementia, which also involves degeneration of dopaminergic neurons. But dopamine-blocking antipsychotics like haloperidol should not be used in treating LBD. So from a circuit perspective, using “dopamine dysregulation” as a parsimonious explanation isn't really an explanation. And this conception doesn't fit with the neuropsychological model (shown at the bottom of the page).

I'm not a fan of parsimony in matters of brain function and dysfunction. We don't know why one person thinks her canary has been replaced by an impostor, another thinks her husband has been replaced by a woman, while a third is convinced there are six copies of his wife floating around.5 I don't expect there to be a unifying explanation. The ... Read more »

Ellis, H., & Young, A. (1990) Accounting for delusional misidentifications. The British Journal of Psychiatry, 157(2), 239-248. DOI: 10.1192/bjp.157.2.239  

Rösler, A., Holder, G., & Seifritz, E. (2001) Canary Capgras. The Journal of Neuropsychiatry and Clinical Neurosciences, 13(3), 429-429. DOI: 10.1176/jnp.13.3.429  

  • May 16, 2015
  • 02:00 PM
  • 674 views

Shooting the Phantom Head (perceptual delusional bicephaly)

by The Neurocritic in The Neurocritic

I have two heads
Where's the man, he's late

--Throwing Muses, Devil's Roof

Medical journals are enlivened by case reports of bizarre and unusual syndromes.

Although somatic delusions are relatively common in schizophrenia, reports of hallucinations and delusions of bicephaly are rare. For a patient to attempt to remove a perceived second head by shooting and to survive the experience for more than two years may well be unique, and merits presentation.

--David Ames, British Journal of Psychiatry (1984)

In 1984, Dr. David Ames of Royal Melbourne Hospital published a truly bizarre case report about a 39-year-old man hospitalized with a self-inflicted gunshot wound through the left frontal lobe (Ames, 1984). The man was driven to this desperate act by the delusion of having a second head on his shoulder. The interloping head belonged to his wife's gynecologist.

In an even more macabre twist, his wife had died in a car accident two years earlier..... and the poor man had been driving at the time!

Surprisingly, the man survived a bullet through his skull (in true Phineas Gage fashion). After waking from surgery to remove the bullet fragments, the patient was interviewed:

He described a second head on his shoulder. He believed that the head belonged to his wife's gynaecologist, and described previously having felt that his wife was having an affair with this gynaecologist, prior to her death. He described being able to see the second head when he went to bed at night, and stated that it had been trying to dominate his normal head. He also stated that he was hearing voices, including the voice of his wife's gynaecologist from the second head, as well as the voices of Jesus and Abraham around him, conversing with each other. All the voices were confirming that he had two heads...

I'm two headed
one free one sticky

--Throwing Muses, Devil's Roof

“The other head kept trying to dominate my normal head, and I would not let it. It kept trying to say to me I would lose, and I said bull-shit ... and decided to shoot my other head off.”

A gun was not his first choice, however... he originally wanted to use an ax.

He stated that he fired six shots, the first at the second head, which he then decided was hanging by a thread, and then another one through the roof of his mouth. He then fired four more shots, one of which appeared to have gone through the roof of his mouth and three of which missed. He said that he felt good at that stage, and that the other head was not felt any more. Then he passed out. Prior to shooting himself, he had considered using an axe to remove the phantom head.

Not surprisingly, the patient was diagnosed with schizophrenia and given antipsychotics.

He was seen regularly in psychiatric out-patients following this operation and by March, stated that the second head was dead, that he was taking his chlorpromazine regularly, and that he had no worries.

[This was Australia, after all.]

Unfortunately, the man died two years later from a Streptococcus pneumoniae infection in his brain. Ames (1984) concluded his lively and bizarre case report by naming the singular syndrome “perceptual delusional bicephaly”:

This case illustrates an interesting phenomenon of perceptual delusional bicephaly; the delusion caused the patient to attempt to remove the second head by shooting. It is notable that following his head injury and treatment with chlorpromazine, the initial symptoms resolved, although he was left with the problems of social disinhibition and poor volition, typical of patients with frontal lobe injuries.

As far as I know, this specific delusion has not yet been depicted in a horror film (or in an episode of Perception or Black Box).

Reference

Ames, D. (1984). Self shooting of a phantom head. The British Journal of Psychiatry, 145(2), 193-194. DOI: 10.1192/bjp.145.2.193

... Read more »

Ames, D. (1984) Self shooting of a phantom head. The British Journal of Psychiatry, 145(2), 193-194. DOI: 10.1192/bjp.145.2.193  

  • May 5, 2015
  • 06:14 AM
  • 787 views

Tylenol Doesn't Really Blunt Your Emotions

by The Neurocritic in The Neurocritic

A new study has found that the pain reliever TYLENOL® (acetaminophen) not only dampens negative emotions, it blunts positive emotions too. Or does it?

Durso and colleagues (2015) reckoned that if acetaminophen can lessen the sting of psychological pain (DeWall et al., 2010; Randles et al., 2013) – which is doubtful in my view – then it might also lessen reactivity to positive stimuli. Evidence in favor of their hypothesis would support differential susceptibility, the notion that the same factors govern reactivity to positive and negative experiences.1 This outcome would also contradict the framework of acetaminophen as an all-purpose treatment for physical and psychological pain.

The Neurocritic is not keen on TYLENOL® as a remedy for existential dread or social rejection. In high doses acetaminophen isn't great for your liver, either. And a recent meta-analysis even showed that it's ineffective in treating lower back pain (Machado et al., 2015)...

But I'll try to be less negative than usual. The evidence presented in the main manuscript supported the authors' hypothesis. Participants who took acetaminophen rated positive and negative IAPS pictures as less emotionally arousing compared to a separate group of participants on placebo. The drug group also rated the unpleasant pictures less negatively and the pleasant pictures less positively. “In all, rather than being labeled as merely a pain reliever, acetaminophen might be better described as an all-purpose emotion reliever,” they concluded (Durso et al., 2015).

Appearing in the prestigious Journal of Psychological Acetaminophen Studies, the paper described two experiments on healthy undergraduates, both of which yielded a raft of null results.

Wait a minute..... what? How can that be?

The main manuscript reported the results collapsed across the two studies, and the Supplemental Material presented the results from each experiment separately. Why does this matter?

Eighty-two participants in Study 1 and 85 participants in Study 2 were recruited to participate in an experiment on “Tylenol and social cognition” in exchange for course credit. Our stopping rule of at least 80 participants per study was based on previously published research on acetaminophen (DeWall et al., 2010; Randles et al., 2013), in which 30 to 50 participants were recruited per condition (i.e., acetaminophen vs. a placebo). ... The analyses reported here for the combined studies are reported for each study separately in the Supplemental Material available online.

What this means is that the authors violated their stopping rule, and recruited twice the number of participants as originally planned. Like the other JPAS articles, this was a between-subjects design (unfortunately), and there were over 80 participants in each condition (instead of 30 to 50).

After running Experiment 1, the authors were faced with results like these:

As expected, however, a main effect of treatment (though not statistically significant in this study) was obtained, F(1,72) = 2.15, p = .147, ηp² = .029, as was the predicted interaction (although it was not statistically significant in this study), F(3.3, 240.3) = 1.15, p = .330, ηp² = .016. Contrast analyses indicated that participants taking acetaminophen were marginally significantly less emotionally aroused by extremely pleasant stimuli (M = 5.01, SD = 1.75) than were participants taking placebo (M = 5.65, SD = 1.55), t(72) = 1.67, p = .099. Similarly, participants receiving acetaminophen were less emotionally aroused by extremely unpleasant stimuli (M = 6.88, SD = 1.25) than were participants assigned the placebo condition (M = 7.23, SD = 1.84), although this difference was not statistically significant in this study, t(72) = 0.96, p = .341. Furthermore, participants taking acetaminophen tended to be less emotionally aroused by moderately pleasant stimuli (M = 2.91, SD = 1.64) than participants taking placebo (M = 3.49, SD = 1.89), t(72) = 1.44, p = .155, and participants taking acetaminophen also tended to be less emotionally aroused by moderately unpleasant stimuli (M = 4.68, SD = 1.42) than participants taking placebo (M = 5.25, SD = 2.02), t(72) = 1.42, p = .161, although these differences were not statistically significant in this study.

Wow, what a disappointment to get these results. Nothing looks statistically significant!

Let's look at Experiment 2:

...Contrast analyses revealed that participants taking acetaminophen tended to rate extremely unpleasant stimuli (M = -3.39, SD = 1.14) less negatively than participants receiving placebo (M = -3.74, SD = 0.74), t(77) = 1.60, p = .115, though this contrast was not itself statistically significant within this study. Participants taking acetaminophen also rated extremely pleasant stimuli (M = +2.51, SD = 1.07) significantly less positively than participants receiving placebo (M = +3.19, SD = 0.88), t(77) = 3.06, p = .003. Participants taking acetaminophen also tended to evaluate moderately pleasant stimuli (M = +1.15, SD = 0.91) less positively than participants receiving placebo (M = +1.42, SD = 0.89), t(77) = 1.30, p = .198, although this difference was not statistically significant in this study. Finally, participants taking acetaminophen tended to rate moderately unpleasant stimuli less negatively (M = -1.84, SD = 0.99) than participants taking placebo (M = -1.93, SD = 0.95), although this difference was not significant in this study, t(77) = 0.42, p = .678. [NOTE: "tended"? really?]

Evaluations of neutral stimuli surprisingly differed as a function of treatment, t(77) = 2.94, p = .004, such that participants taking acetaminophen evaluated these stimuli significantly less positively (M = -0.05, SD = 0.42) than did participants taking placebo (M = +0.22, SD = 0.38).

One of the arguments that acetaminophen affects ratings of emotional stimuli specifically (both positive and negative) is that it does not affect ratings for neutral stimuli. Yet it did here. So in the paragraphs above, extremely pleasant stimuli and neutral stimuli were both rated as less positive by the drug group, but ratings for extremely unpleasant, moderately pleasant, and moderately unpleasant pictures did not dif... Read more »
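The statistical point at stake is easy to demonstrate: with a modest true effect, two studies of roughly 40 per group will each tend to hover above p = .05, while pooling them doubles the sample and can push the very same effect over the threshold. A throwaway simulation sketch (effect size and sample sizes are invented for illustration, not taken from Durso et al.):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    d, n = 0.35, 40   # hypothetical true effect (SD units) and per-group n

    def study():
        """One between-subjects study: placebo vs. drug arousal ratings."""
        placebo = rng.normal(0.0, 1.0, n)
        drug = rng.normal(-d, 1.0, n)   # drug group rates pictures as less arousing
        return placebo, drug

    p1, d1 = study()
    p2, d2 = study()

    for label, a, b in [("Study 1", p1, d1),
                        ("Study 2", p2, d2),
                        ("Combined", np.concatenate([p1, p2]), np.concatenate([d1, d2]))]:
        t, p = stats.ttest_ind(a, b)
        print(f"{label}: t = {t:.2f}, p = {p:.3f}")

Run it a few times with different seeds: the combined analysis reaches significance far more often than either study alone, which is why reporting only the pooled results (and calling the separate ones "trends") matters.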

  • April 26, 2015
  • 11:53 PM
  • 779 views

FDA says no to marketing FDDNP for CTE

by The Neurocritic in The Neurocritic

The U.S. Food and Drug Administration recently admonished TauMark™, a brain diagnostics company, for advertising brain scans that can diagnose chronic traumatic encephalopathy (CTE), Alzheimer's disease, and other types of dementia. The Los Angeles Times reported that the FDA ordered UCLA researcher Dr. Gary Small and his colleague/business partner Dr. Jorge Barrio to remove misleading information from their company website (example shown below).

CTE has been in the news because the neurodegenerative condition has been linked to a rash of suicides in retired NFL players, based on post-mortem observations. And the TauMark™ group made headlines two years ago with a preliminary study claiming that CTE pathology is detectable in living players (Small et al., 2013).

The FDA letter stated:

The website suggests in a promotional context that FDDNP, an investigational new drug, is safe and effective for the purpose for which it is being investigated or otherwise promotes the drug. As a result, FDDNP is misbranded under section 502(f)(1) of the FD&C Act...

[18F]-FDDNP1 is a molecular imaging probe that crosses the blood brain barrier and binds to several kinds of abnormal proteins in the brain. When tagged with a radioactive tracer, FDDNP can be visualized using PET (positron emission tomography).

Despite what the name of the company implies, FDDNP is not an exclusive tau marker. FDDNP may bind to tau protein [although this is disputed],2 but it also binds to beta-amyloid, found in the clumpy plaques that form in the brains of those with Alzheimer's disease. Tau is found in neurofibrillary tangles, also characteristic of Alzheimer's pathology, and seen in other neurodegenerative tauopathies such as CTE.

The big deal with this and other radiotracers is that the pathological proteins can now be visualized in living human beings. Previously, an Alzheimer's diagnosis could only be given at autopsy, when the post-mortem brain tissue was processed to reveal plaques and tangles. So PET imaging is a BIG improvement. But still, a scan alone is not completely diagnostic, as noted by the Alzheimer's Association:

Even though amyloid plaques in the brain are a characteristic feature of Alzheimer's disease, their presence cannot be used to diagnose the disease. Many people have amyloid plaques in the brain but have no symptoms of cognitive decline or Alzheimer's disease. Because amyloid plaques cannot be used to diagnose Alzheimer's disease, amyloid imaging is not recommended for routine use in patients suspected of having Alzheimer's disease.

from TauMark's old website

There are currently three FDA-approved molecular tracers that bind to beta-amyloid: florbetapir, flutemetamol, and florbetaben (note that none of these is FDDNP). But the big selling point of TauMark™ is (of course) the tau marker part, which would also label tau in the brains of individuals with CTE and frontotemporal dementia, diseases not characterized by amyloid plaques. But how can you tell the difference, when FDDNP targets plaques and tangles (and prion proteins, for that matter)?

A new study by the UCLA team demonstrated that the distribution of FDDNP labeling in the brains of Alzheimer's patients differs from that seen in a selected group of former NFL players with cognitive complaints (Barrio et al., 2015). These retired athletes (and others with a history of multiple concussions) are at risk of developing the brain pathology known as chronic traumatic encephalopathy.

from Fig. 1 (Barrio et al., 2015). mTBI = mild traumatic brain injury, or concussion. T1 to T4 = progressive FDDNP PET signal patterns.

It's a well-established fact that brains with Alzheimer's disease, frontotemporal lobar degeneration, or Lou Gehrig's disease (for example) all show different patterns of neurodegeneration, so why not extend this to CTE? This may seem like a reasonable approach, but there are problems with some of the assumptions.

Perhaps the most deceptive claim is that “TauMark owns the exclusive license of the first and only brain measure of tau protein...” Au c... Read more »

Barrio, J., Small, G., Wong, K., Huang, S., Liu, J., Merrill, D., Giza, C., Fitzsimmons, R., Omalu, B., Bailes, J.... (2015) In vivo characterization of chronic traumatic encephalopathy using [F-18]FDDNP PET brain imaging. Proceedings of the National Academy of Sciences, 201409952. DOI: 10.1073/pnas.1409952112  

Zimmer, E., Leuzy, A., Gauthier, S., & Rosa-Neto, P. (2014) Developments in Tau PET Imaging. The Canadian Journal of Neurological Sciences, 41(05), 547-553. DOI: 10.1017/cjn.2014.15  

  • March 16, 2015
  • 05:47 AM
  • 718 views

Update on the BROADEN Trial of DBS for Treatment-Resistant Depression

by The Neurocritic in The Neurocritic

Website for the BROADEN™ study, which was terminated

In these days of irrational exuberance about neural circuit models, it's wise to remember the limitations of current deep brain stimulation (DBS) methods to treat psychiatric disorders. If you recall (from late 2013), Neurotech Business Report revealed that "St. Jude Medical failed a futility analysis of its BROADEN trial of DBS for treatment of depression..."

A recent comment on my old post about the BROADEN Trial1 had an even more pessimistic revelation: there was only a 17.2% chance of a successful study outcome:

Regarding Anonymous' comment on January 30, 2015 11:01 AM, as follows in part:

"Second, the information that it failed FDA approval or halted by the FDA is prima facie a blatant lie and demonstratively false. St Jude, the company, withdrew the trial."

Much of this confusion could be cleared up if the study sponsors practiced more transparency. A bit of research reveals that St. Jude's BROADEN study was discontinued after the results of a futility analysis predicted the probability of a successful study outcome to be no greater than 17.2%. (According to a letter from St. Jude)

Medtronic hasn't fared any better. Like the BROADEN study, Medtronic's VC DBS study was discontinued owing to inefficacy based on futility analysis. If the FDA allowed St. Jude to save face with its shareholders and withdraw the trial rather than have the FDA take official action, that's asserting semantics over substance.

If you would like to read more about the shortcomings of these major studies, please read (at least): Deep Brain Stimulation for Treatment-resistant Depression: Systematic Review of Clinical Outcomes. Takashi Morishita, Sarah M. Fayad, Masa-aki Higuchi, Kelsey A. Nestor, & Kelly D. Foote. Neurotherapeutics (The American Society for Experimental NeuroTherapeutics, Inc.), 2014. DOI 10.1007/s13311-014-0282-1

The Anonymous Commenter kindly linked to a review article (Morishita et al., 2014), which indeed stated:

A multicenter, prospective, randomized trial of SCC DBS for severe, medically refractory MDD (the BROADEN study), sponsored by St. Jude Medical, was recently discontinued after the results of a futility analysis (designed to test the probability of success of the study after 75 patients reached the 6-month postoperative follow-up) statistically predicted the probability of a successful study outcome to be no greater than 17.2 % (letter from St. Jude Medical Clinical Study Management).

I (and others) had been looking far and wide for an update on the BROADEN Trial, whether in ClinicalTrials.gov or published by the sponsors. Instead, the authors of an outside review article (who seem to be involved in DBS for movement disorders and not depression) had access to a letter from St. Jude Medical Clinical Studies.

Another large randomized controlled trial that targeted different brain structures (ventral capsule/ventral striatum, VC/VS) also failed a futility analysis (Morishita et al., 2014):

Despite the very encouraging outcomes reported in the open-label studies described above, a recent multicenter, prospective, randomized trial of VC/VS DBS for MDD sponsored by Medtronic failed to show significant improvement in the stimulation group compared with a sham stimulation group 16 weeks after implantation of the device. This study was discontinued owing to perceived futility, and while investigators remain hopeful that modifications of inclusion criteria and technique might ultimately result in demonstrable clinical benefit in some cohort of severely debilitated, medically refractory patients with MDD, no studies investigating the efficacy of VC/VS DBS for MDD are currently open.

In this case, however, the results were published (Dougherty et al., 2014):

There was no significant difference in response rates between the active (3 of 15 subjects; 20%) and control (2 of 14 subjects; 14.3%) treatment arms and no significant difference between change in Montgomery-Åsberg Depression Rating Scale scores as a continuous measure upon completion of the 16-week controlled phase of the trial. The response rates at 12, 18, and 24 months during the open-label continuation phase were 20%, 26.7%, and 23.3%, respectively.

Additional studies (with different stimulation parameters, better target localization, more stringent subject selection criteria) are needed, one would say. Self-reported outcomes from the patients themselves range from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

So where do we go now?? Here's a tip: all the forward-looking investors are into magnetic nanoparticles these days (see Magnetic 'rust' controls brain activity)...

Footnote

1 BROADEN is a tortured acronym for BROdmann Area 25 DEep brain Neuromodulation. The target was subgenual cingulate cortex (aka BA 25). The trial was either halted by the FDA or ... Read more »
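For anyone wondering what a "futility analysis" actually computes: one common version takes the interim effect estimate, assumes the observed trend is real, and simulates the remainder of the trial to estimate the chance of ending with a significant result. A bare-bones conditional-power sketch (all numbers invented; the BROADEN futility model itself was never published, and the real analysis may have used a Bayesian predictive probability instead):

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented interim picture, for illustration only:
    n_int, n_fin = 38, 75           # patients per arm at interim / at completion
    mean_act, mean_sham = 9.2, 8.0  # interim mean improvement in each arm
    sd, z_crit = 6.0, 1.96          # outcome SD; 5% criterion on the final z-statistic

    def prob_success(n_sims=50_000):
        """Chance of a significant final z-test, assuming the interim
        per-arm means hold for the remaining patients."""
        n_new = n_fin - n_int
        act = (mean_act * n_int + rng.normal(mean_act, sd, (n_sims, n_new)).sum(1)) / n_fin
        sham = (mean_sham * n_int + rng.normal(mean_sham, sd, (n_sims, n_new)).sum(1)) / n_fin
        z = (act - sham) / (sd * np.sqrt(2 / n_fin))
        return (z > z_crit).mean()

    print(f"predicted probability of success: {prob_success():.1%}")

When a number like this comes back as low as 17.2%, continuing to implant electrodes in patients' brains is hard to justify, which is presumably why the sponsor stopped.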

  • March 10, 2015
  • 12:26 AM
  • 727 views

Daylight Savings Time and "The Dress"

by The Neurocritic in The Neurocritic

I've lost count of how many times this has been rehashed, but I've put together a summary of that dress problem. If you're doubting the color vision or the monitor of the people who see it as blue/black or white/gold, please take a look. pic.twitter.com/6euNYw9xUa
— ぶどう茶 (@budoucha) February 27, 2015

Could one's chronotype (degree of "morningness" vs. "eveningness") be related to your membership on Team white/gold vs. Team blue/black?

Dreaded by night owls everywhere, Daylight Savings Time forces us to get up an hour earlier. Yes, [my time to blog and] I have been living under a rock, but this evil event and an old tweet by Vaughan Bell piqued my interest in melanopsin and intrinsically photosensitive retinal ganglion cells.

Totally speculative: wonder whether perceptual diffs reflect diffs in melanopsin. Blue sensitive, mediates brightness http://t.co/841bN6zvCs
— Vaughan Bell (@vaughanbell) February 28, 2015

I thought this was a brilliant idea: perhaps differences in melanopsin genes could contribute to differences in brightness perception. More about that in a moment.

{Everyone already knows about #thedress from Tumblr and Buzzfeed and Twitter obviously}

In the initial BuzzFeed poll, 75% saw it as white and gold, rather than the actual colors of blue and black. Facebook's more systematic research estimated this number was only 58% (and probably influenced by exposure to articles that used Photoshop). Facebook also reported differences by sex (males more b/b), age (youngsters more b/b), and interface (more b/b on computer vs. iPhone and Android).

Dr. Cedar Riener wrote two informative posts about why people might perceive the colors differently, but Dr. Bell was not satisfied with this and other explanations. Wired consulted two experts in color vision:

“Our visual system is supposed to throw away information about the illuminant and extract information about the actual reflectance,” says Jay Neitz, a neuroscientist at the University of Washington. “But I’ve studied individual differences in color vision for 30 years, and this is one of the biggest individual differences I’ve ever seen.”

and

“What’s happening here is your visual system is looking at this thing, and you’re trying to discount the chromatic bias of the daylight axis,” says Bevil Conway, a neuroscientist who studies color and vision at Wellesley College. “So people either discount the blue side, in which case they end up seeing white and gold, or discount the gold side, in which case they end up with blue and black.”

Finally, Dr. Conway threw out the chronotype card:

So when context varies, so will people’s visual perception. “Most people will see the blue on the white background as blue,” Conway says. “But on the black background some might see it as white.” He even speculated, perhaps jokingly, that the white-gold prejudice favors the idea of seeing the dress under strong daylight. “I bet night owls are more likely to see it as blue-black,” Conway says.

Melanopsin and Intrinsically Photosensitive Retinal Ganglion Cells

Rods and cones are the primary photoreceptors in the retina that convert light into electrical signals. The role of the third type of photoreceptor is very different. Intrinsically photosensitive retinal ganglion cells (ipRGCs) sense light without vision and:

...play a major role in synchronizing circadian rhythms to the 24-hour light/dark cycle [via direct projections to the suprachiasmatic nucleus]...

...contribute to the regulation of pupil size and other behavioral responses to ambient lighting conditions...

...contribute to photic regulation of, and acute photic suppression of, release of the hormone melatonin...

Recent research suggests that ipRGCs may play more of a role in visual perception than was originally believed. As Vaughan said, melanopsin (the photopigment in ipRGCs) is involved in brightness discrimination and is most sensitive to blue light. Brown et al. (2012) found that melanopsin knockout mice showed a change in spectral sensitivity that affected brightness discrimination; the KO mice needed higher green radiance to perform the task as well as the control mice.

The figure below shows the spectra of human cone cells most sensitive to Short (S), Medium (M), and Long (L) wavelengths.

Spectral sensitivities of human cone cells, S, M, and L types. X-axis is in nm.

The peak spectral sensitivity for melanopsin photoreceptors is in the blue range. How do you isolate the role of melanopsin in humans? Brown et al. (2012) used metamers, which are...

...light stimuli that appear indistinguishable to cones (and therefore have the same color and photopic luminance) despite having different spectral power distributions. ... to maximize the melanopic excitation achievable with the metamer approach, we aimed to circumvent rod-based responses by working at background light levels sufficiently bright to saturate rods.

They verifie... Read more »
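To make "melanopic excitation" concrete: each photoreceptor's drive is just the light's spectral power distribution weighted by that receptor's sensitivity curve and summed across wavelength. A toy sketch of that computation, using crude Gaussian stand-ins for the real photopigment nomograms (the peak wavelengths are approximately right; the curve shapes are not):

    import numpy as np

    wl = np.arange(380, 781)  # wavelength grid in nm

    def sensitivity(peak_nm, width=50.0):
        """Crude Gaussian stand-in for a photopigment sensitivity curve."""
        return np.exp(-0.5 * ((wl - peak_nm) / width) ** 2)

    # Approximate peak sensitivities for S/M/L cones and melanopsin
    receptors = {"S": sensitivity(430), "M": sensitivity(530),
                 "L": sensitivity(560), "melanopsin": sensitivity(480)}

    def excitation(spd):
        """Receptor drive = spectral power distribution weighted by sensitivity."""
        return {name: round(float(np.sum(spd * s)), 1) for name, s in receptors.items()}

    bluish = np.exp(-0.5 * ((wl - 470) / 60.0) ** 2)     # blue-shifted light
    yellowish = np.exp(-0.5 * ((wl - 580) / 60.0) ** 2)  # warmer light

    for label, spd in [("bluish", bluish), ("yellowish", yellowish)]:
        print(label, excitation(spd))

A proper metamer, as in Brown et al. (2012), would be a pair of spectra engineered so the three cone sums match exactly while the melanopsin sum differs; the toy lights above differ in both, but they show why a blue-shifted light drives melanopsin disproportionately.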

  • February 20, 2015
  • 02:48 AM
  • 687 views

One Brain Network for All Mental Illness

by The Neurocritic in The Neurocritic

What do schizophrenia, bipolar disorder, major depression, addiction, obsessive compulsive disorder, and anxiety have in common? A loss of gray matter in the dorsal anterior cingulate cortex (dACC) and bilateral anterior insula, according to a recent review of the structural neuroimaging literature (Goodkind et al., 2015). These two brain regions are important for executive functions, the top-down cognitive processes that allow us to maintain goals and flexibly alter our behavior in response to changing circumstances. The authors modestly concluded they had identified a “Common Neurobiological Substrate for Mental Illness.”

One problem with this view is that the specific pattern of deficits in executive functions, and their severity, differ across these diverse psychiatric disorders. For instance, students with anxiety perform worse than controls in verbal selection tasks, while those with depression actually perform better (Snyder et al., 2014). Another problem is that gray matter volume in the dorsolateral prefrontal cortex, a key region for working memory (a core impairment in schizophrenia and to a lesser extent, in major depression and non-psychotic bipolar disorder), was oddly unaffected in the meta-analysis.

The NIMH RDoC movement (Research Domain Criteria) aims to explain the biological basis of psychiatric symptoms that cut across traditional DSM diagnostic categories. But I think some of the recent research that uses this framework may carry the approach too far (Goodkind et al., 2015):

Our findings ... provide an organizing model that emphasizes the import of shared endophenotypes across psychopathology, which is not currently an explicit component of psychiatric nosology. This transdiagnostic perspective is consistent...with newer dimensional models such as the NIMH’s RDoC Project.

However, not even the Director of NIMH believes this is true:

"The idea that these disorders share some common brain architecture and that some functions could be abnormal across so many of them is intriguing," said Thomas Insel, MD... [BUT]

"I wouldn't have expected these results. I've been working under the assumption that we can use neuroimaging to help classify the different forms of mental illness," Insel said. "This makes it harder."

Anterior Cingulate and Anterior Insula and Everyone We Know

The dACC and anterior insula are ubiquitously activated1 in human neuroimaging studies (leading Micah Allen to dub it the ‘everything’ network), and comprise either a “salience network” or “task-set network” (or even two separate cingulo-opercular systems) in resting state functional connectivity studies. But the changes reported in the newly published work were structural in nature. They were based on a meta-analysis of 193 voxel-based morphometry (VBM) studies that quantified gray matter volume across the entire brain in psychiatric patient groups, and compared this to controls.

Goodkind et al. (2015) included a handy flow chart for how they selected the papers for their review. I could be wrong, but it looks like 34 papers were excluded because they found no differences between patients and controls. This would of course bias the results towards greater differences between patients and controls. And we don't know which of the six psychiatric diagnoses were included in the excluded batch. Was there an over-representation of null results in OCD? Anxiety? Depression?

What Does VBM Measure, Anyway?

Typically, VBM measures gray matter volume, which in the cortex is determined by surface area (which can vary due to differences in folding patterns) and by thickness (Kanai & Rees, 2011). These can be differentially related to some ability or characteristic. For example, Song et al. (2015) found that having a larger surface area in early visual cortex (V1 and V2) was correlated with better performance in a perceptual discrimination task, while larger cortical thickness was actually correlated with worse performance. Other investigators warn that volume really isn't the best measure of structural differences between patients and controls, and that cortical thickness is better (Ehrlich et al., 2012):

Cortical thickness is assumed to reflect the arrangement and density of neuronal and glial cells, synaptic spines, as well as passing axons. Postmortem studies in patients with schizophrenia showed reduced neuronal size and a decrease in interneuronal neuropil, dendritic trees, cortical afferents, and synaptic spines, while no reduction in the number of neurons or signs of gliosis could be demonstrated.

This leads us to the huge gap between dysfunction in cortical and subcortical microcircuits and gross changes in gray matter volume.

Psychiatric Disorders Are Circuit Disorders

This motto tells us that mental illnesses are disorders of neural circuits, in line with the funding priorities of NIMH and the BRAIN Initiative. But structural MRI studies tell us nothing about the types of neurons that are affected. Or how their size, shape, and synaptic connections might be altered. Basically, volume loss in dACC and anterior insula could be caused by any number of things, and by different mechanisms across the disorders under consideration. Goodkind et al. (2015) state:

Our connection of executive functioning to integrity of a well-established brain network that is perturbed across a broad range of psychiatric diagnoses helps ground a transdiagnostic understanding of mental illness in a context suggestive of common neural mechanisms for disease etiology and/or expression.

But actually, we might find a reduction in the density of von Economo neurons in the dACC of individuals with early-onset schizophrenia (... Read more »
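The exclusion worry is quantifiable: if studies with null results never enter a meta-analysis, the surviving effect estimates are inflated. A quick simulation sketch of that selection effect (the true effect, sample sizes, and study count are invented, not taken from Goodkind et al.):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    true_d, n = 0.2, 30   # hypothetical small true group difference (SD units), per-group n

    all_effects, published = [], []
    for _ in range(5000):                      # 5000 simulated VBM-style studies
        patients = rng.normal(true_d, 1.0, n)
        controls = rng.normal(0.0, 1.0, n)
        d_hat = patients.mean() - controls.mean()
        _, p = stats.ttest_ind(patients, controls)
        all_effects.append(d_hat)
        if p < 0.05:                           # only "significant" studies survive
            published.append(d_hat)

    print(f"mean effect, all studies:       {np.mean(all_effects):.2f}")
    print(f"mean effect, significant only:  {np.mean(published):.2f}")

With these toy numbers, the surviving studies roughly triple the apparent effect. The same logic applies whenever 34 null papers quietly drop out of a flow chart.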

Goodkind, M., Eickhoff, S., Oathes, D., Jiang, Y., Chang, A., Jones-Hagata, L., Ortega, B., Zaiko, Y., Roach, E., Korgaonkar, M.... (2015) Identification of a Common Neurobiological Substrate for Mental Illness. JAMA Psychiatry. DOI: 10.1001/jamapsychiatry.2014.2206  

  • January 26, 2015
  • 06:23 AM
  • 640 views

Is it necessary to use brain imaging to understand teen girls' sexual decision making?

by The Neurocritic in The Neurocritic

“It is feasible to recruit and retain a cohort of female participants to perform a functional magnetic resonance imaging [fMRI] task focused on making decisions about sex, on the basis of varying levels of hypothetical sexual risk, and to complete longitudinal prospective diaries following this task. Preliminary evidence suggests that risk level differentially impacts brain activity related to sexual decision making in these women [i.e., girls aged 14-15 yrs], which may be related to past and future sexual behaviors.”
-Hensel et al. (2015)

Can the brain activity of adolescents predict whether they are likely to make risky sexual decisions in the future? I think this is the goal of a new pilot study by researchers at Indiana University and the Kinsey Institute (Hensel et al., 2015). While I have no reason to doubt the good intentions of the project, certain aspects of it make me uncomfortable.

But first, I have a confession to make. I'm not an expert in adolescent sexual health like first author Dr. Devon Hensel. Nor do I know much about pediatrics, adolescent medicine, health risk behaviors, sexually transmitted diseases, or the epidemiology of risk, like senior author Dr. J. Dennis Fortenberry (who has over 300 publications on these topics). His papers include titles such as Time from first intercourse to first sexually transmitted infection diagnosis among adolescent women and Sexual learning, sexual experience, and healthy adolescent sex. Clearly, these are very important topics with serious personal and public health implications. But are fMRI studies of a potentially vulnerable population the best way to address these societal problems?

The study recruited 14 adolescent girls (mean age = 14.7 yrs) from health clinics in lower- to middle-income neighborhoods. Most of the participants (12 of the 14) were African-American, most did not drink or do drugs, and most had not yet engaged in sexual activity. However, the clinics served areas with “high rates of early childbearing and sexually transmitted infection” so the implication is that these young women are at greater risk of poor outcomes than those who live in different neighborhoods.

Detailed sexual histories were obtained from the girls upon enrollment (see below). They also kept a diary of sexual thoughts and behaviors for 30 days. Given the sensitive nature of the information revealed by minors, it's especially important to outline the informed consent procedures and the precautions taken to protect privacy. Yes, a parent or guardian gave their approval, and the girls completed informed consent documents that were approved by the local IRB. But I wanted to see more about this in the Methods. For example, did the parent or guardian have access to their daughters' answers and/or diaries, or was that private? This could have influenced the willingness of the girls to disclose potentially embarrassing behavior or “verboten” activities (prohibited by parental mores, church teachings, legal age of consent,1 etc.). I don't know, maybe the standard procedures are obvious to those within the field of sexual health behavior, but they weren't to me.

Turning to more familiar territory, the experimental design for the neuroimaging study involved presentation of four different types of stimuli: (1) faces of adolescent males; (2) alcoholic beverages; (3) restaurant food; (4) household items (e.g., frying pan). My made-up examples of the stimuli are shown below.

Each picture was presented with information that indicated the item's risk level (“high” or “low”):

Adolescent male faces: number of previous sexual partners and typical condom use (yes/no)
Alcoholic beverages: number of alcohol units and whether there was a designated driver (yes/no)
Food: calorie content and whether the restaurant serving the food had been cited in the past year for health code violations (yes/no)
Household items: whether the object could be returned to the store (yes/no)

For each picture, participants rated how likely they were to: (1) have sex with the male, (2) drink the beverage, (3) eat the food, or (4) purchase the product (1 = very unlikely to 4 = very likely). There were 35 exemplars of each category, and each stimulus was presented in both “high” and “low” risk contexts. So oddly, the pizza was 100 calories and from a clean restaurant on one trial, compared to 1,000 calories and from a roach-infested dump on another trial.

The faces task was adapted from a study in adult women (Rupp et al., 2009) where the participants gave a mean likelihood rating of 2.45 for sex with low risk men vs. 1.41 for high risk men (significantly less likely for the latter). The teen girls showed the opposite result: 2.85 for low risk teen boys vs. 3.85 for high risk teen boys (significantly more likely) — the “bad boy” effect?

But the actual values were quite confusing. At one point the authors say they omitted the alcohol condition: “The present study focused on the legal behaviors (e.g., sexual behavior, buying item, and eating food) in which adolescents could participate.” But in the Fig. 1 legend, they say the opposite (that the alcohol condition was included):

Panel (A) provides the average likelihood of young women's endorsing low- and high-risk decisions in the boy, alcohol, food, and household item (control) stimulus categories.

Then they say that the low-risk male faces were rated as the most unlikely (i.e., least preferred) of all stimuli. But Fig. 1 itself shows that the low-risk food stimuli were rated as the most unlikely...

Regardless of the precise ratings, the young women were more drawn to all stimuli when they were in the high risk condition. The authors tried to make a case for more "risky" sexual choices among participants with higher levels of overt or covert sexual reporting, but the numbers were either impossibly low (for behavior) or thought-crimes only (for dreams/fantasy). So it's really hard to see how brain activity of any sort could be diagnostic of actual behavior at this point in their lives.

And the neuroimaging results were confusing as well. First, the less desirable low-risk stimuli elicited greater responses in cognitive and emotional control regions:

Neural activity in a cognitive-affective network, including prefrontal and anterior cingulate (ACC) regions, was significantly greater during low-risk decisions.

But then, we see that the more desirable high-risk sexual stimuli elicited greater responses in cognitive/emotional control regions:

Compared with other decisions, high-risk sexual decisions elicited greater activity in the anterior cingulate, and low-risk sexual decision elicited greater activity in regions of the visual cortex. ... Read more »
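To keep the design straight while reading the results, it helps to lay out the trial structure explicitly. A small sketch reconstructing the 4-category by 2-risk-context design as described above (the exemplar IDs and shuffling are my own placeholders, not the authors' actual stimulus list):

    import random

    random.seed(0)

    # The four stimulus categories and their stated risk cues:
    risk_cues = {
        "male face": "previous partners + condom use (yes/no)",
        "alcohol": "alcohol units + designated driver (yes/no)",
        "food": "calories + health-code citation (yes/no)",
        "household item": "returnable to store (yes/no)",
    }

    # 35 exemplars per category, each shown in both risk contexts: 4 * 35 * 2 = 280 trials
    trials = [(category, exemplar, risk)
              for category in risk_cues
              for exemplar in range(1, 36)
              for risk in ("low", "high")]
    random.shuffle(trials)

    print(len(trials), "trials, e.g.,", trials[0])  # each answered on a 1-4 likelihood scale

Laid out this way, the oddity the post mentions is visible in the structure itself: the very same pizza exemplar appears once as low risk and once as high risk.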
