Neuroskeptic, Neuroskeptic

715 posts · 592,979 views

Neuroskeptic
440 posts



  • May 15, 2010
  • 05:40 PM
  • 1,080 views

Do It Like You Dopamine It

by Neuroskeptic in Neuroskeptic

Neuroskeptic readers will know that I'm a big fan of theories. Rather than just poking around (or scanning) the brain under different conditions and seeing what happens, it's always better to have a testable hypothesis.

I just found a 2007 paper by Israeli computational neuroscientists Niv et al that puts forward a very interesting theory about dopamine. Dopamine is a neurotransmitter, and dopamine cells are known to fire in phasic bursts - short volleys of spikes over millisecond timescales - in response to something which is either pleasurable in itself, or something that you've learned is associated with pleasure. Dopamine is therefore thought to be involved in learning what to do in order to get pleasurable rewards.

But baseline, tonic dopamine levels vary over longer periods as well. The function of this tonic dopamine firing, and its relationship, if any, to phasic dopamine signalling, is less clear. Niv et al's idea is that the tonic dopamine level represents the brain's estimate of the average availability of rewards in the environment, and that it therefore controls how "vigorously" we should do stuff.

A high reward availability means that, in general, there's lots of stuff going on, lots of potential gains to be made. So if you're not out there getting some reward, you're missing out. In economic terms, the opportunity cost of not acting, or acting slowly, is high - so you need to hurry up. On the other hand, if there are only minor rewards available, you might as well take things nice and slow, to conserve your energy. Niv et al present a simple mathematical model in which a hypothetical rat must decide how often to press a lever in order to get food, and show that it accounts for the data from animal learning experiments.

The distinction between phasic dopamine (a specific reward) and tonic dopamine (overall reward availability) is a bit like the distinction between fear and anxiety. Fear is what you feel when something scary, i.e. harmful, is right there in front of you. Anxiety is the sense that something harmful could be round the next corner.

This theory accounts for the fact that if you give someone a drug that increases dopamine levels, such as amphetamine, they become hyperactive - they do more stuff, faster, or at least try to. That's why they call it speed. This happens to animals too. Yet this hyperactivity starts almost immediately, which means that it can't be a product of learning.

It also rings true in human terms. The feeling that everything's incredibly important, and that everyday tasks are really exciting, is one of the main effects of amphetamine. Every speed addict will have a story about the time they stayed up all night cleaning every inch of their house or organizing their wardrobe. This can easily develop into the compulsive, pointless repetition of the same task over and over. People with bipolar disorder often report the same kind of thing during (hypo)mania.

What controls tonic dopamine levels? A really brilliantly elegant answer would be: phasic dopamine. Maybe every time phasic dopamine levels spike in response to a reward (or something which you've learned to associate with a reward), some of the dopamine gets left over. If there's lots of phasic dopamine firing, which suggests that the availability of rewards is high, tonic dopamine levels rise.

Unfortunately, it's probably not that simple: signals from different parts of the brain seem to alter tonic and phasic dopamine firing largely independently, and it would also mean that tonic dopamine only rises after a good few rewards, not pre-emptively, which seems unlikely. The truth is, we don't know what sets the dopamine tone, and we don't really know what it does; but Niv et al's account is the most convincing I've come across...

Niv Y, Daw ND, Joel D, & Dayan P (2007). Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology, 191 (3), 507-20 PMID: 17031711... Read more »
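To make the opportunity-cost idea concrete, here is a minimal sketch of the trade-off Niv et al formalise. The cost function and every number below are illustrative assumptions, not values from the paper: acting faster costs more energy, while acting slower forgoes more of the average reward.

```python
import numpy as np

def optimal_latency(avg_reward_rate, vigor_cost=1.0):
    """Pick the response latency tau that minimises
    vigor_cost / tau  +  avg_reward_rate * tau
    (energetic cost of acting fast + opportunity cost of the time spent).
    Setting the derivative to zero gives tau* = sqrt(vigor_cost / avg_reward_rate).
    """
    return np.sqrt(vigor_cost / avg_reward_rate)

# Hypothetical average reward rates (rewards per second):
for rate in (0.1, 0.5, 2.0):
    print(f"average reward rate {rate:.1f}/s -> respond every {optimal_latency(rate):.2f} s")
```

The richer the environment, the shorter the optimal latency - which is the paper's proposed role for tonic dopamine: a running estimate of the average reward rate that scales response vigor.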

  • October 20, 2013
  • 06:18 AM
  • 1,080 views

The Colorful Case of the Philosophical Zombie?

by Neuroskeptic in Neuroskeptic_Discover

The philosophical zombie, or p-zombie, is a hypothetical creature which is indistinguishable from a normal human, except that it has no conscious experience. Whether a p-zombie could exist, and whether it even makes sense to ask that question, are popular dinner-table topics of conversation amongst philosophers of mind. A new case report from Swiss neurologists […] The post The Colorful Case of the Philosophical Zombie? appeared first on Neuroskeptic.... Read more »

  • August 17, 2009
  • 10:09 AM
  • 1,078 views

Schizophrenia: The Mystery of the Missing Genes

by Neuroskeptic in Neuroskeptic

It's a cliché, but it's true - "schizophrenia genes" are the Holy Grail of modern psychiatry.

Were they to be discovered, such genes would provide clues towards a better understanding of the biology of the disease, and that could lead directly to the development of better medications. It might also allow "genetic counselling" for parents concerned about their children's risk of schizophrenia.

Perhaps most importantly for psychiatrists, the definitive identification of genes for a mental illness would provide cast-iron proof that psychiatric disorders are "real diseases", and that biological psychiatry is a branch of medicine like any other. Schizophrenia, generally thought of as the most purely "biological" of all mental disorders, is the best bet.

With this in mind, let's look at three articles (1,2,3) published in Nature last month to much excited fanfare along the lines of 'Schizophrenia genes discovered!' All three were based on genome-wide association studies (GWAS). In a GWAS, you examine a huge number of genetic variants in the hope that some of them are associated with the disease or trait you're interested in. Several hundred thousand variants per study is standard at the moment. This is the genetic equivalent of trying to find the person responsible for a crime by fingerprinting everyone in town.

The Nature papers were based on three separate large GWAS projects - the SGENE-plus, the MGS, and the ISC. In total, there were over 8,000 schizophrenia patients and 19,000 healthy controls in these studies - enormous samples by the standards of human genetics research, and large enough that if there were any common genetic variants with even a modest effect on schizophrenia risk, they would probably have found them.

What did they find? On the face of it, not much. The MGS(1) "did not produce genome-wide significant findings...power was adequate in the European-ancestry sample to detect very common risk alleles (30–60% frequency) with genotypic relative risks of approximately 1.3 ...The results indicate that there are few or no single common loci with such large effects on risk." In the SGENE-plus(2), likewise, "None of the markers gave P values smaller than our genome-wide significance threshold".

The ISC study(3) did find one significantly associated variant in the Major Histocompatibility Complex (MHC) region on chromosome 6. The MHC is known to be involved in immune function. When the data from all three studies were pooled together, several variants in the same region were also found to be significantly associated with schizophrenia.

Somewhat confusingly, all three papers did this pooling, although they each did it in slightly different ways - the only area in which all three analyses found a result was the MHC region. The SGENE team's analysis, which was larger, also implicated two other, unrelated variants, which were not found in the other two papers.

To summarize, three very large studies found just one "schizophrenia gene" even after pooling their data. The variant, or possibly cluster of related ones, is presumably involved in the immune system. Although the authors of the Nature papers made much of this finding, the main news here is that there is at most one common variant which raises the relative risk of schizophrenia by even just 20%. Given that the baseline risk of schizophrenia is about 1%, there is at most one common gene which raises your risk to more than 1.2%. That's it.

So, what does this mean? There are three possibilities.

First, it could be that schizophrenia genes are not "common". This possibility is getting a lot of attention at the moment, thanks to a report from a few months back, Walsh et al, suggesting that some cases of schizophrenia are caused by just one rare, high-impact mutation, but a different mutation in each case. In other words, each case of schizophrenia could be genetically almost unique. GWAS studies would be unable to detect such effects.

Second, there could be lots of common variants, each with an effect on risk so tiny that it wasn't found even in these three large projects. The only way to identify them would be to do even bigger studies. The ISC team's paper claims that this is true, on the basis of this graph: they took all of the variants which were more common in schizophrenics than in controls, even if they were only slightly more common, and totalled up the number of "slight risk" variants each person has.

The graph shows that these "slight risk" markers were more common in people with schizophrenia from two entirely separate studies, and are also more common in people with bipolar disorder, but were not associated with five medical illnesses like diabetes. This is an interesting result, but these variants must have such a tiny effect on risk that finding them would involve spending an awful lot of time (and money) for questionable benefit.

The third and final possibility is that "schizophrenia" is just less genetic than most psychiatrists think, because the true causes of the disorder are not genetic, and/or because "schizophrenia" is an umbrella term for many different diseases with different causes. This possibility is not talked about much in respectable circles, but if genetics doesn't start giving solid results soon, it may be.

Purcell, S., et al. (2009). Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. DOI: 10.1038/nature08185

Shi, J., et al. (2009). Common variants on chromosome 6p22.1 are associated with schizophrenia. Nature. DOI: 10.1038/nature08192 ... Read more »
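The ISC's "slight risk" analysis is essentially a polygenic score: for each person, sum their risk-allele counts weighted by the tiny effect sizes estimated in a discovery sample, then ask whether that sum is higher in an independent set of cases than in controls. Here is a minimal sketch with made-up genotypes and weights - nothing below comes from the actual ISC data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_variants = 500, 10_000

# Hypothetical genotypes: 0, 1 or 2 copies of each putative risk allele.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Hypothetical per-variant weights (log odds ratios) from a discovery GWAS;
# each one is tiny, which is the whole point of the polygenic argument.
weights = rng.normal(0.0, 0.01, size=n_variants)

# The polygenic score is just the weighted sum of risk alleles per person.
scores = genotypes @ weights

# In the real analysis you would compare this score's distribution between
# independent cases and controls (and in bipolar disorder, diabetes, etc.).
print(f"score mean {scores.mean():.2f}, sd {scores.std():.2f}")
```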

  • June 14, 2010
  • 06:39 AM
  • 1,077 views

The Face of a Mouse in Pain

by Neuroskeptic in Neuroskeptic

Have you ever wanted to know whether a mouse is in pain? Of course you have. And now you can, thanks to Langford et al's paper Coding of facial expressions of pain in the laboratory mouse.

It turns out that mice, just like people, display a distinctive "Ouch!" facial expression when they're suffering acute pain. It consists of narrowing of the eyes, bulging nose and cheeks, ears pulled back, and whiskers either pulled back or forwards.

With the help of a high-definition video camera and a little training, you can reliably and accurately tell how much pain a mouse is feeling. It works for most kinds of mouse pain, although it's not seen in either extremely brief or very long-term pain.

Langford et al tried it out on mice with a certain genetic mutation, which causes severe migraines in humans. These mice displayed the pain face even in the absence of external painful stimuli, showing that they were suffering internally. A migraine drug was able to stop the pain.

Finally, lesions to a part of the brain called the anterior insula stopped mice from expressing their pain. This is exactly what happens in people as well, suggesting that our displays of suffering are an evolutionarily ancient mechanism. Of course this kind of study can't prove that animals consciously feel pain in the same way that we do, but I see no reason to doubt it: we feel pain as a result of neural activity, and mammals have exactly the same brain systems.

Langford, D., Bailey, A., Chanda, M., Clarke, S., Drummond, T., Echols, S., Glick, S., Ingrao, J., Klassen-Ross, T., LaCroix-Fralish, M., Matsumiya, L., Sorge, R., Sotocinal, S., Tabaka, J., Wong, D., van den Maagdenberg, A., Ferrari, M., Craig, K., & Mogil, J. (2010). Coding of facial expressions of pain in the laboratory mouse. Nature Methods, 7 (6), 447-449 DOI: 10.1038/nmeth.1455 ... Read more »
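Scoring schemes like this usually work by coding each facial feature on a small ordinal scale and averaging across features. A minimal sketch, with feature names taken from the description above but a 0-2 coding that is assumed for illustration rather than quoted from the paper:

```python
def grimace_score(frame_codes):
    """Average the per-feature codes for one video frame
    (0 = not present, 1 = moderate, 2 = obvious)."""
    return sum(frame_codes.values()) / len(frame_codes)

# Hypothetical codes for a single frame of a mouse in acute pain.
frame = {
    "eye_narrowing": 2,
    "nose_bulge": 1,
    "cheek_bulge": 1,
    "ears_pulled_back": 2,
    "whisker_change": 1,
}
print(f"grimace score: {grimace_score(frame):.2f} out of 2")
```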


  • October 26, 2009
  • 04:24 PM
  • 1,075 views

Barack Obama Boosts Testosterone

by Neuroskeptic in Neuroskeptic

But only if you voted for him, and only if you're a man. That's according to a PLoS One paper called Dominance, Politics, and Physiology.

It's already known that in males, winning competitions - achieving "dominance" - causes a rapid rise in testosterone release, whilst losing does the opposite. That's true in humans, as well as in other mammals. The authors wondered whether the same thing happens when men "win" vicariously - i.e. when someone we identify with triumphs.

What better way of testing this than the U.S. Presidential Election? The authors took 163 American voters, and got them to provide saliva samples before, during and after the results came in on the night of the 4th November. Here's what happened.

In Obama supporters (the blue line, natch), salivary testosterone levels stayed flat throughout the crucial hours. But supporters of John McCain or Libertarian candidate Bob Barr suffered a testosterone crash after Obama's victory became apparent. That was only true in men, though; in women, there was no change.

Heh. Of course, we hardly needed biology to tell us that people often identify strongly with their preferred political parties, and the fact that social events cause hormonal changes shouldn't surprise anyone - the brain controls the secretion of most hormones.

The gender difference is interesting, though. Does this mean that men identify more closely with politicians? Or maybe only with male ones - what would have happened if Hillary had won... or Palin? It could be that the testosterone surge accompanying success is strictly a man thing, although it's been shown to occur in women in some studies, but not consistently.

Finally, I should mention that this paper contains some excellent quotes, such as "...Robert Barr, who arguably did not have a chance of winning...", "In retrospective reports of their affective state upon the announcement of Obama as the president-elect, McCain and Barr voters felt significantly more unhappy" and my favourite, "men who voted for John McCain or Bob Barr (losers)". That last one may be taken slightly out of context.

Stanton, S., Beehner, J., Saini, E., Kuhn, C., & LaBar, K. (2009). Dominance, Politics, and Physiology: Voters' Testosterone Changes on the Night of the 2008 United States Presidential Election. PLoS ONE, 4 (10) DOI: 10.1371/journal.pone.0007543 ... Read more »
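The analysis behind a result like this is basically a change score (post minus pre) compared across groups. A toy sketch with entirely made-up hormone values, just to show the shape of the calculation:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical salivary testosterone (pg/mL) before and after the race was called.
voters = [
    {"voted": "Obama",  "sex": "M", "pre": 95.0, "post": 96.0},
    {"voted": "McCain", "sex": "M", "pre": 98.0, "post": 84.0},
    {"voted": "Barr",   "sex": "M", "pre": 92.0, "post": 80.0},
    {"voted": "Obama",  "sex": "F", "pre": 31.0, "post": 31.5},
    {"voted": "McCain", "sex": "F", "pre": 30.0, "post": 29.8},
]

change = defaultdict(list)
for v in voters:
    change[(v["voted"], v["sex"])].append(100 * (v["post"] - v["pre"]) / v["pre"])

for (candidate, sex), vals in sorted(change.items()):
    print(f"{candidate:6s} {sex}: mean change {mean(vals):+.1f}%")
```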

  • December 14, 2009
  • 09:08 AM
  • 1,074 views

In the Brain, Acidity Means Anxiety

by Neuroskeptic in Neuroskeptic

According to Mormon author and fruit grower "Dr" Robert O. Young, pretty much all diseases are caused by our bodies being too acidic. By adopting an "alkaline lifestyle" to raise your internal pH (lower pH being more acidic), you'll find that
if you maintain the saliva and the urine pH, ideally at 7.2 or above, you will never get sick. That’s right you will NEVER get sick!
Wow. Important components of the alkaline lifestyle include eating plenty of the right sort of fruits and vegetables, ideally ones grown by Young, and taking plenty of nutritional supplements. These don't come cheap, but when the payoff is being free of all diseases, who could complain?

Young calls his amazing theory the Alkavorian Approach™, aka the New Biology™. Almost everyone else calls it quack medicine and pseudoscience. Because it is quack medicine and pseudoscience. But a paper just published in Cell suggests an interesting role for pH in, of all things, anxiety and panic - The amygdala is a chemosensor that detects carbon dioxide and acidosis to elicit fear behavior.

The authors, Ziemann et al, were interested in a protein called Acid Sensing Ion Channel 1a, ASIC1a, which as the name suggests, is acid-sensitive. Nerve cells expressing ASIC1a are activated when the fluid around them becomes more acidic.

One of the most common causes of acidosis (a fall in body pH) is carbon dioxide, CO2. Breathing is how we get rid of the CO2 produced by our bodies; if breathing is impaired, for example during suffocation, CO2 levels rise, and pH falls as CO2 is converted to carbonic acid in the bloodstream.
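The CO2-to-acid step is standard acid-base chemistry: in the bicarbonate buffer system, blood pH follows the Henderson-Hasselbalch equation, pH = 6.1 + log10([HCO3-] / (0.03 × pCO2)). A quick sketch with typical textbook values - the numbers are illustrative, not taken from the paper:

```python
import math

def blood_ph(pco2_mmhg, bicarbonate_mM=24.0):
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))."""
    return 6.1 + math.log10(bicarbonate_mM / (0.03 * pco2_mmhg))

print(f"normal pCO2, 40 mmHg: pH = {blood_ph(40):.2f}")
print(f"raised pCO2, 60 mmHg: pH = {blood_ph(60):.2f}  (more acidic)")
```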

In previous work, Ziemann et al found that the amygdala contains lots of ASIC1a. This is intriguing, because the amygdala is a brain region believed to be involved in fear, anxiety and panic, although it has other functions as well. It's long been known that breathing air with added CO2 can trigger anxiety and panic, especially in people vulnerable to panic attacks.

What's unclear is why this happens; various biological and psychological theories have been proposed. Ziemann et al set out to test the idea that ASIC1a in the amygdala mediates anxiety caused by CO2.

In a number of experiments they showed that mice genetically engineered to have no ASIC1a (knockouts) were resistant to the anxiety-causing effects of air containing 10% or 20% CO2. Also, unlike normal mice, the knockouts were happy to enter a box with high CO2 levels - normal mice hated it. Injections of a weakly acidic liquid directly into the amygdala caused anxiety in normal mice, but not in the knockouts.

Most interestingly, they found that knockout mice could be made to fear CO2 by giving them ASIC1a in the amygdala. Knockouts injected in the amygdala with a virus containing ASIC1a DNA, which caused their cells to start producing the protein, showed anxiety (freezing behaviour) when breathing CO2. But it only worked if the virus was injected into the amygdala, not nearby regions.

This is a nice series of experiments which shows convincingly that ASIC1a mediates acidosis-related anxiety, at least in mice. What's most interesting, however, is that it also seems to be involved in other kinds of anxiety and fear. The ASIC1a knockout mice were slightly less anxious in general; injections of an alkaline solution prevented CO2-related anxiety, but also reduced anxiety caused by other scary things, such as the smell of a cat.

The authors conclude by proposing that amygdala pH might be involved in fear more generally

Thus, we speculate that when fear-evoking stimuli activate the amygdala, its pH may fall. For example, synaptic vesicles release protons, and intense neural activity is known to lower pH.

But this is, as they say, speculation. The link between CO2, pH and panic attacks seems more solid. As the authors of another recent paper put it

We propose that the shared characteristics of CO2/H+ sensing neurons overlap to a point where threatening disturbances in brain pH homeostasis, such as those produced by CO2 inhalations, elicit a primal emotion that can range from breathlessness to panic.

Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009). The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139 (5), 1012-1021 DOI: 10.1016/j.cell.2009.10.029 ... Read more »


  • August 2, 2011
  • 04:21 AM
  • 1,073 views

The 30something Brain

by Neuroskeptic in Neuroskeptic

Brain maturation continues for longer than previously thought - well up until age 30. That's according to two papers just out, which may be comforting for those lamenting the fact that they're nearing the big Three Oh.

This challenges the widespread view that maturation is essentially complete by the end of adolescence, in the early to mid 20s.

Petanjek et al show that the number of dendritic spines in the prefrontal cortex increases during childhood and then rapidly falls during puberty - which probably represents a kind of "pruning" process. That's nothing new, but they also found that the pruning doesn't stop when you hit 20. It continues, albeit gradually, up to 30 and beyond. This study looked at post-mortem brain samples taken from people who died at various different ages.

Lebel and Beaulieu used diffusion MRI to examine healthy living brains. They scanned 103 people, and everyone got at least 2 scans a few years apart, so they could look at changes over time.

They found that the fractional anisotropy (a measure of the "integrity") of different white matter tracts varies with age in a non-linear fashion. All tracts become stronger during childhood, and most peak at about 20. Then they start to weaken again. But not all of them - others, such as the cingulum, take longer to mature. Also, total white matter volume continues rising well up to age 30.

Plus, there's a lot of individual variability. Some people's brains were still maturing well into their late 20s, even in white matter tracts that on average are mature by 20. Some of this will be noise in the data, but not all of it.

These results also fit nicely with this paper from last year that looked at functional connectivity of brain activity.

So, while most maturation does happen before and during adolescence, these results show that it's not a straightforward case of The Adolescent Brain turning suddenly into The Adult Brain when you hit 21, at which point it solidifies into the final product.

Lebel C, & Beaulieu C (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience, 31 (30), 10937-47 PMID: 21795544

Petanjek, Z., Judas, M., Simic, G., Rasin, M., Uylings, H., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1105108108 ... Read more »
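Estimating when a tract "peaks" is typically done by fitting a non-linear growth curve to fractional anisotropy against age and reading off its maximum. A rough sketch with simulated data; the rise-then-fall functional form and every number below are assumptions for illustration, not the model actually fitted by Lebel and Beaulieu:

```python
import numpy as np
from scipy.optimize import curve_fit

def fa_curve(age, base, amp, peak_age):
    """A simple curve that rises to a maximum at peak_age and then declines."""
    x = age / peak_age
    return base + amp * x * np.exp(1.0 - x)

# Simulated FA values for one tract, peaking around age 21, plus noise.
rng = np.random.default_rng(7)
ages = np.linspace(5, 32, 60)
fa = fa_curve(ages, 0.40, 0.15, 21.0) + rng.normal(0, 0.01, ages.size)

params, _ = curve_fit(fa_curve, ages, fa, p0=(0.4, 0.1, 18.0))
print(f"estimated peak age: {params[2]:.1f} years")
```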


  • August 4, 2011
  • 04:27 AM
  • 1,072 views

Brain-Modifying Drugs

by Neuroskeptic in Neuroskeptic

What if there was a drug that didn't just affect the levels of chemicals in your brain, but turned off genes in your brain? That possibility - either exciting or sinister depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs. It selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain.

Note that this isn't genetic modification as such. The gene wasn't deleted, it was just silenced, temporarily one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy.

So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of selected neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to only affect certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove to be safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92 ... Read more »


  • April 7, 2010
  • 08:48 AM
  • 1,069 views

Why Do We Dream?

by Neuroskeptic in Neuroskeptic

A few months ago, I asked Why Do We Sleep? That post was about sleep researcher Jerry Siegel, who argues that sleep evolved as a state of "adaptive inactivity". According to this idea, animals sleep because otherwise we'd always be active, and constant activity is a waste of energy. Sleeping for a proportion of the time conserves calories, and also keeps us safe from nocturnal predators etc.

Siegel's theory is what we might call minimalist. That's in contrast to other hypotheses which claim that sleep serves some kind of vital restorative biological function, or that it's important for memory formation, or whatever. It's a hotly debated topic.

But Siegel wasn't the first sleep minimalist. J. Allan Hobson and Robert McCarley created a storm in 1977 with The Brain As A Dream State Generator; I read somewhere that it provoked more letters to the Editor in the American Journal of Psychiatry than any other paper in that journal.

Hobson and McCarley's article was so controversial because they argued that dreams are essentially side-effects of brain activation. This was a direct attack on the Freudian view that we dream as a result of our subconscious desires, and that dreams have hidden meanings. Freudian psychoanalysis was incredibly influential in American psychiatry in the 1970s.

Freud believed that dreams exist to fulfil our fantasies, often though not always sexual ones. We dream about what we'd like to do - except we don't dream about it directly, because we find much of our desires shameful, so our minds disguise the wishes behind layers of metaphor etc. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..." Interpreting the symbolism of dreams can therefore shed light on the depths of the mind.

Hobson and McCarley argued that during REM sleep, our brains are active in a similar way to when we are awake; many of the systems responsible for alertness are switched on, unlike during deep, dreamless, non-REM sleep. But of course during REM there is no sensory input (our eyes are closed), and also, we are paralysed: an inhibitory pathway blocks the spinal cord, preventing us from moving, except for our eyes - which is why it's called Rapid Eye Movement sleep.

Dreams are simply a result of the "awake-like" forebrain - the "higher" perceptual, cognitive and emotional areas - trying to make sense of the input that it's receiving as a result of waves of activation arising from the brainstem. A dream is the forebrain's "best guess" at making a meaningful story out of the assortment of sensations (mostly visual) and concepts activated by these periodic waves. There's no attempt to disguise the shameful parts; the bizarreness of dreams simply reflects the fact that the input is pretty much random.

Hobson and McCarley proposed a complex physiological model in which the activation is driven by the giant cells of the pontine tegmentum. These cells fire in bursts according to a genetically hard-wired rhythm of excitation and inhibition. The details of this model are rather less important than the fact that it reduces dreaming to a neurological side effect. This doesn't mean that the REM state has no function; maybe it does, but whatever it is, the subjective experience of dreams serves no purpose.

A lot has changed since 1977, but Hobson seems to have stuck by the basic tenets of this theory. A good recent review came out in Nature Reviews Neuroscience last year, REM sleep and dreaming. In this paper Hobson proposes that the function of REM sleep is to act as a kind of training system for the developing brain. The internally-generated signals that arise from the brainstem (now called PGO waves) during REM help the forebrain to learn how to process information. This explains why we spend more time in REM early in life; newborns have much more REM than adults; in the womb, we are in REM almost all the time. However, these are not dreams per se, because children don't start reporting experiencing dreams until about the age of 5.

Protoconscious REM sleep could therefore provide a virtual world model, complete with an emergent imaginary agent (the protoself) that moves (via fixed action patterns) through a fictive space (the internally engendered environment) and experiences strong emotion as it does so.

This is a fascinating hypothesis, although very difficult to test, and it raises the question of how useful "training" based on random, meaningless input is.

While Hobson's theory is minimalist in that it reduces dreams, at any rate in adulthood, to the status of a by-product, it doesn't leave them uninteresting. Freudian dream re-interpretation is probably ruled out ("That train represents your penis and that cat was your mother", etc.), but if dreams are our brains processing random noise, then they still provide an insight into how our brains process information. Dreams are our brains working away on their own, with the real world temporarily removed.

Of course most dreams are not going to give up life-changing insights. A few months back I had a dream which was essentially a scene-for-scene replay of the horror movie Cloverfield. It was a good dream, scarier than the movie itself, because I didn't know it was a movie. But I think all it tells me is that I was paying attention when I watched Cloverfield.

On the other hand, I have had several dreams that have made me realize important things about myself and my situation at the time. By paying attention to your dreams, you can work out how you really think, and feel, about things, what your preconceptions and preoccupations are. Sometimes.

Hobson JA, & McCarley RW (1977). The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134 (12), 1335-48 PMID: 21570

Hobson, J. (2009). REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience, 10 (11), 803-813 DOI: 10.1038/nrn2716 ... Read more »

  • January 22, 2011
  • 12:46 PM
  • 1,067 views

When "Healthy Brains" Aren't

by Neuroskeptic in Neuroskeptic

There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays. But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes, and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2–8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know. Friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best? Broadly speaking there are two extreme positions:

1. Research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So, we should ignore incidental findings, pretend the scan never happened, because for all clinical purposes, it didn't.

2. You have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.

Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain. On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine, but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening? Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University's neuroscience community. Suspect scans could be uploaded, in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion. Email notifications kept everyone up to date with the progress of each scan.

This seems like a very good idea, partly because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use this system.

Out of about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign... except one which turned out to be potentially very serious - suspected hydrocephalus, increased fluid pressure in the brain, which prompted an urgent referral to hospital for further tests.

There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.

Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V (2011). A system for addressing incidental findings in neuroimaging research. NeuroImage PMID: 21224007 ... Read more »
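The workflow described (upload, anonymize, forward to radiology, notify by email) is easy to picture as a small state machine. A toy sketch; the class names, fields and log messages below are invented for illustration and have nothing to do with the UCI system's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime
import uuid

@dataclass
class Referral:
    scan_file: str
    researcher_email: str
    case_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    status: str = "submitted"
    log: list = field(default_factory=list)

    def note(self, message):
        self.log.append(f"{datetime.now():%Y-%m-%d %H:%M} [{self.case_id}] {message}")

def submit_scan(scan_file, researcher_email):
    """Upload a suspect scan: anonymize it and forward it for an expert read."""
    r = Referral(scan_file, researcher_email)
    r.note("scan anonymized (subject identifiers stripped)")
    r.note("forwarded to Department of Radiology for expert opinion")
    r.status = "awaiting radiology read"
    return r

def record_read(referral, finding, urgent=False):
    """Record the neuroradiologist's read and notify the researcher."""
    referral.status = "urgent clinical referral" if urgent else "closed - benign"
    referral.note(f"radiology read: {finding}")
    referral.note(f"email sent to {referral.researcher_email}: status = {referral.status}")

ref = submit_scan("sub-07_T1w.nii.gz", "researcher@example.edu")
record_read(ref, "possible hydrocephalus", urgent=True)
print("\n".join(ref.log))
```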


  • July 22, 2011
  • 11:54 AM
  • 1,066 views

New Antidepressant - Old Tricks

by Neuroskeptic in Neuroskeptic

The past decade has been a bad one for antidepressant manufacturers. Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drug to hit the shelves since 2000 has been agomelatine. There were a couple of others that were just minor variants on old molecules, but that's it.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development and by all accounts it's making good progress. It's now in Phase III trials, the last stage before approval. And a large clinical trial has just been published finding that it works. But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is kind of a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors and is a partial agonist at 5HT1B. None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to get Lu AA21004, at two different doses, 5mg or 10mg, or the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and just as well as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so. The size of the effect was medium, with a benefit over-and-above placebo of about 5 points on the MADRS depression scale, which, considering that the baseline scores in this study averaged 34, is not huge, but it compares well to other antidepressant trials.

Now we come to the side effects, and this is the most important bit, as we'll see later. The authors did not specifically probe for these, they just relied on spontaneous report, which tends to underestimate adverse events.

Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market! Hang on.

The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement, to be honest. Venlafaxine is quite notorious for causing side effects, especially at higher doses. I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs, as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck. The real competitor for this drug, according to Lundbeck, is escitalopram. But no-one wants to be in competition with themselves.

This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects. The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know and we're not going to find out any time soon.

If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, being Lundbeck.

Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F (2011). A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The International Journal of Neuropsychopharmacology, 1-12 PMID: 21767441 ... Read more »


  • March 31, 2010
  • 09:18 AM
  • 1,060 views

Predicting Psychosis

by Neuroskeptic in Neuroskeptic

"Prevention is better than cure", so they say. And in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art.Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan anyway.But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all, and there have been only a few very small trials looking at whether treatments work for that - but that's another story.Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS) which was run at a number of early detection clinics in Britain and Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromalReferral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact. Knowledge about early warning signs (eg, concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.245 people consented to take part in the study and met the inclusion criteria meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short lived or mild symptoms a bit like those seen in schizophrenia i.e.COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...UHR: unusual thought content/ delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS) i.e. hallucinations, delusions, or formal thought disorders that occurred resolved spontaneously within 1 week...Then they followed up the 245 kids for 18 months and saw what happened to them.What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e depression or bipolar disorder, with psychotic features, and the rest mostly had psychotic episodes too short to be classed as schizophrenia. 
37 people is 19% of the 183 for whom full 18 month data was available; the others dropped out of the study, or went missing for some reason.Is 19% high or low? Well, it's much higher than the rate you'd see in randomly selected people, because the risk of getting schizophrenia is less than 1% lifetime and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing.On the other hand 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18 month period was over.The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. As this graph shows, the rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering the Prognostic Index is [1.571 x SIPS-Positive score 16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)However they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.So overall, we can predict psychosis, a bit, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome" i.e. the prodrome; it's not currently a disorder in DSM-IV. This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand though, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A., Morrison, A., Lewis, S., Graf von Reventlow, H., & Klosterkotter, J. (2010). Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study ... Read more »
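For the curious, here is the quoted Prognostic Index written out as a function. The only interpretive liberty taken is reading the garbled first term as a binary indicator for a SIPS-Positive score above 16; the scores fed in below are entirely made up.

```python
def prognostic_index(sips_positive, bizarre_thinking, sleep_disturbances,
                     spd, highest_gaf_m_past_year, years_of_education):
    """The EPOS Prognostic Index as quoted in the post."""
    return (1.571 * (sips_positive > 16)        # read as a >16 threshold
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbances
            + 1.037 * spd
            + 0.033 * (highest_gaf_m_past_year - 34.64)
            + 0.250 * (years_of_education - 12.52))

# Hypothetical scores, purely to show the arithmetic.
print(round(prognostic_index(18, 2, 1, 1, 45, 11), 2))
```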

Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A.... (2010) Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206  

  • March 2, 2010
  • 02:52 PM
  • 1,057 views

Is Your Brain A Communist?

by Neuroskeptic in Neuroskeptic

Capitalists beware. No less a journal than Nature has just published a paper proving conclusively that the human brain is a Communist, and that it's plotting the overthrow of the bourgeois order and its replacement by the revolutionary Dictatorship of the Proletariat even as we speak.

Kind of. The article, Neural evidence for inequality-averse social preferences, doesn't mention the C word, but it does claim to have found evidence that people's brains display more egalitarianism than people themselves admit to.

Tricomi et al took 20 pairs of men. At the start of the study, both men got a $30 payment, but one member of each pair was then randomly chosen to get a $50 bonus. Thus, one guy was "rich", while the other was "poor". Both men then had fMRI scans, during which they were offered various sums of money and saw their partner being offered money too. They rated how "appealing" these money transfers were on a 10 point scale.

What happened? Unsurprisingly both "rich" and "poor" said that they were pleased at the prospect of getting more cash for themselves, the poor somewhat more so, but people also had opinions about payments to the other guy:

the low-pay group disliked falling farther behind the high-pay group (‘disadvantageous inequality aversion’), because they rated positive transfers to the high-pay participants negatively, even though these transfers had no effect on their own earnings. Conversely, the high-pay group seemed to value transfers [to the poor person] that closed the gap between their earnings and those of the low-pay group (‘advantageous inequality aversion’)

What about the brain? When people received money for themselves, activity in the ventromedial prefrontal cortex (vmPFC) and the ventral striatum correlated with the size of their gain.

However, when presented with a payment to the other person, these areas seemed to be rather egalitarian. Activity rose in rich people when their poor colleagues got money. In fact, it was greater in that case than when they got money themselves, which means the "rich" people's neural activity was more egalitarian than their subjective ratings were. Whereas in "poor" people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.

The authors conclude that this

indicates that basic reward structures in the brain may reflect even stronger equity considerations than is necessarily expressed or acted on at the behavioural level... Our results provide direct neurobiological evidence in support of the existence of inequality-averse social preferences in the human brain.

Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience - in this case, that activity in the vmPFC and the ventral striatum indicates reward i.e. pleasure or liking - and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea, that "dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality."

This is known as reverse inference, i.e. inference from data about the brain to theories about the mind. It's very common in neuroimaging papers - we've all done it - but it is problematic. In this case, the problem is that the argument relies on the idea that activity in the vmPFC and ventral striatum is evidence for liking.

But while there's certainly plenty of evidence that these areas are activated by reward, and the authors confirmed that activity here correlated with monetary gain, that doesn't mean that they only respond to reward. They could also respond to other things. For example, there's evidence that the vmPFC is also activated by looking at angry and sad faces.

Or to put it another way: seeing someone you find attractive makes your pupils dilate. If you were to be confronted by a lion, your pupils would dilate. Fortunately, that doesn't mean you find lions attractive - because fear also causes pupil dilation.

So while Tricomi et al argue that people, or brains, like equality, on the basis of these results, I remain to be fully convinced. As Russell Poldrack noted in 2006:

caution should be exercised in the use of reverse inference... In my opinion, reverse inference should be viewed as another tool (albeit an imperfect one) with which to advance our understanding of the mind and brain. In particular, reverse inferences can suggest novel hypotheses that can then be tested in subsequent experiments.

Tricomi E, Rangel A, Camerer CF, & O'Doherty JP (2010). Neural evidence for inequality-averse social preferences. Nature, 463 (7284), 1089-91 PMID: 20182511... Read more »
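Poldrack's point can be made quantitative with Bayes' rule: the strength of a reverse inference depends on how selectively the region responds to the process in question. A small illustration with made-up probabilities (none of these numbers come from the Tricomi paper):

```python
def p_process_given_activation(p_act_given_process, p_act_given_other, base_rate):
    """Bayes' rule: P(process | activation)."""
    p_activation = (p_act_given_process * base_rate
                    + p_act_given_other * (1 - base_rate))
    return p_act_given_process * base_rate / p_activation

# Suppose the ventral striatum activates in 80% of reward tasks but also in
# 30% of tasks involving no reward, and reward is engaged half the time.
posterior = p_process_given_activation(0.80, 0.30, 0.50)
print(f"P(reward | striatal activation) = {posterior:.2f}")  # ~0.73, far from certain
```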


  • December 9, 2009
  • 08:08 AM
  • 1,055 views

Testosterone, Aggression... Confusion

by Neuroskeptic in Neuroskeptic

Breaking news from the BBC - Testosterone link to aggression 'all in the mind':

Work in Nature magazine suggests the mind can win over hormones... Testosterone induces anti-social behaviour in humans, but only because of our own prejudices about its effect rather than its biological activity, suggest the authors. The researchers, led by Ernst Fehr of the University of Zurich, Switzerland, said the results suggested a case of "mind over matter" with the brain overriding body chemistry. "Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour," they said.

Phew, that's a relief - for a minute back there I was worried we didn't have free will. But look a little closer at the study, and it turns out that all is not as it seems.

The experiment (Eisenegger et al) involved giving healthy women 0.5 mg testosterone, or placebo, in a randomized double-blind manner, and then getting them to take part in the "Ultimatum Game".

This is a game for two players. One, the Proposer, is given some money, and then has to offer to give a certain proportion of it to the other player, the Receiver. If the Receiver accepts the offer, both players get the agreed-upon amounts of money. If they reject it, however, no-one gets anything.

The Proposer is basically faced with the choice of making a "fair" offer, e.g. giving away 50%, or a greedy one, say offering 10% and keeping 90% for themselves. Receivers generally accept fair offers, but most people get annoyed or insulted by unfair ones, and reject them, even though this means they lose money (10% of the money is still more than 0%).

What happened? Testosterone affected behaviour. It had no effect on women playing the role of the Receiver, but the Proposers given testosterone made significantly fairer offers on average, compared to those given placebo. That's not mind over matter, that's matter over mind - give someone a hormone and their behaviour changes.

The direction of the effect is quite interesting - if testosterone increased aggression, as popular belief has it, you might expect it to decrease fair offers. Or you might not; I suppose it depends on your understanding of "aggression". For their part, Eisenegger et al interpret this finding as suggesting that testosterone doesn't increase aggression per se, but rather increases our motivation to achieve "status", which leads to Proposers making fairer offers, so as to appear nicer. Hmm. Maybe.

But where did the BBC get the whole "all in the mind" thing from? Well, after the testing was over, the authors asked the women whether they thought they had taken testosterone or placebo. The results showed that the women couldn't actually tell which they'd had - they were no more accurate than if they'd been guessing - but women who believed they'd got testosterone made more unfair offers than women who believed they'd got placebo. The size of this effect was bigger than the effect of testosterone.

Is that "mind over matter"? Do beliefs about testosterone exert a more powerful effect on behaviour than testosterone itself? Maybe they do, but these data don't tell us anything about that. The women's beliefs weren't manipulated in any way in this trial, so as an experiment it couldn't investigate belief effects. In order to show that belief alters behaviour, you'd need to control beliefs. You could randomly assign some subjects to be told they were taking testosterone, and compare them to others told they were on placebo, say.

This study didn't do anything like that. Beliefs about testosterone were only correlated with behaviour, and unless someone's changed the rules recently, correlation isn't causation. It's like finding that people with brown skin are more likely to be Hindus than people with white skin, and concluding that belief in Brahma alters pigmentation. It could even be that the behaviour drove the belief, because subjects were quizzed about their testosterone status after the Ultimatum Game - maybe women who, for whatever reason, behaved selfishly decided that this meant they had taken testosterone!

Overall, this study provides quite interesting data about hormonal effects on behaviour, but tells us nothing about the effects of beliefs about hormones. On that issue, the way the media have covered this experiment is rather more informative than the experiment itself.

Eisenegger, C., Naef, M., Snozzi, R., Heinrichs, M., & Fehr, E. (2009). Prejudice and truth about the effect of testosterone on human bargaining behaviour. Nature. DOI: 10.1038/nature08711... Read more »
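Since the whole study hinges on those Ultimatum Game payoffs, here's a minimal sketch of them in Python. The pot size and offers are illustrative numbers of my own, not values from the paper:

```python
def ultimatum_payoffs(pot, offer_fraction, receiver_accepts):
    """Return (proposer_payoff, receiver_payoff) for one round.

    pot              -- total money given to the Proposer
    offer_fraction   -- share of the pot the Proposer offers (0 to 1)
    receiver_accepts -- True if the Receiver accepts the offer
    """
    if not receiver_accepts:
        return 0.0, 0.0           # rejection: nobody gets anything
    offer = pot * offer_fraction
    return pot - offer, offer     # acceptance: split as agreed

# A "fair" 50% offer vs a "greedy" 10% offer on a 10-unit pot:
print(ultimatum_payoffs(10, 0.5, True))    # (5.0, 5.0)
print(ultimatum_payoffs(10, 0.1, True))    # (9.0, 1.0)
print(ultimatum_payoffs(10, 0.1, False))   # (0.0, 0.0)
```

The last line is the crux of the game: rejecting an unfair offer costs the Receiver money, which is why rejection is usually read as punishing unfairness rather than maximizing payoff.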

  • August 9, 2010
  • 01:33 PM
  • 1,051 views

Zapping Memory Better in Alzheimer's

by Neuroskeptic in Neuroskeptic

Last month I wrote about how electrical stimulation of the hippocampus causes temporary amnesia - Zapping Memories Away.

Now Toronto neurologists Laxton et al have tried to use deep brain stimulation (DBS) to improve memory in people with Alzheimer's disease. Progressive loss of memory is the best-known symptom of this disorder, and while some drugs are available, they provide partial relief at best.

This study stems from a chance discovery by the same Toronto group. In 2008, they reported that stimulation of the hypothalamus caused vivid memory recollections in a 50 year old man. In that case, the effect was entirely unintended and unexpected. The patient was being given DBS to try to curb his appetite (he weighed 420 pounds). The hypothalamus is involved in regulating appetite, not memory - but the fornix, a nerve bundle that passes through that area, is involved in memory. It's the main pathway connecting the hippocampus to the rest of the brain, and the hippocampus is vital for memory.

In this new study, Laxton et al implanted electrodes to stimulate the fornix in 6 patients with mild (early-stage) Alzheimer's. What happened? The results, unfortunately, were quite messy. On average, the patients' symptoms got worse over the course of the year. Alzheimer's is a progressive degenerative disease, so this is what you'd expect to happen without treatment. The authors say that the decline was a bit slower than you'd expect in these kinds of patients, but to be honest, it's impossible to tell because there was no control group.

However, two patients did show memory improvements, and these were the same two who reported vivid recollections when the electrodes were first implanted (similar to the original obese guy):

Two of the 6 patients reported stimulation induced experiential phenomena. Patient 2 reported having the sensation of being in her garden, tending to the plants on a sunny day... Patient 4 reported having the memory of being fishing on a boat on a wavy blue colored lake with his sons and catching a large green and white fish. On later questioning in both patients, these events were autobiographical, had actually occurred in the past, and were accurately reported according to the patient's spouse.

Also, the stimulation caused brain activation, generally switching "on" the areas that are turned "off" in Alzheimer's, and this lasted for a year (the length of the study so far). And there were no major side-effects. That's all good.

Overall, these results are extremely interesting, but we don't know how well the treatment really works, and we won't know until someone does a randomized controlled trial with a longer follow-up period; something which is, unfortunately, true of a lot of the latest DBS studies.

Link: The Neurocritic on the original 2008 paper.

Laxton AW, Tang-Wai DF, McAndrews MP, Zumsteg D, Wennberg R, Keren R, Wherrett J, Naglie G, Hamani C, Smith GS, & Lozano AM (2010). A phase I trial of deep brain stimulation of memory circuits in Alzheimer's disease. Annals of Neurology. PMID: 20687206... Read more »

Laxton AW, Tang-Wai DF, McAndrews MP, Zumsteg D, Wennberg R, Keren R, Wherrett J, Naglie G, Hamani C, Smith GS.... (2010) A phase I trial of deep brain stimulation of memory circuits in Alzheimer's disease. Annals of neurology. PMID: 20687206  

  • January 22, 2010
  • 06:32 PM
  • 1,048 views

Brain Scanning Software Showdown

by Neuroskeptic in Neuroskeptic

You've just finished doing some research using fMRI to measure brain activity. You designed the study, recruited the volunteers, and did all the scans. Phew. Is that it? Can you publish the findings yet?

Unfortunately, no. You still need to do the analysis, and this is often the trickiest stage. The raw data produced during an fMRI experiment are meaningless - in most cases, each scan will give you a few hundred almost-identical grey pictures of the person's brain. Making sense of them requires some complex statistical analysis.

The very first step is choosing which software to use. Just as some people swear by Firefox while others prefer Internet Explorer for browsing the web, neuroscientists have various options to choose from in terms of image analysis software. Everyone's got a favourite. In Britain, the most popular are FSL (developed at Oxford) and SPM (London), while in the USA BrainVoyager sees a lot of use.

These three all do pretty much the same thing, give or take a few minor technical differences, so which one you use ultimately makes little difference. But just as there's more than one way to skin a cat, there's more than one way to analyze a brain. A paper from Fusar-Poli et al compares the results you get with SPM to the results obtained using XBAM, a program which uses a quite different statistical approach.

Here's what happened, according to SPM, when 15 volunteers looked at pictures of faces expressing the emotion of fear, and their brain activity was compared to when they were just looking at a boring "X" on the screen (I think - either that, or it's compared to looking at neutral faces; the paper isn't clear, but given the size of the blobs I doubt it's that).

Various bits of the brain were more activated by the scared face pics, as you can see by the huge, fiery blobs. The activation is mostly at the back of the brain, in occipital cortex areas which deal with vision, which is as you'd expect. The cerebellum was also strongly activated, which is a bit less expected.

Now, here's what happens if you analyze exactly the same data using XBAM, setting the statistical threshold at the same level (i.e. in theory being no more or less "strict"): you get the same visual system blobs, but you also see activation in a number of other areas. Or as Fusar-Poli et al put it:

Analysis using both programs revealed that during the processing of emotional faces, as compared to the baseline stimulus, there was an increased activation in the visual areas (occipital, fusiform and lingual gyri), in the cerebellum, in the parietal cortex [etc] ... Conversely, the temporal regions, insula and putamen were found to be activated using the XBAM analysis software only.

This raises two questions: why the difference, and which way is right?

The difference must be a product of the different methods used. SPM uses a technique called statistical parametric mapping (hence the name), based on the assumption of normality. FSL and BrainVoyager do too. XBAM, on the other hand, differs from more orthodox software in a number of ways; the most basic difference is that it uses non-parametric statistics, but this document lists no fewer than five major innovations:

• "not to assume normality but to use permutation testing to construct the null distribution used to make inference about the probability of an "activation" under the null hypothesis."
• "recognizing the existence of correlation in the residuals after fitting a statistical model to the data."
• using "a mixed effects analysis of group level fMRI data by taking into account both intra and inter subject variances."
• using "3D cluster level statistics based on cluster mass (the sum of all the statistical values in the cluster) rather than cluster area (number of voxels)."
• using "a wavelet-based time series permutation approach that permitted the handling of complex noise processes in fMRI data rather than simple stationary autocorrelation."

Phew. Which combination of these is responsible for the difference is impossible to say.

The biggest question, though, is: should we all be using XBAM? Is it "better" than SPM? This is where things get tricky. The truth is that there's no right way to statistically analyze any data, let alone fMRI data. There are lots of wrong ways, but even if you avoid making any mistakes, there are still various options as to which statistical methods to use, and which method you use depends on which assumptions you're making. XBAM rests on different assumptions from SPM.

Whether XBAM's assumptions are more appropriate than those of SPM is a difficult question. The people who wrote XBAM think so, and they're very smart people. But so are the people who wrote SPM. The point is, it's a very complex issue, the mathematical details of which go far beyond the understanding of most fMRI users (myself included).

My worry about this paper is that the average Joe Neuroscientist will decide that, because XBAM produces more activation than SPM, it must be "better". The authors are careful not to say this, but for fMRI researchers working in the publish-or-perish world of modern science, whose greatest fear is that they'll run an analysis and end up with no blobs at all, the temptation to think "the more blobs the merrier" is a powerful one.

Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O'Carroll, C., Atakan, Z., & Zuardi, A. (2010). Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027... Read more »
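To give a flavour of what "using permutation testing to construct the null distribution" means in practice, here's a toy sketch in Python. This is a generic sign-flipping permutation test on a single voxel's per-subject effect sizes, not XBAM's actual algorithm (which also involves wavelet resampling and cluster-mass statistics); the subject data are simulated purely for illustration:

```python
import numpy as np

def permutation_p_value(effects, n_perm=10000, seed=0):
    """Two-sided p-value for the mean of per-subject effect sizes,
    built by sign-flipping rather than by assuming normality."""
    rng = np.random.default_rng(seed)
    effects = np.asarray(effects, dtype=float)
    observed = effects.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Under the null hypothesis of no activation, the sign of each
        # subject's effect is arbitrary, so flip signs at random.
        signs = rng.choice([-1.0, 1.0], size=effects.size)
        null[i] = (signs * effects).mean()
    return np.mean(np.abs(null) >= abs(observed))

# e.g. 15 subjects' contrast values (fearful faces > baseline) at one voxel,
# simulated here just to make the function runnable:
fake_contrasts = np.random.default_rng(1).normal(0.3, 1.0, size=15)
print(permutation_p_value(fake_contrasts))
```

The parametric route would instead compare the observed mean to a t-distribution; the two approaches tend to agree when the normality assumption holds and can diverge when it doesn't, which is one ingredient (though certainly not the whole story) in why SPM and XBAM can produce different maps from the same data.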

Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O’Carroll, C., Atakan, Z., & Zuardi, A. (2010) Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027  

  • October 19, 2009
  • 12:20 PM
  • 1,047 views

Antidepressant Sales Rise as Depression Falls

by Neuroskeptic in Neuroskeptic

Antidepressant sales are rising in most Western countries, and they have been for at least a decade. Recently, we learned that the proportion of Americans taking antidepressants in any given year nearly doubled from 1996 to 2005.

The situation has been thought to be similar in the UK. But a hot-off-the-press paper in the British Medical Journal reveals some surprising facts about the issue: Explaining the rise in antidepressant prescribing.

The authors examined medical records from 1.7 million British patients in primary care (General Practice, i.e. family doctors). They found that antidepressant sales rose strongly between 1993 and 2005, not because more people are taking these drugs, but entirely because of an increase in the duration of treatment amongst antidepressant users. It's not that more people are taking them, it's that people are taking them for longer.

In fact, the number of people being diagnosed with depression and prescribed antidepressants has actually fallen over time. The rate of diagnosed depression remained steady from 1993 to about 2001, and then fell markedly, by about a third, up to 2005. This trend was seen in both men and women, but there were age differences: in 18-30 year olds, there was a gradual increase in diagnoses before the decrease. (Note that these graphs show the number of people getting their first ever diagnosis of depression in each year.)

The likelihood of being given antidepressants for a diagnosis of depression stayed roughly constant, at about 75-80% across the years. However, the average duration of treatment increased over time. The change doesn't look like much, but remember that even a small change in the number of long-term users translates into a large effect on the total number of sales, because each long-term user takes a lot of pills. The authors conclude:

Antidepressant prescribing nearly doubled during the study period—the average number of prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004. ... the rise in antidepressant prescribing is mainly explained by small changes in the proportion of patients receiving long term treatment.

Wow. I didn't see that coming, I'll admit. A lot of people, myself included, had assumed that rising antidepressant use was caused by people becoming more willing to seek treatment for depression. Or maybe that doctors were becoming more eager to prescribe drugs. Others believed that rates of clinical depression were rising.

There's no evidence for any of these theories in this British data-set. The recent fall in clinical depression diagnoses, following an increase in young people over the course of the 1990s, is especially surprising. This conflicts with the only British population survey of mental health, the APMS. The APMS found that rates of depression and mixed anxiety/depression increased between 1993 and 2000 in most age groups, but least of all in the young, and changed little between 2000 and 2007. I trust this new data more, because population surveys almost certainly overestimate mental illness.

How does this result compare to elsewhere? In the USA, the average number of antidepressant prescriptions per patient per year rose from "5.60 in 1996 to 6.93 in 2005", according to a recent estimate. In this study, yearly "prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004." So there's a major trans-Atlantic difference: in Britain, the length of use increased greatly, while in the US it only rose slightly, but from a higher baseline.

Finally, why has this happened? We can only speculate. Maybe doctors have become more keen on long-term treatment to prevent depressive relapse. Or maybe users have become more willing to take antidepressants long-term. Modern drugs generally have milder side effects than older ones, so this makes sense, although some people would say that this is just further proof that modern antidepressants are "addictive"...

Moore M, Yuen HM, Dunn N, Mullee MA, Maskell J, & Kendrick T (2009). Explaining the rise in antidepressant prescribing: a descriptive study using the general practice research database. BMJ (Clinical research ed.), 339. PMID: 19833707... Read more »
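The arithmetic behind "duration, not prevalence, drives the rise" is easy to make concrete. Here's a minimal sketch, using the per-patient prescription counts quoted above and a deliberately made-up, constant pool of treated patients (the 100,000 figure is mine, purely for illustration):

```python
def total_prescriptions(n_patients, scripts_per_patient):
    """Total prescriptions issued in a year (rounded to whole scripts)."""
    return round(n_patients * scripts_per_patient)

n = 100_000  # hypothetical, unchanging number of treated patients
print(total_prescriptions(n, 2.8))  # 1993-style: 280000 prescriptions
print(total_prescriptions(n, 5.6))  # 2004-style: 560000 prescriptions
# Prescription volume doubles even though the number of people treated
# hasn't changed at all - longer treatment alone does the work.
```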

  • June 2, 2011
  • 04:21 AM
  • 1,046 views

The Holographic Brain

by Neuroskeptic in Neuroskeptic

According to the holonomic brain theory,

Cognitive function is guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials.

Well, I don't know about that, but a group of neuroscientists have just reported on using holograms as a tool for studying brain function: Three-dimensional holographic photostimulation of the dendritic arbor.

A while ago, scientists worked out how to "cage" interesting compounds, such as neurotransmitters, inside large, inert molecules. Then, by shining laser light of the right wavelength at the cages, it's possible to break them and release what's inside. This is very useful because it allows you to, say, selectively release neurotransmitters in particular places, just by pointing the laser at them.

There's a problem though. The uncaging doesn't happen immediately: the laser has to be pointing at the same point for a certain fixed time. This makes it very difficult to simultaneously stimulate many different points - which is, ideally, what you'd want to do, because in the real brain, everything happens at the same time: a given cell might be receiving input from dozens of others, and sending output to the same number.

One solution is simply to split the beam into several smaller, parallel beams. This allows you to hit several spots simultaneously, but it suffers from the problem that all the spots have to lie in the same 2D "slice". A bit like how, if you taped several laser pointers together, you could project a complex pattern of dots onto the wall, but not a 3D one.

This is where holograms come in. As everyone knows, holograms appear to be 3D images. By adopting the same kind of algorithms as are used in the construction of holograms, the authors were able to use a single laser to generate a series of stimulation spots within 3D space. The image above shows that they were able to stimulate a single dendritic spine of a single neuron by uncaging glutamate.

Then they moved on to a real experiment: stimulating several branches of a single cell. What they found was that if you stimulate several branches simultaneously, the overall excitation produced is less than the sum of the individual stimulations. The bottom graph shows this: the grey line is what you'd expect if the responses simply summed. Interestingly, a drug called 4-AP, which is used to provoke epileptic seizures in experimental animals, blocked this effect and made cells respond in a linear fashion.

This is clearly an extremely promising method. I've previously blogged about how it's possible to visualize individual dendritic branches in the living brain using another laser-based method, two-photon microscopy. In theory, therefore, it might be possible to both see and manipulate the brain at a microscopic level, without physically touching it at all.

Yang S, Papagiakoumou E, Guillon M, de Sars V, Tang CM, & Emiliani V (2011). Three-dimensional holographic photostimulation of the dendritic arbor. Journal of Neural Engineering, 8 (4). PMID: 21623008... Read more »
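The "less than the sum of its parts" comparison is easy to express numerically. Here's a minimal sketch with made-up response amplitudes; it illustrates the kind of comparison the authors make between measured and linearly predicted responses, not their actual data:

```python
def summation_ratio(individual_responses, compound_response):
    """Ratio of the measured compound response to the linear prediction
    (the sum of the responses to each branch stimulated alone).
    1.0 means linear summation; below 1.0 means sublinear summation."""
    linear_prediction = sum(individual_responses)
    return compound_response / linear_prediction

# Hypothetical response amplitudes (mV) for three branches stimulated one at a time:
branches = [2.0, 1.5, 2.5]             # linear prediction: 6.0 mV
print(summation_ratio(branches, 4.2))  # 0.7 - sublinear, as reported without 4-AP
print(summation_ratio(branches, 6.0))  # 1.0 - linear, as seen with 4-AP
```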

Yang S, Papagiakoumou E, Guillon M, de Sars V, Tang CM, & Emiliani V. (2011) Three-dimensional holographic photostimulation of the dendritic arbor. Journal of neural engineering, 8(4), 46002. PMID: 21623008  

  • August 20, 2010
  • 10:02 AM
  • 1,045 views

Schizophrenia, Genes and Environment

by Neuroskeptic in Neuroskeptic

Schizophrenia is generally thought of as the "most genetic" of all psychiatric disorders, and in the past 10 years there have been heroic efforts to find the genes responsible for it, with not much success so far.

A new study reminds us that there's more to it than genes alone: Social Risk or Genetic Liability for Psychosis? The authors decided to look at adopted children, because this is one of the best ways of disentangling genes and environment.

If you find that the children of people with schizophrenia are at an increased risk of schizophrenia (they are), that doesn't tell you whether the risk is due to genetics, or environment, because we share both with our parents. Only in adoption is the link between genes and environment broken.

Wicks et al looked at all of the kids born in Sweden and then adopted by another Swedish family, over several decades (births 1955-1984). To make sure genes and environment were independent, they excluded those who were adopted by their own relatives (e.g. grandparents), and those who lived with their biological parents between the ages of 1 and 15. This is the kind of study you can only do in Scandinavia, because only those countries have accessible national records of adoptions and mental illness...

What happened? Here's a little graph I whipped up. Brighter colors are adoptees at "genetic risk", defined as those with at least one biological parent who was hospitalized for a psychotic illness (including schizophrenia but also bipolar disorder). The outcome measure was being hospitalized for a non-affective psychosis, meaning schizophrenia or similar conditions but not bipolar.

As you can see, rates are much higher in those with a genetic risk, but were also higher in those adopted into a less favorable environment. Parental unemployment was worst, followed by single parenthood, which was also quite bad. Living in an apartment as opposed to a house, however, had only a tiny effect.

Genetic and environmental risk also interacted. If a biological parent was mentally ill and your adoptive parents were unemployed, that was really bad news.

But hang on. Adoption studies have been criticized because children don't get adopted at random (there's a story behind every adoption, and it's rarely a happy one), and also adopting families are not picked at random - you're only allowed to adopt if you can convince the authorities that you're going to be good parents.

So they also looked at the non-adopted population, i.e. everyone else in Sweden, over the same time period. The results were surprisingly similar. The hazard ratio (increased risk) in those with parental mental illness, but no adverse circumstances, was 4.5, much the same as in the adoption study (4.7). For environment, the ratio was 1.5 for unemployment, and slightly lower for the other two; this is a bit less than in the adoption study (2.0 for unemployment). And the two risks interacted, but much less than they did in the adoption sample.

However, one big difference was that the total lifetime rate of illness was 1.8% in the adoptees and just 0.8% in the nonadoptees, despite much higher rates of unemployment etc. in the latter. Unfortunately, the authors don't discuss this odd result. It could be that adopted children have a higher risk of psychosis for whatever reason. But it could also be an artefact: rates of adoption massively declined between 1955 and 1984, so most of the adoptees were born earlier, i.e. they're older on average. That gives them more time in which to become ill.

A few more random thoughts:

This was Sweden. Sweden is very rich and, compared to most other rich countries, also very egalitarian, with extremely high taxes and welfare spending. In other words, no-one in Sweden is really poor. So the effects of environment might be bigger in other countries.

On the other hand, this study may overestimate the risk due to environment, because it looked at hospitalizations, not illness per se. Supposing that poorer people are more likely to get hospitalized, this could mean that the true effect of environment on illness is lower than it appears.

The outcome measure was hospitalization for "non-affective psychosis". Only 40% of this was diagnosed as "schizophrenia". The rest will have been some kind of similar illness which didn't meet the full criteria for schizophrenia (which are quite narrow; in particular, they require 6 months of symptoms).

Parental bipolar disorder was counted as a family history. This does make sense, because we know that bipolar disorder and schizophrenia often occur in the same families (and indeed they can be hard to tell apart; many people are diagnosed with both at different times).

Overall, though, this is a solid study and confirms that genes and environment are both relevant to psychosis. Unfortunately, almost all of the research money at the moment goes on genes, with studying environmental factors being unfashionable.

Wicks S, Hjern A, & Dalman C (2010). Social Risk or Genetic Liability for Psychosis? A Study of Children Born in Sweden and Reared by Adoptive Parents. The American Journal of Psychiatry. PMID: 20686186... Read more »
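For readers unfamiliar with how hazard ratios combine, here's a minimal sketch in Python using the rounded general-population ratios quoted above. The "no interaction" expectation shown here is the standard multiplicative one; the combined figure it prints is an expectation under that assumption, not a number reported in the paper:

```python
def expected_joint_hr(hr_genetic, hr_environment):
    """Joint hazard ratio expected if genetic and environmental risks
    simply multiply (i.e. no interaction on the multiplicative scale)."""
    return hr_genetic * hr_environment

# Rounded ratios from the general-population arm described above:
hr_parental_illness = 4.5   # parental psychotic illness, no adversity
hr_unemployment = 1.5       # parental unemployment, no family history

print(expected_joint_hr(hr_parental_illness, hr_unemployment))  # 6.75
# "Interaction" means the observed risk in people with BOTH exposures departs
# from this multiplicative expectation; the study reports such an interaction,
# stronger in the adoption sample than in the general population.
```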

  • March 15, 2010
  • 05:52 AM
  • 1,044 views

How to Stop Smoking

by Neuroskeptic in Neuroskeptic

1. Don't smoke.
2. See 1.

This is essentially what Simon Chapman and Ross MacKenzie suggest in a provocative PLoS Medicine paper, The Global Research Neglect of Unassisted Smoking Cessation: Causes and Consequences.

Their point is deceptively simple: there is lots of research looking at drugs and other treatments to help people quit smoking tobacco, but little attention is paid to people who quit without any help, despite the fact that the majority (up to 75%) of quitters do just that. This is good news for the pharmaceutical industry and others who sell smoking-cessation aids, but it's not clear that it's good for public health.

As they put it:

despite the pharmaceutical industry's efforts to promote pharmacologically mediated cessation and numerous clinical trials demonstrating the efficacy of pharmacotherapy, the most common method used by most people who have successfully stopped smoking remains unassisted cessation ... Tobacco use, like other substance use, has become increasingly pathologised as a treatable condition as knowledge about the neurobiology, genetics, and pharmacology of addiction develops. Meanwhile, the massive decline in smoking that occurred before the advent of cessation treatment is often forgotten.

Debates over drugs, or other treatments, tend to revolve around the question of whether they work: is this drug better than placebo for this disorder? Chapman and MacKenzie point out that even to frame an issue in these terms is to concede a lot to the medical or pathological approach, which may not be a good idea. Before asking, do the drugs work? we should ask, what have drugs got to do with this?

Their argument is not that drugs never help people to quit; nor are they saying that tobacco isn't addictive, or that there is no neurobiology of addiction. Rather, they are saying that the biology is only one aspect of the story. The importance of drugs (and other stop-smoking aids like CBT), and the difficulty of quitting, is systematically exaggerated by the medical literature...

Of the 662 papers [about "smoking cessation" published in 2007 or 2008], 511 were studies of cessation interventions. The other 118 were mainly studies of the prevalence of smoking cessation in whole or special populations. Of the intervention papers, 467 (91.4%) reported the effects of assisted cessation and 44 (8.6%) described the impact of unassisted cessation (Figure 1). ... Of the papers describing cessation trends, correlates, and predictors in populations, only 13 (11%) contained any data on unassisted cessation.

And although pharmaceutical industry funding of research plays a part in this, the fact that medical science tends to focus on treatments rather than on untreated individuals is unsurprising, since this is fundamentally how science works:

Most tobacco control research is undertaken by individuals trained in positivist scientific traditions. Hierarchies of evidence give experimental evidence more importance than observational evidence; meta-analyses of randomized controlled trials are given the most weight. Cessation studies that focus on discrete proximal variables such as specific cessation interventions provide "harder" causal evidence than those that focus on distal, complex, and interactive influences that coalesce across a smoker's lifetime to end in cessation.

Overall, it's an excellent paper and well worth a read in full (it's short and it's open access). Of course, it is itself only one side of the story, and many in the tobacco control community will find it controversial. But I think Chapman and MacKenzie's is a point that needs to be made, and the point applies to other areas of medicine, especially, although not exclusively, to mental health.

This week, British social care charity Together told us that

Six out of ten people have had at least one time in their life where they have found it difficult to cope mentally... stress (70%), anxiety (59%) and depression (55%) were the three most common difficulties encountered by the public

Which was still not quite as good as rivals Turning Point, who last month said

Three quarters of people in the UK experience depression occasionally or regularly yet only a third seek help

These were opinion surveys, not real peer-reviewed science, but they might as well have been: the best available science says that if you go and ask people, 50-70% of the population report suffering at least one diagnosable DSM-IV mental disorder in their lifetime, and that the majority receive no treatment at all. This leads to papers in major journals such as this one warning that "Depression Care in the United States" is "Too Little for Too Few."

But we don't know whether these tens of millions of cases of untreated "mental illness" should be treated, because there is basically no research looking at what happens to such people without treatment. On the other hand, the very fact that they aren't treated, and yet manage to hold down jobs, relationships and so forth, suggests that the situation is not so bad.

Of course we must never forget that depression and anxiety can be crippling diseases, but fortunately, such cases are at least comparatively rare. By using the word "depression" to cover everything from waking-up-at-4-am-in-a-suicidal-panic-melancholia to feeling-a-bit-miserable-because-something-bad-just-happened, it's easy to forget that while clinical depression is a serious matter, feeling a bit miserable is normal and resolves without any help 99% of the time - even though there are no published scientific studies proving this, because it's not the kind of thing scientists study.

Incidentally, this issue is a good reminder that there's no one big bad conspiracy behind everything. With smoking, Big Tobacco find themselves in direct opposition to Big Pharma, like in From Dusk Till Dawn when the psychopaths fight the vampires. With depression, the people who are quickest to decry the widespread use of antidepressants often seem to be the ones who are most keen on the idea that depression is common and under-treated, perhaps because it allows them to recommend their own favorite psychotherapy. Big Pharma hands the baton to Big Couch in the race to medicalize life.

Chapman S, & MacKenzie R (2010). The global research neglect of unassisted smoking cessation: causes and consequences. PLoS Medicine, 7 (2). PMID: 20161722... Read more »
