According to Mormon author and fruit grower "Dr" Robert O. Young, pretty much all diseases are caused by our bodies being too acidic. By adopting an "alkaline lifestyle" to raise your internal pH (lower pH being more acidic), you'll find that
if you maintain the saliva and the urine pH, ideally at 7.2 or above, you will never get sick. That’s right you will NEVER get sick!
Wow. Important components of the alkaline lifestyle include eating plenty of the right sort of fruits and vegetables, ideally ones grown by Young, and taking plenty of nutritional supplements. These don't come cheap, but when the payoff is being free of all diseases, who could complain?
Young calls his amazing theory the Alkavorian Approach™, aka the New Biology™. Almost everyone else calls it quack medicine and pseudoscience. Because it is quack medicine and pseudoscience. But a paper just published in Cell suggests an interesting role for pH in, of all things, anxiety and panic: "The amygdala is a chemosensor that detects carbon dioxide and acidosis to elicit fear behavior".
The authors, Ziemann et al, were interested in a protein called Acid Sensing Ion Channel 1a (ASIC1a), which, as the name suggests, is acid-sensitive. Nerve cells expressing ASIC1a are activated when the fluid around them becomes more acidic.
One of the most common causes of acidosis (a fall in body pH) is carbon dioxide, CO2. Breathing is how we get rid of the CO2 produced by our bodies; if breathing is impaired, for example during suffocation, CO2 levels rise, and pH falls as CO2 is converted to carbonic acid in the bloodstream.
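To make the CO2-pH link concrete, here's a quick sketch (my illustration, not from the paper) using the standard Henderson-Hasselbalch approximation for arterial blood; the bicarbonate value is a typical textbook number, not a measured one:

```python
import math

def blood_ph(pco2_mmhg, bicarbonate_mm=24.0):
    """Henderson-Hasselbalch approximation for arterial blood.

    pH = pKa + log10([HCO3-] / (0.03 * pCO2)), with pKa ~ 6.1 for the
    carbonic acid system and 0.03 mM/mmHg the solubility of CO2 in plasma.
    """
    return 6.1 + math.log10(bicarbonate_mm / (0.03 * pco2_mmhg))

# Normal arterial pCO2 (~40 mmHg) gives the familiar pH of ~7.4;
# doubling pCO2, as in impaired breathing, drops pH by about 0.3.
print(round(blood_ph(40), 2))  # ~7.4
print(round(blood_ph(80), 2))  # ~7.1
```

So even a modest rise in CO2 produces an acid shift of the kind ASIC1a could detect.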
In previous work, Ziemann et al found that the amygdala contains lots of ASIC1a. This is intriguing, because the amygdala is a brain region believed to be involved in fear, anxiety and panic, although it has other functions as well. It's long been known that breathing air with added CO2 can trigger anxiety and panic, especially in people vulnerable to panic attacks.
What's unclear is why this happens; various biological and psychological theories have been proposed. Ziemann et al set out to test the idea that ASIC1a in the amygdala mediates anxiety caused by CO2.
In a number of experiments they showed that mice genetically engineered to have no ASIC1a (knockouts) were resistant to the anxiety-causing effects of air containing 10% or 20% CO2. Also, unlike normal mice, the knockouts were happy to enter a box with high CO2 levels - normal mice hated it. Injections of a weakly acidic liquid directly into the amygdala caused anxiety in normal mice, but not in the knockouts.
Most interestingly, they found that knockout mice could be made to fear CO2 by giving them ASIC1a in the amygdala. Knockouts injected in the amygdala with a virus containing ASIC1a DNA, which caused their cells to start producing the protein, showed anxiety (freezing behaviour) when breathing CO2. But it only worked if the virus was injected into the amygdala, not nearby regions.
This is a nice series of experiments which shows convincingly that ASIC1a mediates acidosis-related anxiety, at least in mice. What's most interesting, however, is that it also seems to be involved in other kinds of anxiety and fear. The ASIC1a knockout mice were slightly less anxious in general; injections of an alkaline solution not only prevented CO2-related anxiety, but also reduced anxiety caused by other scary things, such as the smell of a cat.
The authors conclude by proposing that amygdala pH might be involved in fear more generally:
Thus, we speculate that when fear-evoking stimuli activate the amygdala, its pH may fall. For example, synaptic vesicles release protons, and intense neural activity is known to lower pH.
But this is, as they say, speculation. The link between CO2, pH and panic attacks seems more solid. As the authors of another recent paper put it:
We propose that the shared characteristics of CO2/H+ sensing neurons overlap to a point where threatening disturbances in brain pH homeostasis, such as those produced by CO2 inhalations, elicit a primal emotion that can range from breathlessness to panic.
Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009). The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139(5), 1012-1021. DOI: 10.1016/j.cell.2009.10.029
Irving Kirsch, best known for that 2008 meta-analysis allegedly showing that "Prozac doesn't work", has hit the headlines again. This time it's a paper claiming that something does work. Actually Kirsch is only a minor author on the paper, by Kaptchuk et al: Placebos without Deception.

In essence, they asked whether a placebo treatment - a dummy pill with no active ingredients - works even if you know that it's a placebo. Conventional wisdom would say no, because the placebo effect is driven by the patient's belief in the effectiveness of the pill.

Kaptchuk et al took 80 patients with Irritable Bowel Syndrome (IBS) and recruited them into a trial of "a novel mind-body management study of IBS". Half of the patients got no treatment at all. The other half got sugar pills, after having been told, truthfully, that the pills contained no active drugs - but also having been told to expect improvement, in a 15 minute briefing session, on the grounds that

placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.

Guess what? The placebo group did better than the no treatment group, or at least they reported that they did (all the outcomes were subjective). The article has been much blogged about, and you should read those posts for a more detailed and in some cases skeptical examination, but really, this is entirely unsurprising and doesn't challenge the conventional wisdom about placebos.

The folks in this trial believed in the possibility that the pills would make them feel better. They just wouldn't have agreed to take part otherwise. And when those people got the treatment that they expected to work, they felt better. That's just the plain old placebo effect.
We already know that the placebo effect is very strong in IBS, a disease which is, at least in many cases, psychosomatic.

So the only really new result here is that there are people out there who'll believe that they'll experience improvement from sugar pills, if you give them a 15 minute briefing about the "mind-body self-healing" properties of those pills. That's an interesting addition to the record of human quirkiness, but it doesn't really tell us anything new about placebos.

Kaptchuk, T., Friedlander, E., Kelley, J., Sanchez, M., Kokkotou, E., Singer, J., Kowalczykowski, M., Miller, F., Kirsch, I., & Lembo, A. (2010). Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. PLoS ONE, 5(12). DOI: 10.1371/journal.pone.0015591
Have you ever wanted to know whether a mouse is in pain? Of course you have. And now you can, thanks to Langford et al's paper Coding of facial expressions of pain in the laboratory mouse.

It turns out that mice, just like people, display a distinctive "Ouch!" facial expression when they're suffering acute pain. It consists of narrowing of the eyes, bulging nose and cheeks, ears pulled back, and whiskers either pulled back or forwards.

With the help of a high-definition video camera and a little training, you can reliably and accurately tell how much pain a mouse is feeling. It works for most kinds of mouse pain, although it's not seen in either extremely brief or very long-term pain.

Langford et al tried it out on mice with a certain genetic mutation, which causes severe migraines in humans. These mice displayed the pain face even in the absence of external painful stimuli, showing that they were suffering internally. A migraine drug was able to stop the pain.

Finally, lesions to a part of the brain called the anterior insula stopped mice from expressing their pain. This is exactly what happens in people as well, suggesting that our displays of suffering are an evolutionarily ancient mechanism. Of course this kind of study can't prove that animals consciously feel pain in the same way that we do, but I see no reason to doubt it: we feel pain as a result of neural activity, and mammals have exactly the same brain systems.

Langford, D., Bailey, A., Chanda, M., Clarke, S., Drummond, T., Echols, S., Glick, S., Ingrao, J., Klassen-Ross, T., LaCroix-Fralish, M., Matsumiya, L., Sorge, R., Sotocinal, S., Tabaka, J., Wong, D., van den Maagdenberg, A., Ferrari, M., Craig, K., & Mogil, J. (2010). Coding of facial expressions of pain in the laboratory mouse. Nature Methods, 7(6), 447-449. DOI: 10.1038/nmeth.1455
Transcranial Magnetic Stimulation (TMS) is a popular tool in neuroscience. A TMS kit is essentially a portable, powerful electromagnet, called a ‘coil’. Switching on the coil causes it to emit a magnetic pulse, and this magnetic field is strong enough to evoke electrical activity in the brain. So, by placing the TMS coil next to someone’s [...]
Duecker F, & Sack AT. (2013) Pre-stimulus sham TMS facilitates target detection. PloS one, 8(3). PMID: 23469232
Could quantum mechanics save the soul? In the light of 20th century physics, is free will plausible? Such has been the hope of some philosophers, scientists (and pretenders to those titles) – but neuroscientist Peter Clarke argues that it’s just not happening, in an interesting new paper: Neuroscience, quantum indeterminism and the Cartesian soul. Clarke […]
Clarke PG. (2013) Neuroscience, quantum indeterminism and the Cartesian soul. Brain and cognition, 84(1), 109-117. PMID: 24355546
Neuroskeptic readers will know that I'm a big fan of theories. Rather than just poking around (or scanning) the brain under different conditions and seeing what happens, it's always better to have a testable hypothesis.

I just found a 2007 paper by Israeli computational neuroscientists Niv et al that puts forward a very interesting theory about dopamine. Dopamine is a neurotransmitter, and dopamine cells are known to fire in phasic bursts - short volleys of spikes over millisecond timescales - in response to something which is either pleasurable in itself, or something that you've learned is associated with pleasure. Dopamine is therefore thought to be involved in learning what to do in order to get pleasurable rewards.

But baseline, tonic dopamine levels vary over longer periods as well. The function of this tonic dopamine firing, and its relationship, if any, to phasic dopamine signalling, is less clear. Niv et al's idea is that the tonic dopamine level represents the brain's estimate of the average availability of rewards in the environment, and that it therefore controls how "vigorously" we should do stuff.

A high reward availability means that, in general, there's lots of stuff going on, lots of potential gains to be made. So if you're not out there getting some reward, you're missing out. In economic terms, the opportunity cost of not acting, or acting slowly, is high - so you need to hurry up. On the other hand, if there's only minor rewards available, you might as well take things nice and slow, to conserve your energy. Niv et al present a simple mathematical model in which a hypothetical rat must decide how often to press a lever in order to get food, and show that it accounts for the data from animal learning experiments.

The distinction between phasic dopamine (a specific reward) vs. tonic dopamine (overall reward availability) is a bit like the distinction between fear vs. anxiety. Fear is what you feel when something scary, i.e. harmful, is right there in front of you. Anxiety is the sense that something harmful could be round the next corner.

This theory accounts for the fact that if you give someone a drug that increases dopamine levels, such as amphetamine, they become hyperactive - they do more stuff, faster, or at least try to. That's why they call it speed. This happens to animals too. Yet this hyperactivity starts almost immediately, which means that it can't be a product of learning.

It also rings true in human terms. The feeling that everything's incredibly important, and that everyday tasks are really exciting, is one of the main effects of amphetamine. Every speed addict will have a story about the time they stayed up all night cleaning every inch of their house or organizing their wardrobe. This can easily develop into the compulsive, pointless repetition of the same task over and over. People with bipolar disorder often report the same kind of thing during (hypo)mania.

What controls tonic dopamine levels? A really brilliantly elegant answer would be: phasic dopamine. Maybe every time phasic dopamine levels spike in response to a reward (or something which you've learned to associate with a reward), some of the dopamine gets left over. If there's lots of phasic dopamine firing, which suggests that the availability of rewards is high, the tonic dopamine levels rise.

Unfortunately, it's probably not that simple: signals from different parts of the brain seem to alter tonic and phasic dopamine firing largely independently, and on this account tonic dopamine would only increase after a good few rewards, not pre-emptively, which seems unlikely. The truth is, we don't know what sets the dopamine tone, and we don't really know what it does; but Niv et al's account is the most convincing I've come across.

Niv Y, Daw ND, Joel D, & Dayan P (2007). Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology, 191(3), 507-20. PMID: 17031711
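To see how the opportunity-cost logic works, here's a toy sketch of the idea (my simplification, not Niv et al's actual model; the cost parameters are made up): if responding with latency tau costs effort proportional to 1/tau, while waiting forfeits reward at the average rate, then the best latency shrinks as the environment gets richer.

```python
import math

def optimal_latency(vigor_cost=2.0, avg_reward_rate=0.5):
    """Toy version of the opportunity-cost idea.

    Acting with latency tau costs vigor_cost / tau (faster = more
    effortful), while waiting forfeits avg_reward_rate * tau of
    potential reward. Minimising the sum gives
    tau* = sqrt(vigor_cost / avg_reward_rate): the richer the
    environment (a higher average reward rate, the quantity tonic
    dopamine is proposed to track), the faster the optimal response.
    """
    return math.sqrt(vigor_cost / avg_reward_rate)

# A richer environment shortens the optimal response latency:
print(optimal_latency(avg_reward_rate=0.5))  # 2.0
print(optimal_latency(avg_reward_rate=2.0))  # 1.0
```

Nothing here depends on the specific numbers; the point is just that a higher estimate of overall reward availability rationally demands more vigour.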
The past decade has been a bad one for antidepressant manufacturers. Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drug to hit the shelves since 2000 has been agomelatine. There were a couple of others that were just minor variants on old molecules, but that's it.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development and by all accounts it's making good progress. It's now in Phase III trials, the last stage before approval. And a large clinical trial has just been published finding that it works. But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is kind of a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors and partially agonizes 5HT1B. None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to get Lu AA21004, at two different doses, 5mg or 10mg, or the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and as much as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so. The size of the effect was medium, with a benefit over-and-above placebo of about 5 points on the MADRS depression scale, which, considering that the baseline scores in this study averaged 34, is not huge, but it compares well to other antidepressant trials.

Now we come to the side effects, and this is the most important bit, as we'll see later.
The authors did not specifically probe for these; they just relied on spontaneous report, which tends to underestimate adverse events. Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market! Hang on.

The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement, to be honest: venlafaxine is quite notorious for causing side effects, especially at higher doses. I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs: as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck. The real competitor for this drug, according to Lundbeck, is escitalopram. But no-one wants to be in competition with themselves.

This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects.
The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know, and we're not going to find out any time soon.

If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, would be Lundbeck.

Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F (2011). A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The International Journal of Neuropsychopharmacology, 1-12. PMID: 21767441
A few months ago, I asked Why Do We Sleep? That post was about sleep researcher Jerry Siegel, who argues that sleep evolved as a state of "adaptive inactivity". According to this idea, animals sleep because otherwise we'd always be active, and constant activity is a waste of energy. Sleeping for a proportion of the time conserves calories, and also keeps us safe from nocturnal predators etc.

Siegel's theory is what we might call minimalist. That's in contrast to other hypotheses which claim that sleep serves some kind of vital restorative biological function, or that it's important for memory formation, or whatever. It's a hotly debated topic.

But Siegel wasn't the first sleep minimalist. J. Allan Hobson and Robert McCarley created a storm in 1977 with The Brain As A Dream State Generator; I read somewhere that it provoked more letters to the Editor in the American Journal of Psychiatry than any other paper in that journal.

Hobson and McCarley's article was so controversial because they argued that dreams are essentially side-effects of brain activation. This was a direct attack on the Freudian view that we dream as a result of our subconscious desires, and that dreams have hidden meanings. Freudian psychoanalysis was incredibly influential in American psychiatry in the 1970s.

Freud believed that dreams exist to fulfil our fantasies, often though not always sexual ones. We dream about what we'd like to do - except we don't dream about it directly, because we find much of our desires shameful, so our minds disguise the wishes behind layers of metaphor etc. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..." Interpreting the symbolism of dreams can therefore shed light on the depths of the mind.

Hobson and McCarley argued that during REM sleep, our brains are active in a similar way to when we are awake; many of the systems responsible for alertness are switched on, unlike during deep, dreamless, non-REM sleep.
But of course during REM there is no sensory input (our eyes are closed), and also, we are paralysed: an inhibitory pathway blocks the spinal cord, preventing us from moving, except for our eyes - hence why it's Rapid Eye Movement sleep.

Dreams are simply a result of the "awake-like" forebrain - the "higher" perceptual, cognitive and emotional areas - trying to make sense of the input that it's receiving as a result of waves of activation arising from the brainstem. A dream is the forebrain's "best guess" at making a meaningful story out of the assortment of sensations (mostly visual) and concepts activated by these periodic waves. There's no attempt to disguise the shameful parts; the bizarreness of dreams simply reflects the fact that the input is pretty much random.

Hobson and McCarley proposed a complex physiological model in which the activation is driven by the giant cells of the pontine tegmentum. These cells fire in bursts according to a genetically hard-wired rhythm of excitation and inhibition. The details of this model are rather less important than the fact that it reduces dreaming to a neurological side effect. This doesn't mean that the REM state has no function; maybe it does, but whatever it is, the subjective experience of dreams serves no purpose.

A lot has changed since 1977, but Hobson seems to have stuck by the basic tenets of this theory. A good recent review came out in Nature Reviews Neuroscience last year, REM sleep and dreaming. In this paper Hobson proposes that the function of REM sleep is to act as a kind of training system for the developing brain. The internally-generated signals that arise from the brainstem (now called PGO waves) during REM help the forebrain to learn how to process information. This explains why we spend more time in REM early in life; newborns have much more REM than adults; in the womb, we are in REM almost all the time.
However, these are not dreams per se, because children don't start reporting experiencing dreams until about the age of 5.

Protoconscious REM sleep could therefore provide a virtual world model, complete with an emergent imaginary agent (the protoself) that moves (via fixed action patterns) through a fictive space (the internally engendered environment) and experiences strong emotion as it does so.

This is a fascinating hypothesis, although very difficult to test, and it raises the question of how useful "training" based on random, meaningless input is.

While Hobson's theory is minimalist in that it reduces dreams, at any rate in adulthood, to the status of a by-product, it doesn't leave them uninteresting. Freudian dream re-interpretation is probably ruled out ("That train represents your penis and that cat was your mother", etc.), but if dreams are our brains processing random noise, then they still provide an insight into how our brains process information. Dreams are our brains working away on their own, with the real world temporarily removed.

Of course most dreams are not going to give up life-changing insights. A few months back I had a dream which was essentially a scene-for-scene replay of the horror movie Cloverfield. It was a good dream, scarier than the movie itself, because I didn't know it was a movie. But I think all it tells me is that I was paying attention when I watched Cloverfield.

On the other hand, I have had several dreams that have made me realize important things about myself and my situation at the time. By paying attention to your dreams, you can work out how you really think, and feel, about things, what your preconceptions and preoccupations are. Sometimes.

Hobson JA, & McCarley RW (1977). The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134(12), 1335-48. PMID: 21570

Hobson, J. (2009). REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience, 10(11), 803-813. DOI: 10.1038/nrn2716
What if there was a drug that didn't just affect the levels of chemicals in your brain, but turned off genes in your brain? That possibility - either exciting or sinister depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs: it selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain. Note that this isn't genetic modification as such. The gene wasn't deleted, it was just silenced, temporarily one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans.
Intranasal administration, however, is popular and easy. So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of selected neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to only affect certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove to be safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92
Breaking news from the BBC:

Testosterone link to aggression 'all in the mind'. Work in Nature magazine suggests the mind can win over hormones... Testosterone induces anti-social behaviour in humans, but only because of our own prejudices about its effect rather than its biological activity, suggest the authors. The researchers, led by Ernst Fehr of the University of Zurich, Switzerland, said the results suggested a case of "mind over matter" with the brain overriding body chemistry. "Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour," they said.

Phew, that's a relief - for a minute back there I was worried we didn't have free will. But look a little closer at the study, and it turns out that all is not as it seems.

The experiment (Eisenegger et al) involved giving healthy women 0.5 mg testosterone, or placebo, in a randomized double-blind manner, and then getting them to take part in the "Ultimatum Game". This is a game for two players. One, the Proposer, is given some money, and then has to offer to give a certain proportion of it to the other player, the Receiver. If the Receiver accepts the offer, both players get the agreed-upon amount of money. If they reject it, however, no-one gets anything.

The Proposer is basically faced with the choice of making a "fair" offer, e.g. giving away 50%, or a greedy one, say offering 10% and keeping 90% for themselves. Receivers generally accept fair offers, but most people get annoyed or insulted by unfair ones, and reject them, even though this means they lose money (10% of the money is still more than 0%).

What happened? Testosterone affected behaviour. It had no effect on women playing the role of the Receivers, but the Proposers given testosterone made significantly fairer offers on average, compared to those given placebo.
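The payoff structure of the game is simple enough to write down directly (a sketch for illustration; the stakes here are arbitrary, not those used in the study):

```python
def ultimatum_round(pot, offer_fraction, receiver_accepts):
    """Payoffs for one round of the Ultimatum Game.

    The Proposer offers a fraction of the pot to the Receiver;
    if the Receiver rejects, both players get nothing.
    Returns (proposer_payoff, receiver_payoff).
    """
    if not receiver_accepts:
        return 0.0, 0.0
    offer = pot * offer_fraction
    return pot - offer, offer

# A fair offer, accepted:
print(ultimatum_round(10.0, 0.5, True))   # (5.0, 5.0)
# A greedy offer, rejected out of spite - both lose:
print(ultimatum_round(10.0, 0.1, False))  # (0.0, 0.0)
```

The second case is the interesting one: rejecting costs the Receiver money, which is why rejection of unfair offers is read as a social, not purely economic, response.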
That's not mind over matter, that's matter over mind - give someone a hormone and their behaviour changes.

The direction of the effect is quite interesting - if testosterone increased aggression, as popular belief has it, you might expect it to decrease fair offers. Or, you might not. I suppose it depends on your understanding of "aggression". For their part, Eisenegger et al interpret this finding as suggesting that testosterone doesn't increase aggression per se, but rather increases our motivation to achieve "status", which leads to Proposers making fairer offers, so as to appear nicer. Hmm. Maybe.

But where did the BBC get the whole "all in the mind" thing from? Well, after the testing was over, the authors asked the women whether they thought they had taken testosterone or placebo. The results showed that the women couldn't actually tell which they'd had - they were no more accurate than if they were guessing - but women who believed they'd got testosterone made more unfair offers than women who believed they got placebo. The size of this effect was bigger than the effect of testosterone.

Is that "mind over matter"? Do beliefs about testosterone exert a more powerful effect on behaviour than testosterone itself? Maybe they do, but these data don't tell us anything about that. The women's beliefs weren't manipulated in any way in this trial, so as an experiment it couldn't investigate belief effects. In order to show that belief alters behaviour, you'd need to control beliefs. You could randomly assign some subjects to be told they were taking testosterone, and compare them to others told they were on placebo, say.

This study didn't do anything like that. Beliefs about testosterone were only correlated with behaviour, and unless someone's changed the rules recently, correlation isn't causation. It's like finding that people with brown skin are more likely to be Hindus than people with white skin, and concluding that belief in Brahma alters pigmentation.
It could even be that the behaviour drove the belief, because subjects were quizzed about their testosterone status after the Ultimatum Game - maybe women who, for whatever reason, behaved selfishly, decided that this meant they had taken testosterone!

Overall, this study provides quite interesting data about hormonal effects on behaviour, but tells us nothing about the effects of beliefs about hormones. On that issue, the way the media have covered this experiment is rather more informative than the experiment itself.
Eisenegger, C., Naef, M., Snozzi, R., Heinrichs, M., & Fehr, E. (2009) Prejudice and truth about the effect of testosterone on human bargaining behaviour. Nature. DOI: 10.1038/nature08711
Capitalists beware. No less a journal than Nature has just published a paper proving conclusively that the human brain is a Communist, and that it's plotting the overthrow of the bourgeois order and its replacement by the revolutionary Dictatorship of the Proletariat even as we speak.

Kind of. The article, Neural evidence for inequality-averse social preferences, doesn't mention the C word, but it does claim to have found evidence that people's brains display more egalitarianism than people themselves admit to.

Tricomi et al took 20 pairs of men. At the start of the study, both men got a $30 payment, but one member of each pair was then randomly chosen to get a $50 bonus. Thus, one guy was "rich", while the other was "poor". Both men then had fMRI scans, during which they were offered various sums of money and saw their partner being offered money too. They rated how "appealing" these money transfers were on a 10 point scale.

What happened? Unsurprisingly both "rich" and "poor" said that they were pleased at the prospect of getting more cash for themselves, the poor somewhat more so, but people also had opinions about payments to the other guy:

the low-pay group disliked falling farther behind the high-pay group (‘disadvantageous inequality aversion’), because they rated positive transfers to the high-pay participants negatively, even though these transfers had no effect on their own earnings. Conversely, the high-pay group seemed to value transfers [to the poor person] that closed the gap between their earnings and those of the low-pay group (‘advantageous inequality aversion’)

What about the brain? When people received money for themselves, activity in the ventromedial prefrontal cortex (vmPFC) and the ventral striatum correlated with the size of their gain.

However, when presented with a payment to the other person, these areas seemed to be rather egalitarian. Activity rose in rich people when their poor colleagues got money.
In fact, it was greater in that case than when they got money themselves, which means the "rich" people's neural activity was more egalitarian than their subjective ratings were. Whereas in "poor" people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.

The authors conclude that this

indicates that basic reward structures in the brain may reflect even stronger equity considerations than is necessarily expressed or acted on at the behavioural level... Our results provide direct neurobiological evidence in support of the existence of inequality-averse social preferences in the human brain.

Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience - in this case, that activity in the vmPFC and the ventral striatum indicates reward, i.e. pleasure or liking - and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea, that "dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality."

This is known as reverse inference, i.e. inference from data about the brain to theories about the mind. It's very common in neuroimaging papers - we've all done it - but it is problematic. In this case, the problem is that the argument relies on the idea that activity in the vmPFC and ventral striatum is evidence for liking.

But while there's certainly plenty of evidence that these areas are activated by reward, and the authors confirmed that activity here correlated with monetary gain, that doesn't mean that they only respond to reward. They could also respond to other things.
For example, there's evidence that the vmPFC is also activated by looking at angry and sad faces.

Or to put it another way: seeing someone you find attractive makes your pupils dilate. If you were to be confronted by a lion, your pupils would dilate. Fortunately, that doesn't mean you find lions attractive - because fear also causes pupil dilation.

So while Tricomi et al argue that people, or brains, like equality, on the basis of these results, I remain to be fully convinced. As Russell Poldrack noted in 2006:

caution should be exercised in the use of reverse inference... In my opinion, reverse inference should be viewed as another tool (albeit an imperfect one) with which to advance our understanding of the mind and brain. In particular, reverse inferences can suggest novel hypotheses that can then be tested in subsequent experiments.
Tricomi E, Rangel A, Camerer CF, & O'Doherty JP. (2010) Neural evidence for inequality-averse social preferences. Nature, 463(7284), 1089-91. PMID: 20182511
There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays.

But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes, and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2–8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know. Friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best? Broadly speaking there are two extreme positions:

1. Research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So, we should ignore incidental findings and pretend the scan never happened, because for all clinical purposes, it didn't.

2. You have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.

Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain.
On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening?

Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University's neuroscience community.

Suspect scans could be uploaded, in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion. Email notifications kept everyone up to date with the progress of each scan.

This seems like a very good idea, partly because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use this system.

Out of about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign...
except one, which turned out to be potentially very serious - suspected hydrocephalus, increased fluid pressure in the brain, which prompted an urgent referral to hospital for further tests.

There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.
Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V. (2011) A system for addressing incidental findings in neuroimaging research. NeuroImage. PMID: 21224007
The philosophical zombie, or p-zombie, is a hypothetical creature which is indistinguishable from a normal human, except that it has no conscious experience. Whether a p-zombie could exist, and whether it even makes sense to ask that question, are popular dinner-table topics of conversation amongst philosophers of mind. A new case report from Swiss neurologists […]
Carota A, & Calabrese P. (2013) The achromatic 'philosophical zombie', a syndrome of cerebral achromatopsia with color anopsognosia. Case reports in neurology, 5(1), 98-103. PMID: 23687498
"Prevention is better than cure", so they say. And in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art.

Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.

Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan anyway.

But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all, and there have been only a few very small trials looking at whether treatments work for that - but that's another story.

Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS), which was run at a number of early detection clinics in Britain and Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromal:

Referral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact.
Knowledge about early warning signs (eg, concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.

245 people consented to take part in the study and met the inclusion criteria, meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short-lived or mild symptoms a bit like those seen in schizophrenia, i.e.

COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...

UHR: unusual thought content/delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS), i.e. hallucinations, delusions, or formal thought disorders that occurred and resolved spontaneously within 1 week...

Then they followed up the 245 kids for 18 months and saw what happened to them.

What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e. depression or bipolar disorder, with psychotic features; and the rest mostly had psychotic episodes too short to be classed as schizophrenia. 37 people is 19% of the 183 for whom full 18-month data was available; the others dropped out of the study, or went missing for some reason.

Is 19% high or low?
Well, it's much higher than the rate you'd see in randomly selected people, because the risk of getting schizophrenia is less than 1% lifetime and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing.

On the other hand, 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18 month period was over.

The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. As this graph shows, the rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering, the Prognostic Index is [1.571 x SIPS-Positive score 16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)

However, they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.

So overall, we can predict psychosis, a bit, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome", i.e. the prodrome; it's not currently a disorder in DSM-IV.
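For what it's worth, the Prognostic Index is just a weighted sum, so it's easy to sketch in code. The argument names below are my own, `sips_positive` stands in for the "SIPS-Positive score 16" term exactly as it's printed in the formula, and the example scores are invented - this is an illustration of the arithmetic, not a usable clinical tool:

```python
def prognostic_index(sips_positive, bizarre_thinking, sleep_disturbances,
                     spd, highest_gaf_m_past_year, years_of_education):
    """Weighted sum from Ruhrmann et al's prediction model.

    Each coefficient comes from the published formula; higher
    values are supposed to indicate higher risk of psychosis.
    """
    return (1.571 * sips_positive
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbances
            + 1.037 * spd
            + 0.033 * (highest_gaf_m_past_year - 34.64)
            + 0.250 * (years_of_education - 12.52))

# Entirely made-up scores, purely to show the calculation:
print(round(prognostic_index(1, 2, 1, 1, 60, 12), 3))  # 5.838
```

Note the shape of the formula: symptom scores push the index up, while the last two terms are centred on sample means (34.64 and 12.52), so above-average functioning or education shifts the index relative to that baseline.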
This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand, though, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.
Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A.... (2010) Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206
According to the holonomic brain theory,

Cognitive function is guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials.

Well, I don't know about that, but a group of neuroscientists have just reported on using holograms as a tool for studying brain function: Three-dimensional holographic photostimulation of the dendritic arbor.

A while ago, scientists worked out how to "cage" interesting compounds, such as neurotransmitters, inside large, inert molecules. Then, by shining laser light of the right wavelength at the cages, it's possible to break them and release what's inside. This is very useful because it allows you to, say, selectively release neurotransmitters in particular places, just by pointing the laser at them.

There's a problem though. The uncaging doesn't happen immediately: the laser has to be pointing at the same point for a certain fixed time. This makes it very difficult to simultaneously stimulate many different points - which is, ideally, what you'd want to do, because in the real brain, everything happens at the same time: a given cell might be receiving input from dozens of others, and sending output to the same number.

One solution is to simply split and block the beam into several smaller, parallel beams. This allows you to hit several spots simultaneously, but it suffers from the problem that all the spots have to lie in the same 2D "slice". A bit like how, if you taped several laser pointers together, you could project a complex series of dots onto the wall, but not a 3D one.

This is where holograms come in. As everyone knows, holograms appear to be 3D images. By adopting the same kind of algorithms as are used in the construction of holograms, the authors were able to use a single laser to generate a series of stimulation spots within 3D space.
The image above shows that they were able to stimulate a single dendritic spine of a single neuron by uncaging glutamate.

Then they moved on to a real experiment: stimulating several branches of a single cell. What they found was that if you stimulate several branches simultaneously, the overall excitation produced is less than the sum of the individual stimulations.

The bottom graph shows this: the grey line is what you'd expect if it was simply summed. Interestingly, a drug called 4-AP, which is used to provoke epileptic seizures in experimental animals, blocked this effect and made cells respond in a linear fashion.

This is clearly an extremely promising method. I've previously blogged about how it's possible to visualize individual dendritic branches in the living brain using another laser-based method, two-photon microscopy. In theory, therefore, it might be possible to both see, and manipulate, the brain on a microscopic level, all without physically touching it at all.
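The sublinear-summation result is easy to picture with a toy saturating input-output curve - any concave function will do. This is entirely my own illustration of the general idea, not the authors' model; the function and parameter values are invented:

```python
def dendritic_response(total_input, vmax=1.0, k=1.0):
    """Toy saturating input-output curve: response = vmax * x / (x + k).

    Because the curve flattens out as input grows, responses to
    combined inputs fall short of the sum of responses to each
    input delivered alone.
    """
    return vmax * total_input / (total_input + k)

one_branch = dendritic_response(1.0)             # 0.5
two_branches_together = dendritic_response(2.0)  # ~0.667

# Sublinear: stimulating two branches at once gives less than
# the sum of two separate single-branch responses (0.5 + 0.5 = 1.0).
assert two_branches_together < 2 * one_branch
```

On this picture, a linearising drug like 4-AP would correspond to removing the saturation, so the curve becomes a straight line and combined responses simply add up.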
Yang S, Papagiakoumou E, Guillon M, de Sars V, Tang CM, & Emiliani V. (2011) Three-dimensional holographic photostimulation of the dendritic arbor. Journal of neural engineering, 8(4), 46002. PMID: 21623008
Antidepressant sales are rising in most Western countries, and they have been for at least a decade. Recently, we learned that the proportion of Americans taking antidepressants in any given year nearly doubled from 1996 to 2005.

The situation has been thought to be similar in the UK. But a hot-off-the-press paper in the British Medical Journal reveals some surprising facts about the issue: Explaining the rise in antidepressant prescribing.

The authors examined medical records from 1.7 million British patients in primary care (General Practice, i.e. family doctors). They found that antidepressant sales rose strongly between 1993 and 2005, not because more people are taking these drugs, but entirely because of an increase in the duration of treatment amongst the antidepressant users. It's not that more people are taking them, it's that people are taking them for longer.

In fact, the number of people being diagnosed with depression and prescribed antidepressants has actually fallen over time. The rate of diagnosed depression remained steady from 1993 to about 2001, and then fell markedly, by about a third, up to 2005. This trend was seen in both men and women, but there were age differences. In 18-30 year olds, there was a gradual increase in diagnoses before the decrease. (Note that these graphs show the number of people getting their first ever diagnosis of depression in each year.)

The likelihood of being given antidepressants for a diagnosis of depression stayed roughly constant, at about 75-80% across the years. However, the average duration of treatment increased over time.

The change doesn't look like much, but remember that even a small change in the number of long-term users translates into a large effect on the total number of sales, because each long-term user takes a lot of pills.
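The back-of-the-envelope arithmetic makes the point. In the sketch below, the patient count is an invented round number; only the prescriptions-per-patient figures (2.8 per year in 1993, 5.6 in 2004) are the ones the authors report:

```python
# Invented, round patient count - only the per-patient figures
# (2.8 scripts/year in 1993, 5.6 in 2004) come from the study.
patients = 100_000

scripts_1993 = patients * 2.8  # total prescriptions in 1993
scripts_2004 = patients * 5.6  # total prescriptions in 2004

# With no change at all in the number of people treated,
# total prescribing still doubles:
print(scripts_2004 / scripts_1993)  # 2.0
```

So a doubling of sales is entirely compatible with a flat, or even falling, number of people being treated - which is exactly the study's finding.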
The authors conclude:

Antidepressant prescribing nearly doubled during the study period—the average number of prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004. ... the rise in antidepressant prescribing is mainly explained by small changes in the proportion of patients receiving long term treatment.

Wow. I didn't see that coming, I'll admit. A lot of people, myself included, had assumed that rising antidepressant use was caused by people becoming more willing to seek treatment for depression. Or maybe that doctors were becoming more eager to prescribe drugs. Others believed that rates of clinical depression were rising.

There's no evidence for either of these theories in this British data-set. The recent fall in clinical depression diagnoses, following an increase in young people over the course of the 1990s, is especially surprising. This conflicts with the only British population survey of mental health, the APMS. The APMS found that rates of depression and mixed anxiety/depression increased between 1993 and 2000 in most age groups, but least of all in the young, and changed little from 2000 to 2007. I trust this new data more, because population surveys almost certainly overestimate mental illness.

How does this result compare to elsewhere? In the USA, the average number of antidepressant prescriptions per patient per year rose from "5.60 in 1996 to 6.93 in 2005", according to a recent estimate. In this study, yearly "prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004." So there's a major trans-Atlantic difference. In Britain, the length of use increased greatly, while in the US it only rose slightly, but from a higher baseline.

Finally, why has this happened? We can only speculate. Maybe doctors have become more keen on long-term treatment to prevent depressive relapse. Or maybe users have become more willing to take antidepressants long-term.
Modern drugs generally have milder side effects than older ones, so this makes sense, although some people would say that this is just further proof that modern antidepressants are "addictive"...
Moore M, Yuen HM, Dunn N, Mullee MA, Maskell J, & Kendrick T. (2009) Explaining the rise in antidepressant prescribing: a descriptive study using the general practice research database. BMJ (Clinical research ed.). PMID: 19833707
Last month I wrote about how electrical stimulation of the hippocampus causes temporary amnesia - Zapping Memories Away.

Now Toronto neurologists Laxton et al have tried to use deep brain stimulation (DBS) to improve memory in people with Alzheimer's disease. Progressive loss of memory is the best-known symptom of this disorder, and while some drugs are available, they provide partial relief at best.

This study stems from a chance discovery by the same Toronto group. In 2008, they reported that stimulation of the hypothalamus caused vivid memory recollections in a 50 year old man. In that case, the effect was entirely unintended and unexpected. The patient was being given DBS to try to curb his appetite (he weighed 420 pounds). The hypothalamus is involved in regulating appetite, not memory - but the fornix, a nerve bundle that passes through that area, is. It's the main pathway connecting the hippocampus to the rest of the brain, and the hippocampus is vital for memory.

In this new study, Laxton et al implanted electrodes to stimulate the fornix in 6 patients with mild (early-stage) Alzheimer's. What happened? The results, unfortunately, were quite messy. On average, the patients' symptoms got worse over the course of the year. Alzheimer's is a progressive degenerative disease, so this is what you'd expect to happen without treatment. The authors say that the decline was a bit slower than you'd expect in these kinds of patients, but to be honest, it's impossible to tell because there was no control group.

However, two patients did show memory improvements, and these were the same two who reported vivid recollections when the electrodes were first implanted (similar to the original obese guy):

Two of the 6 patients reported stimulation induced experiential phenomena. Patient 2 reported having the sensation of being in her garden, tending to the plants on a sunny day...
Patient 4 reported having the memory of being fishing on a boat on a wavy blue colored lake with his sons and catching a large green and white fish. On later questioning in both patients, these events were autobiographical, had actually occurred in the past, and were accurately reported according to the patient’s spouse.

Also, the stimulation caused brain activation, generally switching "on" the areas that are turned "off" in Alzheimer's, and this lasted for a year (the length of the study so far). And there were no major side-effects. That's all good.

Overall, these results are extremely interesting, but we don't know how well the treatment really works, and we won't know until someone does a randomized controlled trial with a longer follow-up period; something which is, unfortunately, true of a lot of the latest DBS studies.

Link: The Neurocritic on the original 2008 paper.
Laxton AW, Tang-Wai DF, McAndrews MP, Zumsteg D, Wennberg R, Keren R, Wherrett J, Naglie G, Hamani C, Smith GS.... (2010) A phase I trial of deep brain stimulation of memory circuits in Alzheimer's disease. Annals of neurology. PMID: 20687206
Schizophrenia is generally thought of as the "most genetic" of all psychiatric disorders, and in the past 10 years there have been heroic efforts to find the genes responsible for it, with not much success so far.

A new study reminds us that there's more to it than genes alone: Social Risk or Genetic Liability for Psychosis? The authors decided to look at adopted children, because this is one of the best ways of disentangling genes and environment.

If you find that the children of people with schizophrenia are at an increased risk of schizophrenia (they are), that doesn't tell you whether the risk is due to genetics, or environment, because we share both with our parents. Only in adoption is the link between genes and environment broken.

Wicks et al looked at all of the kids born in Sweden and then adopted by another Swedish family, over several decades (births 1955-1984). To make sure genes and environment were independent, they excluded those who were adopted by their own relatives (i.e. grandparents), and those who lived with their biological parents between the ages of 1 and 15. This is the kind of study you can only do in Scandinavia, because only those countries have accessible national records of adoptions and mental illness...

What happened? Here's a little graph I whipped up.

Brighter colors are adoptees at "genetic risk", defined as those with at least one biological parent who was hospitalized for a psychotic illness (including schizophrenia but also bipolar disorder). The outcome measure was being hospitalized for a non-affective psychosis, meaning schizophrenia or similar conditions but not bipolar.

As you can see, rates are much higher in those with a genetic risk, but were also higher in those adopted into a less favorable environment. Parental unemployment was worst, followed by single parenthood, which was also quite bad. Living in an apartment as opposed to a house, however, had only a tiny effect.

Genetic and environmental risk also interacted.
If a biological parent was mentally ill and your adoptive parents were unemployed, that was really bad news.

But hang on. Adoption studies have been criticized because children don't get adopted at random (there's a story behind every adoption, and it's rarely a happy one), and also adopting families are not picked at random - you're only allowed to adopt if you can convince the authorities that you're going to be good parents.

So they also looked at the non-adopted population, i.e. everyone else in Sweden, over the same time period. The results were surprisingly similar. The hazard ratio (increased risk) in those with parental mental illness, but no adverse circumstances, was 4.5, much the same as in the adoption study, 4.7.

For environment, the ratio was 1.5 for unemployment, and slightly lower for the other two. This is a bit less than in the adoption study (2.0 for unemployment). And the two risks interacted, but much less than they did in the adoption sample.

However, one big difference was that the total lifetime rate of illness was 1.8% in the adoptees and just 0.8% in the non-adoptees, despite much higher rates of unemployment etc. in the latter. Unfortunately, the authors don't discuss this odd result. It could be that adopted children have a higher risk of psychosis for whatever reason. But it could also be an artefact: rates of adoption massively declined between 1955 and 1984, so most of the adoptees were born earlier, i.e. they're older on average. That gives them more time in which to become ill.

A few more random thoughts:

This was Sweden. Sweden is very rich and, compared to most other rich countries, also very egalitarian, with extremely high taxes and welfare spending. In other words, no-one in Sweden is really poor. So the effects of environment might be bigger in other countries.

On the other hand, this study may overestimate the risk due to environment, because it looked at hospitalizations, not illness per se.
Supposing that poorer people are more likely to be hospitalized, this could mean that the true effect of environment on illness is lower than it appears.

The outcome measure was hospitalization for "non-affective psychosis". Only 40% of these cases were diagnosed as "schizophrenia". The rest will have been some kind of similar illness which didn't meet the full criteria for schizophrenia (which are quite narrow; in particular, they require 6 months of symptoms).

Parental bipolar disorder was counted as a family history. This makes sense, because we know that bipolar disorder and schizophrenia often occur in the same families (and indeed they can be hard to tell apart; many people are diagnosed with both at different times).

Overall, though, this is a solid study, and it confirms that genes and environment are both relevant to psychosis. Unfortunately, almost all of the research money at the moment goes on genes, with the study of environmental factors being unfashionable.

Wicks S, Hjern A, & Dalman C (2010). Social Risk or Genetic Liability for Psychosis? A Study of Children Born in Sweden and Reared by Adoptive Parents. The American Journal of Psychiatry. PMID: 20686186
You've just finished doing some research using fMRI to measure brain activity. You designed the study, recruited the volunteers, and did all the scans. Phew. Is that it? Can you publish the findings yet?

Unfortunately, no. You still need to do the analysis, and this is often the trickiest stage. The raw data produced during an fMRI experiment are meaningless - in most cases, each scan will give you a few hundred almost-identical grey pictures of the person's brain. Making sense of them requires some complex statistical analysis.

The very first step is choosing which software to use. Just as some people swear by Firefox while others prefer Internet Explorer for browsing the web, neuroscientists have various options to choose from in terms of image analysis software. Everyone's got a favourite. In Britain, the most popular are FSL (developed at Oxford) and SPM (London), while in the USA BrainVoyager sees a lot of use.

These three all do pretty much the same thing, give or take a few minor technical differences, so which one you use ultimately makes little difference. But just as there's more than one way to skin a cat, there's more than one way to analyze a brain. A paper from Fusar-Poli et al compares the results you get with SPM to the results obtained using XBAM, a program which uses a quite different statistical approach.

Here's what happened, according to SPM, when 15 volunteers looked at pictures of faces expressing the emotion of fear, and their brain activity was compared to when they were just looking at a boring "X" on the screen (I think - either that, or it's compared to looking at neutral faces; the paper isn't clear, but given the size of the blobs I doubt it's the latter).

Various bits of the brain were more activated by the scared face pics, as you can see from the huge, fiery blobs. The activation is mostly at the back of the brain, in occipital cortex areas which deal with vision, which is as you'd expect.
The cerebellum was also strongly activated, which is a bit less expected.

Now, here's what happens if you analyze exactly the same data using XBAM, setting the statistical threshold at the same level (i.e. in theory being no more or less "strict"). You get the same visual system blobs, but you also see activation in a number of other areas. Or as Fusar-Poli et al put it:

Analysis using both programs revealed that during the processing of emotional faces, as compared to the baseline stimulus, there was an increased activation in the visual areas (occipital, fusiform and lingual gyri), in the cerebellum, in the parietal cortex [etc] ... Conversely, the temporal regions, insula and putamen were found to be activated using the XBAM analysis software only.

This raises two questions: why the difference, and which way is right?

The difference must be a product of the different methods used. SPM uses a technique called statistical parametric mapping (hence the name), based on the assumption of normality; FSL and BrainVoyager do too. XBAM, on the other hand, differs from more orthodox software in a number of ways. The most basic difference is that it uses non-parametric statistics, but this document lists no less than five major innovations:

"not to assume normality but to use permutation testing to construct the null distribution used to make inference about the probability of an "activation" under the null hypothesis."

"recognizing the existence of correlation in the residuals after fitting a statistical model to the data."

using "a mixed effects analysis of group level fMRI data by taking into account both intra and inter subject variances."

using "3D cluster level statistics based on cluster mass (the sum of all the statistical values in the cluster) rather than cluster area (number of voxels)."

using "a wavelet-based time series permutation approach that permitted the handling of complex noise processes in fMRI data rather than simple stationary autocorrelation."

Phew.
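To give a flavour of the first of those innovations, here's a minimal Python sketch of a permutation test on a single toy "voxel". This is purely illustrative - it is nothing like XBAM's actual wavelet-based resampling of fMRI time series - but it shows the core idea: build the null distribution empirically instead of assuming normality.

```python
import random

def permutation_test(task, rest, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means.

    Rather than assuming the data are normally distributed, we
    shuffle the condition labels many times and count how often a
    difference at least as extreme as the observed one arises by
    chance. That count, as a fraction, is the two-tailed p-value."""
    rng = random.Random(seed)
    observed = sum(task) / len(task) - sum(rest) / len(rest)
    pooled = list(task) + list(rest)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel the observations
        d = (sum(pooled[:len(task)]) / len(task)
             - sum(pooled[len(task):]) / len(rest))
        if abs(d) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# toy "voxel" signal, slightly higher during the task condition
task = [1.2, 1.5, 1.1, 1.4, 1.3, 1.6]
rest = [1.0, 0.9, 1.1, 1.0, 0.8, 1.0]
obs, p = permutation_test(task, rest)
```

A parametric approach (like SPM's) would instead compare the observed difference against a theoretical t-distribution; the two agree when the normality assumption holds, and can diverge when it doesn't.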
Which combination of these is responsible for the difference is impossible to say.

The biggest question, though, is: should we all be using XBAM? Is it "better" than SPM? This is where things get tricky. The truth is that there's no single right way to statistically analyze any data, let alone fMRI data. There are lots of wrong ways, but even if you avoid making any mistakes, there are still various options as to which statistical methods to use, and which method you choose depends on which assumptions you're making. XBAM rests on different assumptions from SPM.

Whether XBAM's assumptions are more appropriate than those of SPM is a difficult question. The people who wrote XBAM think so, and they're very smart people. But so are the people who wrote SPM. The point is, it's a very complex issue, the mathematical details of which go far beyond the understanding of most fMRI users (myself included).

My worry about this paper is that the average Joe Neuroscientist will decide that, because XBAM produces more activation than SPM, it must be "better". The authors are careful not to say this, but for fMRI researchers working in the publish-or-perish world of modern science, whose greatest fear is that they'll run an analysis and end up with no blobs at all, the temptation to think "the more blobs the merrier" is a powerful one.

Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O’Carroll, C., Atakan, Z., & Zuardi, A. (2010). Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027
1. Don't smoke.
2. See 1.

This is essentially what Simon Chapman and Ross MacKenzie suggest in a provocative PLoS Medicine paper, The Global Research Neglect of Unassisted Smoking Cessation: Causes and Consequences.

Their point is deceptively simple: there is lots of research looking at drugs and other treatments to help people quit smoking tobacco, but little attention is paid to people who quit without any help, despite the fact that the majority (up to 75%) of quitters do just that. This is good news for the pharmaceutical industry and others who sell smoking-cessation aids, but it's not clear that it's good for public health. As they put it:

despite the pharmaceutical industry's efforts to promote pharmacologically mediated cessation and numerous clinical trials demonstrating the efficacy of pharmacotherapy, the most common method used by most people who have successfully stopped smoking remains unassisted cessation ... Tobacco use, like other substance use, has become increasingly pathologised as a treatable condition as knowledge about the neurobiology, genetics, and pharmacology of addiction develops. Meanwhile, the massive decline in smoking that occurred before the advent of cessation treatment is often forgotten.

Debates over drugs, or other treatments, tend to revolve around the question of whether they work: is this drug better than placebo for this disorder? Chapman and MacKenzie point out that even to frame an issue in these terms is to concede a lot to the medical or pathological approach, which may not be a good idea. Before asking "do the drugs work?", we should ask "what have drugs got to do with this?"

Their argument is not that drugs never help people to quit; nor are they saying that tobacco isn't addictive, or that there is no neurobiology of addiction. Rather, they are saying that the biology is only one aspect of the story.
The importance of drugs (and other stop-smoking aids like CBT), and the difficulty of quitting, is systematically exaggerated by the medical literature:

Of the 662 papers [about "smoking cessation" published in 2007 or 2008], 511 were studies of cessation interventions. The other 118 were mainly studies of the prevalence of smoking cessation in whole or special populations. Of the intervention papers, 467 (91.4%) reported the effects of assisted cessation and 44 (8.6%) described the impact of unassisted cessation (Figure 1). ... Of the papers describing cessation trends, correlates, and predictors in populations, only 13 (11%) contained any data on unassisted cessation.

And although pharmaceutical industry funding of research plays a part in this, the fact that medical science tends to focus on treatments rather than on untreated individuals is unsurprising, since this is fundamentally how science works:

Most tobacco control research is undertaken by individuals trained in positivist scientific traditions. Hierarchies of evidence give experimental evidence more importance than observational evidence; meta-analyses of randomized controlled trials are given the most weight. Cessation studies that focus on discrete proximal variables such as specific cessation interventions provide "harder" causal evidence than those that focus on distal, complex, and interactive influences that coalesce across a smoker's lifetime to end in cessation.

Overall, it's an excellent paper and well worth a read in full (it's short, and it's open access). Of course, it is itself only one side of the story, and many in the tobacco control community will find it controversial. But I think Chapman and MacKenzie's is a point that needs to be made, and the point applies to other areas of medicine, especially, although not exclusively, to mental health.
This week, British social care charity Together told us that:

Six out of ten people have had at least one time in their life where they have found it difficult to cope mentally... stress (70%), anxiety (59%) and depression (55%) were the three most common difficulties encountered by the public

Which was still not quite as good as rivals Turning Point, who last month said:

Three quarters of people in the UK experience depression occasionally or regularly yet only a third seek help

These were opinion surveys, not real peer-reviewed science, but they might as well have been: the best available science says that if you go and ask people, 50-70% of the population report suffering at least one diagnosable DSM-IV mental disorder in their lifetime, and that the majority receive no treatment at all. This leads to papers in major journals, such as this one warning that "Depression Care in the United States" is "Too Little for Too Few".

But we don't know whether these tens of millions of cases of untreated "mental illness" should be treated, because there is basically no research looking at what happens to such people without treatment. On the other hand, the very fact that they aren't treated, and yet manage to hold down jobs, relationships and so forth, suggests that the situation is not so bad.

Of course, we must never forget that depression and anxiety can be crippling diseases, but fortunately, such cases are comparatively rare. By using the word "depression" to cover everything from waking-up-at-4-am-in-a-suicidal-panic melancholia to feeling-a-bit-miserable-because-something-bad-just-happened, it's easy to forget that while clinical depression is a serious matter, feeling a bit miserable is normal and resolves without any help 99% of the time - even though there are no published scientific studies proving this, because it's not the kind of thing scientists study.

Incidentally, this issue is a good reminder that there's no one big bad conspiracy behind everything.
With smoking, Big Tobacco find themselves in direct opposition to Big Pharma, like in From Dusk Till Dawn when the psychopaths fight the vampires. With depression, the people who are quickest to decry the widespread use of antidepressants often seem to be the ones who are most keen on the idea that depression is common and under-treated, perhaps because it allows them to recommend their own favorite psychotherapy. Big Pharma hands the baton to Big Couch in the race to medicalize life.

Chapman S, & MacKenzie R (2010). The global research neglect of unassisted smoking cessation: causes and consequences. PLoS Medicine, 7 (2). PMID: 20161722