According to Mormon author and fruit grower "Dr" Robert O. Young, pretty much all diseases are caused by our bodies being too acidic. By adopting an "alkaline lifestyle" to raise your internal pH (lower pH being more acidic), you'll find that
if you maintain the saliva and the urine pH, ideally at 7.2 or above, you will never get sick. That’s right you will NEVER get sick!
Wow. Important components of the alkaline lifestyle include eating plenty of the right sort of fruits and vegetables, ideally ones grown by Young, and taking plenty of nutritional supplements. These don't come cheap, but when the payoff is being free of all diseases, who could complain?
Young calls his amazing theory the Alkavorian Approach™, aka the New Biology™. Almost everyone else calls it quack medicine and pseudoscience. Because it is quack medicine and pseudoscience. But a paper just published in Cell suggests an interesting role for pH in, of all things, anxiety and panic - The amygdala is a chemosensor that detects carbon dioxide and acidosis to elicit fear behavior.
The authors, Ziemann et al, were interested in a protein called Acid Sensing Ion Channel 1a, ASIC1a, which as the name suggests, is acid-sensitive. Nerve cells expressing ASIC1a are activated when the fluid around them becomes more acidic.
One of the most common causes of acidosis (a fall in body pH) is carbon dioxide, CO2. Breathing is how we get rid of the CO2 produced by our bodies; if breathing is impaired, for example during suffocation, CO2 levels rise, and pH falls as CO2 is converted to carbonic acid in the bloodstream.
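For the quantitatively minded: the link between CO2 and blood pH is captured by the Henderson-Hasselbalch equation for the bicarbonate buffer system. Here's a minimal sketch in Python - the pKa of 6.1 and the CO2 solubility factor of 0.03 mmol/L per mmHg are standard textbook values, and the example numbers are typical physiological ones, not anything from the paper:

```python
import math

def blood_ph(paco2_mmhg, bicarbonate_mmol_l=24.0):
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = pKa + log10([HCO3-] / (0.03 * PaCO2)),
    where 0.03 mmol/L/mmHg converts CO2 partial pressure
    into dissolved CO2 concentration."""
    return 6.1 + math.log10(bicarbonate_mmol_l / (0.03 * paco2_mmhg))

print(round(blood_ph(40), 2))  # normal PaCO2 -> ~7.40
print(round(blood_ph(80), 2))  # acute CO2 retention -> ~7.10, i.e. acidosis
```

Doubling the CO2 partial pressure knocks about 0.3 off the pH, which is why impaired breathing acidifies the blood so quickly.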
In previous work, Ziemann et al found that the amygdala contains lots of ASIC1a. This is intriguing, because the amygdala is a brain region believed to be involved in fear, anxiety and panic, although it has other functions as well. It's long been known that breathing air with added CO2 can trigger anxiety and panic, especially in people vulnerable to panic attacks.
What's unclear is why this happens; various biological and psychological theories have been proposed. Ziemann et al set out to test the idea that ASIC1a in the amygdala mediates anxiety caused by CO2.
In a number of experiments they showed that mice genetically engineered to have no ASIC1a (knockouts) were resistant to the anxiety-causing effects of air containing 10% or 20% CO2. Also, unlike normal mice, the knockouts were happy to enter a box with high CO2 levels - normal mice hated it. Injections of a weakly acidic liquid directly into the amygdala caused anxiety in normal mice, but not in the knockouts.
Most interestingly, they found that knockout mice could be made to fear CO2 by giving them ASIC1a in the amygdala. Knockouts injected in the amygdala with a virus containing ASIC1a DNA, which caused their cells to start producing the protein, showed anxiety (freezing behaviour) when breathing CO2. But it only worked if the virus was injected into the amygdala, not nearby regions.
This is a nice series of experiments which shows convincingly that ASIC1a mediates acidosis-related anxiety, at least in mice. What's most interesting, however, is that it also seems to be involved in other kinds of anxiety and fear. The ASIC1a knockout mice were slightly less anxious in general; injections of an alkaline solution prevented CO2-related anxiety, but also reduced anxiety caused by other scary things, such as the smell of a cat.
The authors conclude by proposing that amygdala pH might be involved in fear more generally
Thus, we speculate that when fear-evoking stimuli activate the amygdala, its pH may fall. For example, synaptic vesicles release protons, and intense neural activity is known to lower pH.
But this is, as they say, speculation. The link between CO2, pH and panic attacks seems more solid. As the authors of another recent paper put it
We propose that the shared characteristics of CO2/H+ sensing neurons overlap to a point where threatening disturbances in brain pH homeostasis, such as those produced by CO2 inhalations, elicit a primal emotion that can range from breathlessness to panic.
Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009) The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139(5), 1012-1021. DOI: 10.1016/j.cell.2009.10.029
Capitalists beware. No less a journal than Nature has just published a paper proving conclusively that the human brain is a Communist, and that it's plotting the overthrow of the bourgeois order and its replacement by the revolutionary Dictatorship of the Proletariat even as we speak.

Kind of. The article, Neural evidence for inequality-averse social preferences, doesn't mention the C word, but it does claim to have found evidence that people's brains display more egalitarianism than people themselves admit to.

Tricomi et al took 20 pairs of men. At the start of the study, both men got a $30 payment, but one member of each pair was then randomly chosen to get a $50 bonus. Thus, one guy was "rich", while the other was "poor". Both men then had fMRI scans, during which they were offered various sums of money and saw their partner being offered money too. They rated how "appealing" these money transfers were on a 10 point scale.

What happened? Unsurprisingly, both "rich" and "poor" said that they were pleased at the prospect of getting more cash for themselves, the poor somewhat more so, but people also had opinions about payments to the other guy:

the low-pay group disliked falling farther behind the high-pay group (‘disadvantageous inequality aversion’), because they rated positive transfers to the high-pay participants negatively, even though these transfers had no effect on their own earnings. Conversely, the high-pay group seemed to value transfers [to the poor person] that closed the gap between their earnings and those of the low-pay group (‘advantageous inequality aversion’)

What about the brain? When people received money for themselves, activity in the ventromedial prefrontal cortex (vmPFC) and the ventral striatum correlated with the size of their gain.

However, when presented with a payment to the other person, these areas seemed to be rather egalitarian. Activity rose in rich people when their poor colleagues got money. In fact, it was greater in that case than when they got money themselves, which means the "rich" people's neural activity was more egalitarian than their subjective ratings were. Whereas in "poor" people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.

The authors conclude that this

indicates that basic reward structures in the brain may reflect even stronger equity considerations than is necessarily expressed or acted on at the behavioural level... Our results provide direct neurobiological evidence in support of the existence of inequality-averse social preferences in the human brain.

Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience - in this case, that activity in the vmPFC and the ventral striatum indicates reward, i.e. pleasure or liking - and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea that "dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality."

This is known as reverse inference, i.e. inference from data about the brain to theories about the mind. It's very common in neuroimaging papers - we've all done it - but it is problematic.
In this case, the problem is that the argument relies on the idea that activity in the vmPFC and ventral striatum is evidence for liking.

But while there's certainly plenty of evidence that these areas are activated by reward, and the authors confirmed that activity here correlated with monetary gain, that doesn't mean that they only respond to reward. They could also respond to other things. For example, there's evidence that the vmPFC is also activated by looking at angry and sad faces.

Or to put it another way: seeing someone you find attractive makes your pupils dilate. If you were to be confronted by a lion, your pupils would dilate. Fortunately, that doesn't mean you find lions attractive - because fear also causes pupil dilation.

So while Tricomi et al argue, on the basis of these results, that people, or brains, like equality, I have yet to be fully convinced. As Russell Poldrack noted in 2006:

caution should be exercised in the use of reverse inference... In my opinion, reverse inference should be viewed as another tool (albeit an imperfect one) with which to advance our understanding of the mind and brain. In particular, reverse inferences can suggest novel hypotheses that can then be tested in subsequent experiments.
Tricomi E, Rangel A, Camerer CF, & O'Doherty JP. (2010) Neural evidence for inequality-averse social preferences. Nature, 463(7284), 1089-91. PMID: 20182511
"Prevention is better than cure", so they say. And in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art.Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan anyway.But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all, and there have been only a few very small trials looking at whether treatments work for that - but that's another story.Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS) which was run at a number of early detection clinics in Britain and Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromalReferral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact. Knowledge about early warning signs (eg, concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.245 people consented to take part in the study and met the inclusion criteria meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short lived or mild symptoms a bit like those seen in schizophrenia i.e.COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...UHR: unusual thought content/ delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS) i.e. hallucinations, delusions, or formal thought disorders that occurred resolved spontaneously within 1 week...Then they followed up the 245 kids for 18 months and saw what happened to them.What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e depression or bipolar disorder, with psychotic features, and the rest mostly had psychotic episodes too short to be classed as schizophrenia. 
37 people is 19% of the 183 for whom full 18-month data was available; the others dropped out of the study, or went missing for some reason.

Is 19% high or low? Well, it's much higher than the rate you'd see in randomly selected people, because the lifetime risk of getting schizophrenia is less than 1%, and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing.

On the other hand, 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18-month period was over.

The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. As this graph shows, the rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering, the Prognostic Index is [1.571 x SIPS-Positive score > 16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)

However, they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.

So overall, we can predict psychosis, a bit, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome", i.e. the prodrome; it's not currently a disorder in DSM-IV. This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.
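For the curious, here's the Prognostic Index as a toy Python function. This is a sketch transcribed from the formula quoted above, nothing more: the variable names are mine, I'm reading the first term as a yes/no indicator for a SIPS positive score above 16, and you certainly shouldn't use it for anything serious:

```python
def prognostic_index(sips_positive_score, bizarre_thinking, sleep_disturbances,
                     spd_score, highest_gaf_m_past_year, years_of_education):
    """Toy transcription of the EPOS Prognostic Index formula.
    The first term is treated as binary: 1 if the SIPS positive
    symptom score exceeds 16, else 0 (my reading of the formula)."""
    return (1.571 * (sips_positive_score > 16)
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbances
            + 1.037 * spd_score
            + 0.033 * (highest_gaf_m_past_year - 34.64)
            + 0.250 * (years_of_education - 12.52))

# Entirely hypothetical example values, purely for illustration:
print(round(prognostic_index(18, 2, 3, 1, 45, 11), 2))
```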
Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A.... (2010) Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206
The past decade has been a bad one for antidepressant manufacturers. Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drug to hit the shelves since 2000 has been agomelatine. There were a couple of others that were just minor variants on old molecules, but that's it.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development, and by all accounts it's making good progress. It's now in Phase III trials, the last stage before approval. And a large clinical trial has just been published finding that it works.

But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is kind of a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors, and is a partial agonist at 5HT1B. None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to get Lu AA21004 at two different doses, 5 mg or 10 mg, the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and as well as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so.

The size of the effect was medium, with a benefit over-and-above placebo of about 5 points on the MADRS depression scale. Considering that the baseline scores in this study averaged 34, that's not huge, but it compares well to other antidepressant trials.

Now we come to the side effects, and this is the most important bit, as we'll see later. The authors did not specifically probe for these, they just relied on spontaneous reports, which tend to underestimate adverse events.

Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market!

Hang on. The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement, to be honest. Venlafaxine is quite notorious for causing side effects, especially at higher doses.

I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs: as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck.

The real competitor for this drug, according to Lundbeck, is escitalopram.
But no-one wants to be in competition with themselves.

This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects. The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know, and we're not going to find out any time soon.

If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, would be Lundbeck.
Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F. (2011) A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The International Journal of Neuropsychopharmacology, 1-12. PMID: 21767441
Neuroskeptic readers will know that I'm a big fan of theories. Rather than just poking around (or scanning) the brain under different conditions and seeing what happens, it's always better to have a testable hypothesis.

I just found a 2007 paper by Israeli computational neuroscientists Niv et al that puts forward a very interesting theory about dopamine. Dopamine is a neurotransmitter, and dopamine cells are known to fire in phasic bursts - short volleys of spikes over millisecond timescales - in response to something which is either pleasurable in itself, or something that you've learned is associated with pleasure. Dopamine is therefore thought to be involved in learning what to do in order to get pleasurable rewards.

But baseline, tonic dopamine levels vary over longer periods as well. The function of this tonic dopamine firing, and its relationship, if any, to phasic dopamine signalling, is less clear. Niv et al's idea is that the tonic dopamine level represents the brain's estimate of the average availability of rewards in the environment, and that it therefore controls how "vigorously" we should do stuff.

A high reward availability means that, in general, there's lots of stuff going on, lots of potential gains to be made. So if you're not out there getting some reward, you're missing out. In economic terms, the opportunity cost of not acting, or acting slowly, is high - so you need to hurry up. On the other hand, if there's only minor rewards available, you might as well take things nice and slow, to conserve your energy. Niv et al present a simple mathematical model in which a hypothetical rat must decide how often to press a lever in order to get food, and show that it accounts for the data from animal learning experiments.

The distinction between phasic dopamine (a specific reward) vs. tonic dopamine (overall reward availability) is a bit like the distinction between fear vs. anxiety. Fear is what you feel when something scary, i.e. harmful, is right there in front of you. Anxiety is the sense that something harmful could be round the next corner.

This theory accounts for the fact that if you give someone a drug that increases dopamine levels, such as amphetamine, they become hyperactive - they do more stuff, faster, or at least try to. That's why they call it speed. This happens to animals too. Yet this hyperactivity starts almost immediately, which means that it can't be a product of learning.

It also rings true in human terms. The feeling that everything's incredibly important, and that everyday tasks are really exciting, is one of the main effects of amphetamine. Every speed addict will have a story about the time they stayed up all night cleaning every inch of their house or organizing their wardrobe. This can easily develop into the compulsive, pointless repetition of the same task over and over. People with bipolar disorder often report the same kind of thing during (hypo)mania.

What controls tonic dopamine levels? A really brilliantly elegant answer would be: phasic dopamine. Maybe every time phasic dopamine levels spike in response to a reward (or something which you've learned to associate with a reward), some of the dopamine gets left over.
If there's lots of phasic dopamine firing, which suggests that the availability of rewards is high, the tonic dopamine levels rise.

Unfortunately, it's probably not that simple: signals from different parts of the brain seem to alter tonic and phasic dopamine firing largely independently, and this would mean that tonic dopamine would only increase after a good few rewards, not pre-emptively, which seems unlikely. The truth is, we don't know what sets the dopamine tone, and we don't really know what it does; but Niv et al's account is the most convincing I've come across.
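To make the opportunity-cost idea concrete, here's a toy version of the kind of trade-off Niv et al model - not their actual equations, just a sketch with made-up parameters. Acting faster costs more energy, but waiting costs you whatever rewards you're forgoing, so the optimal response latency falls as the average reward rate (the putative tonic dopamine signal) rises:

```python
import numpy as np

def optimal_latency(avg_reward_rate, vigor_cost=2.0):
    """Pick the lever-press latency tau that minimizes total cost per action:
    an energy cost that grows as you act faster (vigor_cost / tau), plus the
    opportunity cost of the time spent (avg_reward_rate * tau).
    Analytically, the minimum is at tau* = sqrt(vigor_cost / avg_reward_rate);
    here we just find it numerically on a grid."""
    taus = np.linspace(0.1, 10, 1000)
    cost = vigor_cost / taus + avg_reward_rate * taus
    return taus[np.argmin(cost)]

for r in [0.1, 0.5, 2.0]:  # increasing "reward availability"
    print(f"avg reward rate {r}: respond every ~{optimal_latency(r):.2f} s")
# Higher average reward rate -> shorter latencies, i.e. more vigorous responding.
```

The point of the sketch: crank up the background reward rate (as amphetamine arguably does, via tonic dopamine) and the model animal speeds up everything it does, with no learning required.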
Niv Y, Daw ND, Joel D, & Dayan P. (2007) Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology, 191(3), 507-20. PMID: 17031711
Absinthe is a spirit. It's very strong, and very green. But is it something more?

I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink: Padosch et al's Absinthism: a fictitious 19th century syndrome with present impact.

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour.

It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most, but not all, European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, due to being a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose.

But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison" - something that is easily forgotten.

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others, I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; that Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas contains especially concentrated alcohol, or hallucinogens, or even cocaine, maybe.

Slightly more serious is the theory that drinking different kinds of drinks instead of sticking to just one gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this, although in my drinking experience it's not true. But I'm not sure that it's completely bogus, as I have heard somewhat plausible explanations, i.e. drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream...
maybe.

Link: Not specifically related to this, but The Poison Review is an excellent blog I've recently discovered, all about poisons, toxins, drugs, and such fun stuff.
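To see why "the dose makes the poison" settles the thujone question, here's a back-of-the-envelope calculation. The 6 mg/L figure is from the paper; everything else (roughly 60% ABV for absinthe, a ballpark lethal ethanol dose, and a made-up target thujone dose) is my own assumption, purely to show the orders of magnitude:

```python
# Back-of-the-envelope: how much absinthe before thujone could matter?
THUJONE_MG_PER_L = 6.0       # measured in real absinthe (Padosch et al)
ABV = 0.60                   # assumed typical absinthe strength
ETHANOL_DENSITY_G_PER_ML = 0.789
LETHAL_ETHANOL_G = 400.0     # very rough ballpark for an adult (assumption)

# Suppose, generously, that tens of milligrams of thujone were needed
# for any pharmacological effect - say 60 mg, a made-up round number:
target_thujone_mg = 60.0

litres_needed = target_thujone_mg / THUJONE_MG_PER_L
ethanol_g = litres_needed * 1000 * ABV * ETHANOL_DENSITY_G_PER_ML
print(f"{litres_needed:.0f} L of absinthe -> {ethanol_g:.0f} g of ethanol")
print(f"That's ~{ethanol_g / LETHAL_ETHANOL_G:.1f}x a lethal ethanol dose.")
```

Ten litres of 60% spirit is several times over a fatal dose of alcohol, which is the paper's point: you'd be dead of the ethanol long before the thujone could touch you.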
Padosch SA, Lachenmeier DW, & Kröner LU. (2006) Absinthism: a fictitious 19th century syndrome with present impact. Substance abuse treatment, prevention, and policy, 1(1), 14. PMID: 16722551
Irving Kirsch, best known for that 2008 meta-analysis allegedly showing that "Prozac doesn't work", has hit the headlines again.

This time it's a paper claiming that something does work. Actually, Kirsch is only a minor author on the paper, by Kaptchuk et al: Placebos without Deception.

In essence, they asked whether a placebo treatment - a dummy pill with no active ingredients - works even if you know that it's a placebo. Conventional wisdom would say no, because the placebo effect is driven by the patient's belief in the effectiveness of the pill.

Kaptchuk et al took 80 patients with Irritable Bowel Syndrome (IBS) and recruited them into a trial of "a novel mind-body management study of IBS". Half of the patients got no treatment at all. The other half got sugar pills, after having been told, truthfully, that the pills contained no active drugs - but also having been told, in a 15 minute briefing session, to expect improvement, on the grounds that

placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.

Guess what? The placebo group did better than the no-treatment group, or at least they reported that they did (all the outcomes were subjective). The article has been much blogged about, and you should read those posts for a more detailed and in some cases skeptical examination, but really, this is entirely unsurprising and doesn't challenge the conventional wisdom about placebos.

The folks in this trial believed in the possibility that the pills would make them feel better. They just wouldn't have agreed to take part otherwise. And when those people got the treatment that they expected to work, they felt better. That's just the plain old placebo effect. We already know that the placebo effect is very strong in IBS, a disease which is, at least in many cases, psychosomatic.

So the only really new result here is that there are people out there who'll believe that they'll experience improvement from sugar pills, if you give them a 15 minute briefing about the "mind-body self-healing" properties of those pills. That's an interesting addition to the record of human quirkiness, but it doesn't really tell us anything new about placebos.
Kaptchuk, T., Friedlander, E., Kelley, J., Sanchez, M., Kokkotou, E., Singer, J., Kowalczykowski, M., Miller, F., Kirsch, I., & Lembo, A. (2010) Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. PLoS ONE, 5(12). DOI: 10.1371/journal.pone.0015591
It's a cliché, but it's true - "schizophrenia genes" are the Holy Grail of modern psychiatry.

Were they to be discovered, such genes would provide clues towards a better understanding of the biology of the disease, and that could lead directly to the development of better medications. It might also allow "genetic counselling" for parents concerned about their children's risk of schizophrenia.

Perhaps most importantly for psychiatrists, the definitive identification of genes for a mental illness would provide cast-iron proof that psychiatric disorders are "real diseases", and that biological psychiatry is a branch of medicine like any other. Schizophrenia, generally thought of as the most purely "biological" of all mental disorders, is the best bet.

With this in mind, let's look at three articles (1, 2, 3) published in Nature last month to much excited fanfare along the lines of 'Schizophrenia genes discovered!' All three were based on genome-wide association studies (GWAS). In a GWAS, you examine a huge number of genetic variants in the hope that some of them are associated with the disease or trait you're interested in. Several hundred thousand variants per study is standard at the moment. This is the genetic equivalent of trying to find the person responsible for a crime by fingerprinting everyone in town.

The Nature papers were based on three separate large GWAS projects - the SGENE-plus, the MGS, and the ISC. In total, there were over 8,000 schizophrenia patients and 19,000 healthy controls in these studies - enormous samples by the standards of human genetics research, and large enough that if there were any common genetic variants with even a modest effect on schizophrenia risk, they would probably have been found.

What did they find? On the face of it, not much. The MGS (1) "did not produce genome-wide significant findings... power was adequate in the European-ancestry sample to detect very common risk alleles (30–60% frequency) with genotypic relative risks of approximately 1.3... The results indicate that there are few or no single common loci with such large effects on risk." In the SGENE-plus (2), likewise, "None of the markers gave P values smaller than our genome-wide significance threshold".

The ISC study (3) did find one significantly associated variant in the Major Histocompatibility Complex (MHC) region on chromosome 6. The MHC is known to be involved in immune function. When the data from all three studies were pooled together, several variants in the same region were also found to be significantly associated with schizophrenia.

Somewhat confusingly, all three papers did this pooling, although they each did it in slightly different ways - the only area in which all three analyses found a result was the MHC region. The SGENE team's analysis, which was larger, also implicated two other, unrelated variants, which were not found in the other two papers.

To summarize, three very large studies found just one "schizophrenia gene", even after pooling their data. The variant, or possibly a cluster of related ones, is presumably involved in the immune system. Although the authors of the Nature papers made much of this finding, the main news here is that there is at most one common variant which raises the relative risk of schizophrenia by even just 20%. Given that the baseline risk of schizophrenia is about 1%, there is at most one common gene which raises your risk to more than 1.2%. That's it.

So, what does this mean? There are three possibilities.
First, it could be that schizophrenia genes are not "common". This possibility is getting a lot of attention at the moment, thanks to a report from a few months back, Walsh et al, suggesting that some cases of schizophrenia are caused by just one rare, high-impact mutation, but a different mutation in each case. In other words, each case of schizophrenia could be genetically almost unique. GWAS studies would be unable to detect such effects.

Second, there could be lots of common variants, each with an effect on risk so tiny that it wasn't found even in these three large projects. The only way to identify them would be to do even bigger studies. The ISC team's paper claims that this is true, on the basis of a graph: they took all of the variants which were more common in schizophrenics than in controls, even if they were only slightly more common, and totalled up the number of "slight risk" variants each person has. (A toy version of this kind of analysis is sketched below.)

The graph shows that these "slight risk" markers were more common in people with schizophrenia from two entirely separate studies, and were also more common in people with bipolar disorder, but were not associated with five medical illnesses like diabetes. This is an interesting result, but these variants must have such a tiny effect on risk that finding them would involve spending an awful lot of time (and money) for questionable benefit.

The third and final possibility is that "schizophrenia" is just less genetic than most psychiatrists think, because the true causes of the disorder are not genetic, and/or because "schizophrenia" is an umbrella term for many different diseases with different causes. This possibility is not talked about much in respectable circles, but if genetics doesn't start giving solid results soon, it may be.
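That "totalling up" step is what later came to be known as a polygenic score. Here's a minimal sketch of the idea in Python - entirely made-up genotypes and weights, just to show the mechanics, not the ISC's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 1000, 5000

# Made-up genotypes: count of "risk" alleles (0, 1 or 2) at each variant.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Pretend a discovery sample flagged these variants as very slightly
# more common in cases, with tiny per-variant weights (all assumptions):
weights = rng.normal(0, 0.01, size=n_variants)

# The polygenic score is just the weighted allele count per person.
scores = genotypes @ weights

# In a real analysis you'd then test whether cases in an independent
# sample have systematically higher scores than controls.
print(scores[:5])
```

The ISC's claim, in these terms, is that the score distribution shifts upwards in independent schizophrenia (and bipolar) samples, even though no individual variant reaches significance on its own.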
Purcell, S., & et Al. (2009) Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. DOI: 10.1038/nature08185
Shi, J., & et Al. (2009) Common variants on chromosome 6p22.1 are associated with schizophrenia. Nature. DOI: 10.1038/nature08192
There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays.

But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2% to 8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know - friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best? Broadly speaking, there are two extreme positions:

1. Research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So we should ignore incidental findings and pretend the scan never happened, because for all clinical purposes, it didn't.

2. You have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.

Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain. On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine, but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening?

Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University neuroscience community. Suspect scans could be uploaded in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion.
Email notifications kept everyone up to date with the progress of each scan.

This seems like a very good idea, partially because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use this system.

Out of about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign... except one, which turned out to be potentially very serious - suspected hydrocephalus, increased fluid pressure in the brain, which prompted an urgent referral to hospital for further tests.

There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.
Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V. (2011) A system for addressing incidental findings in neuroimaging research. NeuroImage. PMID: 21224007
Breaking news from the BBC:

Testosterone link to aggression 'all in the mind'... Work in Nature magazine suggests the mind can win over hormones... Testosterone induces anti-social behaviour in humans, but only because of our own prejudices about its effect rather than its biological activity, suggest the authors. The researchers, led by Ernst Fehr of the University of Zurich, Switzerland, said the results suggested a case of "mind over matter" with the brain overriding body chemistry. "Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour," they said.

Phew, that's a relief - for a minute back there I was worried we didn't have free will. But look a little closer at the study, and it turns out that all is not as it seems.

The experiment (Eisenegger et al) involved giving healthy women 0.5 mg testosterone, or placebo, in a randomized double-blind manner, and then getting them to take part in the "Ultimatum Game". This is a game for two players. One, the Proposer, is given some money, and then has to offer to give a certain proportion of it to the other player, the Receiver. If the Receiver accepts the offer, both players get the agreed-upon amounts of money. If they reject it, however, no-one gets anything.

The Proposer is basically faced with the choice of making a "fair" offer, e.g. giving away 50%, or a greedy one, say offering 10% and keeping 90% for themselves. Receivers generally accept fair offers, but most people get annoyed or insulted by unfair ones, and reject them, even though this means they lose money (10% of the money is still more than 0%).

What happened? Testosterone affected behaviour. It had no effect on women playing the role of the Receiver, but the Proposers given testosterone made significantly fairer offers on average, compared to those given placebo. That's not mind over matter, that's matter over mind - give someone a hormone and their behaviour changes.

The direction of the effect is quite interesting - if testosterone increased aggression, as popular belief has it, you might expect it to decrease fair offers. Or you might not; I suppose it depends on your understanding of "aggression". For their part, Eisenegger et al interpret this finding as suggesting that testosterone doesn't increase aggression per se, but rather increases our motivation to achieve "status", which leads to Proposers making fairer offers, so as to appear nicer. Hmm. Maybe.

But where did the BBC get the whole "all in the mind" thing from? Well, after the testing was over, the authors asked the women whether they thought they had taken testosterone or placebo. The results showed that the women couldn't actually tell which they'd had - they were no more accurate than if they'd been guessing - but women who believed they'd got testosterone made more unfair offers than women who believed they'd got placebo. The size of this effect was bigger than the effect of testosterone itself.

Is that "mind over matter"? Do beliefs about testosterone exert a more powerful effect on behaviour than testosterone itself? Maybe they do, but these data don't tell us anything about that. The women's beliefs weren't manipulated in any way in this trial, so as an experiment, it couldn't investigate belief effects. In order to show that belief alters behaviour, you'd need to control beliefs.
You could randomly assign some subjects to be told they were taking testosterone, and compare them to others told they were on placebo, say. This study didn't do anything like that. Beliefs about testosterone were only correlated with behaviour, and unless someone's changed the rules recently, correlation isn't causation.

It's like finding that people with brown skin are more likely to be Hindus than people with white skin, and concluding that belief in Brahma alters pigmentation. It could even be that the behaviour drove the belief, because subjects were quizzed about their testosterone status after the Ultimatum Game - maybe women who, for whatever reason, behaved selfishly decided that this meant they had taken testosterone!

Overall, this study provides quite interesting data about hormonal effects on behaviour, but tells us nothing about the effects of beliefs about hormones. On that issue, the way the media have covered this experiment is rather more informative than the experiment itself.
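For the record, the payoff logic of the Ultimatum Game described above fits in a few lines. This is just a toy illustration of the rules; the Receiver's rejection threshold is a made-up parameter of mine, not anything from the paper:

```python
def ultimatum_round(pot, offer_fraction, rejection_threshold=0.3):
    """One round of the Ultimatum Game.
    The Proposer offers a fraction of the pot; this toy Receiver
    rejects anything below a fixed fairness threshold, in which
    case both players walk away with nothing."""
    offer = pot * offer_fraction
    if offer_fraction >= rejection_threshold:
        return pot - offer, offer   # (proposer's share, receiver's share)
    return 0.0, 0.0                 # rejected: no-one gets anything

print(ultimatum_round(10, 0.5))  # fair offer, accepted: (5.0, 5.0)
print(ultimatum_round(10, 0.1))  # greedy offer, rejected: (0.0, 0.0)
```

The interesting psychology is entirely in that threshold: a "rational" Receiver would accept any non-zero offer, but real people pay to punish unfairness.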
Eisenegger, C., Naef, M., Snozzi, R., Heinrichs, M., & Fehr, E. (2009) Prejudice and truth about the effect of testosterone on human bargaining behaviour. Nature. DOI: 10.1038/nature08711
Schizophrenia is generally thought of as the "most genetic" of all psychiatric disorders, and in the past 10 years there have been heroic efforts to find the genes responsible for it, with not much success so far.

A new study reminds us that there's more to it than genes alone: Social Risk or Genetic Liability for Psychosis? The authors decided to look at adopted children, because this is one of the best ways of disentangling genes and environment. If you find that the children of people with schizophrenia are at an increased risk of schizophrenia (they are), that doesn't tell you whether the risk is due to genetics or environment, because we share both with our parents. Only in adoption is the link between genes and environment broken.

Wicks et al looked at all of the kids born in Sweden and then adopted by another Swedish family, over several decades (births 1955-1984). To make sure genes and environment were independent, they excluded those who were adopted by their own relatives (i.e. grandparents), and those who lived with their biological parents between the ages of 1 and 15. This is the kind of study you can only do in Scandinavia, because only those countries have accessible national records of adoptions and mental illness...

What happened? Here's a little graph I whipped up. Brighter colors are adoptees at "genetic risk", defined as those with at least one biological parent who was hospitalized for a psychotic illness (including schizophrenia but also bipolar disorder). The outcome measure was being hospitalized for a non-affective psychosis, meaning schizophrenia or similar conditions but not bipolar.

As you can see, rates are much higher in those with a genetic risk, but they were also higher in those adopted into a less favorable environment. Parental unemployment was worst, followed by single parenthood, which was also quite bad. Living in an apartment as opposed to a house, however, had only a tiny effect.

Genetic and environmental risk also interacted. If a biological parent was mentally ill and your adoptive parents were unemployed, that was really bad news.

But hang on. Adoption studies have been criticized because children don't get adopted at random (there's a story behind every adoption, and it's rarely a happy one), and adopting families are not picked at random either - you're only allowed to adopt if you can convince the authorities that you're going to be good parents.

So they also looked at the non-adopted population, i.e. everyone else in Sweden, over the same time period. The results were surprisingly similar. The hazard ratio (increased risk) in those with parental mental illness, but no adverse circumstances, was 4.5, almost the same as in the adoption study, 4.7.

For environment, the ratio was 1.5 for unemployment, and slightly lower for the other two. This is a bit less than in the adoption study (2.0 for unemployment). And the two risks interacted, but much less than they did in the adoption sample.

However, one big difference was that the total lifetime rate of illness was 1.8% in the adoptees and just 0.8% in the non-adoptees, despite much higher rates of unemployment etc. in the latter. Unfortunately, the authors don't discuss this odd result. It could be that adopted children have a higher risk of psychosis for whatever reason. But it could also be an artefact: rates of adoption massively declined between 1955 and 1984, so most of the adoptees were born earlier, i.e. they're older on average. That gives them more time in which to become ill.

A few more random thoughts. This was Sweden.
Sweden is very rich and, compared to most other rich countries, also very egalitarian, with extremely high taxes and welfare spending. In other words, no-one in Sweden is really poor. So the effects of environment might be bigger in other countries.

On the other hand, this study may overestimate the risk due to environment, because it looked at hospitalizations, not illness per se. Supposing that poorer people are more likely to get hospitalized, this could mean that the true effect of environment on illness is lower than it appears.

The outcome measure was hospitalization for "non-affective psychosis". Only 40% of this was diagnosed as "schizophrenia". The rest will have been some kind of similar illness which didn't meet the full criteria for schizophrenia (which are quite narrow; in particular, they require 6 months of symptoms).

Parental bipolar disorder was counted as a family history. This does make sense, because we know that bipolar disorder and schizophrenia often occur in the same families (and indeed they can be hard to tell apart; many people are diagnosed with both at different times).

Overall, though, this is a solid study, and it confirms that genes and environment are both relevant to psychosis. Unfortunately, almost all of the research money at the moment goes on genes, with studying environmental factors being unfashionable.
Wicks S, Hjern A, & Dalman C. (2010) Social Risk or Genetic Liability for Psychosis? A Study of Children Born in Sweden and Reared by Adoptive Parents. The American journal of psychiatry. PMID: 20686186
Last month I wrote about how electrical stimulation of the hippocampus causes temporary amnesia - Zapping Memories Away. Now Toronto neurologists Laxton et al have tried to use deep brain stimulation (DBS) to improve memory in people with Alzheimer's disease. Progressive loss of memory is the best-known symptom of this disorder, and while some drugs are available, they provide partial relief at best.

This study stems from a chance discovery by the same Toronto group. In 2008, they reported that stimulation of the hypothalamus caused vivid memory recollections in a 50 year old man. In that case, the effect was entirely unintended and unexpected. The patient was being given DBS to try to curb his appetite (he weighed 420 pounds). The hypothalamus is involved in regulating appetite, not memory - but the fornix, a nerve bundle that passes through that area, is: it's the main pathway connecting the hippocampus to the rest of the brain, and the hippocampus is vital for memory.

In this new study, Laxton et al implanted electrodes to stimulate the fornix in 6 patients with mild (early-stage) Alzheimer's. What happened? The results, unfortunately, were quite messy. On average, the patients' symptoms got worse over the course of the year. Alzheimer's is a progressive degenerative disease, so this is what you'd expect to happen without treatment. The authors say that the decline was a bit slower than you'd expect in these kinds of patients, but to be honest, it's impossible to tell, because there was no control group.

However, two patients did show memory improvements, and these were the same two who reported vivid recollections when the electrodes were first implanted (similar to the original obese guy):

Two of the 6 patients reported stimulation induced experiential phenomena. Patient 2 reported having the sensation of being in her garden, tending to the plants on a sunny day... Patient 4 reported having the memory of being fishing on a boat on a wavy blue colored lake with his sons and catching a large green and white fish. On later questioning in both patients, these events were autobiographical, had actually occurred in the past, and were accurately reported according to the patient’s spouse.

Also, the stimulation caused brain activation, generally switching "on" the areas that are turned "off" in Alzheimer's, and this lasted for a year (the length of the study so far). And there were no major side-effects. That's all good.

Overall, these results are extremely interesting, but we don't know how well the treatment really works, and we won't know until someone does a randomized controlled trial with a longer follow-up period - something which is, unfortunately, true of a lot of the latest DBS studies.

Link: The Neurocritic on the original 2008 paper.
Laxton AW, Tang-Wai DF, McAndrews MP, Zumsteg D, Wennberg R, Keren R, Wherrett J, Naglie G, Hamani C, Smith GS.... (2010) A phase I trial of deep brain stimulation of memory circuits in Alzheimer's disease. Annals of neurology. PMID: 20687206
According to the holonomic brain theory,

Cognitive function is guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials.

Well, I don't know about that, but a group of neuroscientists have just reported on using holograms as a tool for studying brain function: Three-dimensional holographic photostimulation of the dendritic arbor.

A while ago, scientists worked out how to "cage" interesting compounds, such as neurotransmitters, inside large, inert molecules. Then, by shining laser light of the right wavelength at the cages, it's possible to break them and release what's inside. This is very useful because it allows you to, say, selectively release neurotransmitters in particular places, just by pointing the laser at them.

There's a problem, though. The uncaging doesn't happen immediately: the laser has to be pointing at the same point for a certain fixed time. This makes it very difficult to simultaneously stimulate many different points - which is, ideally, what you'd want to do, because in the real brain, everything happens at the same time: a given cell might be receiving input from dozens of others, and sending output to the same number.

One solution is to simply split and block the beam into several smaller, parallel beams. This allows you to hit several spots simultaneously, but it suffers from the problem that all the spots have to lie in the same 2D "slice". A bit like how, if you taped several laser pointers together, you could project a complex series of dots onto the wall, but not a 3D one.

This is where holograms come in. As everyone knows, holograms appear to be 3D images. By adopting the same kind of algorithms as are used in the construction of holograms, the authors were able to use a single laser to generate a series of stimulation spots within 3D space. The image above shows that they were able to stimulate a single dendritic spine of a single neuron by uncaging glutamate.

Then they moved on to a real experiment: stimulating several branches of a single cell. What they found was that if you stimulate several branches simultaneously, the overall excitation produced is less than the sum of the individual stimulations. The bottom graph shows this: the grey line is what you'd expect if the responses simply summed. Interestingly, a drug called 4-AP, which is used to provoke epileptic seizures in experimental animals, blocked this effect and made cells respond in a linear fashion.

This is clearly an extremely promising method. I've previously blogged about how it's possible to visualize individual dendritic branches in the living brain using another laser-based method, two-photon microscopy. In theory, therefore, it might be possible to both see and manipulate the brain on a microscopic level, all without physically touching it at all.
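The post doesn't spell out which hologram-computing algorithm the authors used, but the standard workhorse for turning a desired spot pattern into a phase mask is the Gerchberg-Saxton algorithm. Here's a minimal 2D sketch in Python/NumPy with a made-up three-spot target - just the core idea, not the authors' actual code (the 3D, multi-plane version adds extra propagation steps on top of this):

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50):
    """Compute a phase-only hologram whose far-field (Fourier-plane)
    intensity approximates target_intensity, by bouncing between the
    hologram plane and the image plane, enforcing the amplitude
    constraint in each plane while keeping the phase."""
    field = np.exp(2j * np.pi * np.random.rand(*target_intensity.shape))
    target_amp = np.sqrt(target_intensity)
    for _ in range(n_iter):
        img = np.fft.fft2(field)                       # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        field = np.fft.ifft2(img)                      # propagate back
        field = np.exp(1j * np.angle(field))           # phase-only constraint
    return np.angle(field)

# Made-up target: three bright "stimulation spots" on a 64x64 grid.
target = np.zeros((64, 64))
target[16, 16] = target[32, 48] = target[50, 20] = 1.0

phase = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * phase)))**2
print(reconstruction[16, 16] / reconstruction.mean())  # spot >> background
```

The appeal for photostimulation is that the phase mask redirects essentially all of the laser's power into the chosen spots at once, rather than splitting or scanning the beam.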
Yang S, Papagiakoumou E, Guillon M, de Sars V, Tang CM, & Emiliani V (2011). Three-dimensional holographic photostimulation of the dendritic arbor. Journal of Neural Engineering, 8(4), 46002. PMID: 21623008
You've just finished doing some research using fMRI to measure brain activity. You designed the study, recruited the volunteers, and did all the scans. Phew. Is that it? Can you publish the findings yet?

Unfortunately, no. You still need to do the analysis, and this is often the trickiest stage. The raw data produced during an fMRI experiment are meaningless - in most cases, each scan will give you a few hundred almost-identical grey pictures of the person's brain. Making sense of them requires some complex statistical analysis.

The very first step is choosing which software to use. Just as some people swear by Firefox while others prefer Internet Explorer for browsing the web, neuroscientists have various options to choose from in terms of image analysis software. Everyone's got a favourite. In Britain, the most popular are FSL (developed at Oxford) and SPM (London), while in the USA BrainVoyager sees a lot of use.

These three all do pretty much the same thing, give or take a few minor technical differences, so which one you use ultimately makes little difference. But just as there's more than one way to skin a cat, there's more than one way to analyze a brain. A paper from Fusar-Poli et al compares the results you get with SPM to the results obtained using XBAM, a program which uses a quite different statistical approach.

Here's what happened, according to SPM, when 15 volunteers looked at pictures of faces expressing the emotion of fear, and their brain activity was compared to when they were just looking at a boring "X" on the screen (I think - either that, or it's compared to looking at neutral faces; the paper isn't clear, but given the size of the blobs I doubt it's that).

Various bits of the brain were more activated by the scared face pics, as you can see by the huge, fiery blobs. The activation is mostly at the back of the brain, in occipital cortex areas which deal with vision, which is as you'd expect. The cerebellum was also strongly activated, which is a bit less expected.

Now, here's what happens if you analyze exactly the same data using XBAM, setting the statistical threshold at the same level (i.e. in theory being no more or less "strict") -

You get the same visual system blobs, but you also see activation in a number of other areas. Or as Fusar-Poli et al put it -

Analysis using both programs revealed that during the processing of emotional faces, as compared to the baseline stimulus, there was an increased activation in the visual areas (occipital, fusiform and lingual gyri), in the cerebellum, in the parietal cortex [etc] ... Conversely, the temporal regions, insula and putamen were found to be activated using the XBAM analysis software only.

This raises two questions: why the difference, and which way is right?

The difference must be a product of the different methods used. SPM uses a technique called statistical parametric mapping (hence the name) based on the assumption of normality. FSL and BrainVoyager do too.
XBAM, on the other hand, differs from more orthodox software in a number of ways. The most basic difference is that it uses non-parametric statistics, but this document lists no fewer than five major innovations -

- "not to assume normality but to use permutation testing to construct the null distribution used to make inference about the probability of an "activation" under the null hypothesis."
- "recognizing the existence of correlation in the residuals after fitting a statistical model to the data."
- using "a mixed effects analysis of group level fMRI data by taking into account both intra and inter subject variances."
- using "3D cluster level statistics based on cluster mass (the sum of all the statistical values in the cluster) rather than cluster area (number of voxels)."
- using "a wavelet-based time series permutation approach that permitted the handling of complex noise processes in fMRI data rather than simple stationary autocorrelation."

Phew. Which combination of these is responsible for the difference is impossible to say. (If you're curious about the first and most fundamental one, permutation testing, there's a toy sketch of the idea at the end of this post.)

The biggest question, though, is: should we all be using XBAM? Is it "better" than SPM? This is where things get tricky. The truth is that there's no right way to statistically analyze any data, let alone fMRI data. There are lots of wrong ways, but even if you avoid making any mistakes, there are still various options as to which statistical methods to use, and which method you use depends on which assumptions you're making. XBAM rests on different assumptions from SPM.

Whether XBAM's assumptions are more appropriate than those of SPM is a difficult question. The people who wrote XBAM think so, and they're very smart people. But so are the people who wrote SPM. The point is, it's a very complex issue, the mathematical details of which go far beyond the understanding of most fMRI users (myself included).

My worry about this paper is that the average Joe Neuroscientist will decide that, because XBAM produces more activation than SPM, it must be "better". The authors are careful not to say this, but for fMRI researchers working in the publish-or-perish world of modern science, whose greatest fear is that they'll run an analysis and end up with no blobs at all, the temptation to think "the more blobs the merrier" is a powerful one.
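As promised, here's a minimal Python sketch of a two-sample permutation test - my own toy example, not XBAM's actual implementation (which permutes wavelet-transformed time series and works on cluster mass, not simple means). The core idea is the same: instead of assuming the data are normally distributed, you build the null distribution empirically by shuffling the group labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_perm=10000):
    """Two-sample permutation test on the difference of means.
    The null distribution is built by shuffling group labels,
    with no assumption of normality."""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)
        diff = shuffled[:len(a)].mean() - shuffled[len(a):].mean()
        if abs(diff) >= abs(observed):
            exceed += 1
    # Return the effect size and a two-tailed p-value
    return observed, exceed / n_perm
```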
Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O'Carroll, C., Atakan, Z., & Zuardi, A. (2010). Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027
When you drink alcohol and get drunk, are you getting drunk on alcohol?

Well, obviously, you might think, and so did I. But it turns out that some people claim that the alcohol (ethanol) in drinks isn't the only thing responsible for their effects - they say that acetaldehyde may be important, perhaps even more so.

South Korean researchers Kim et al report that it's acetaldehyde, rather than ethanol, which explains alcohol's immediate effects on cognitive and motor skills. During the metabolism of ethanol in the body, it's first converted into acetaldehyde, which then gets converted into acetate and excreted. Acetaldehyde build-up is popularly renowned as a cause of hangovers (although it's unclear how true this is), but could it also be involved in the acute effects?

Kim et al gave 24 male volunteers a range of doses of ethanol (in the form of vodka and orange juice). Half of them carried a genetic variant (ALDH2*2) which impairs the breakdown of acetaldehyde in the body. About 50% of people of East Asian origin, e.g. Koreans, carry this variant, which is rare in other parts of the world.

As expected, compared to the others, the ALDH2*2 carriers had much higher blood acetaldehyde levels after drinking alcohol, while there was little or no difference in their blood ethanol levels.

Interestingly, though, the ALDH2*2 group also showed much more impairment of cognitive and motor skills, such as reaction time or a simulated driving task. On most measures, the non-carriers showed very little effect of alcohol, while the carriers were strongly affected, especially at high doses. Blood acetaldehyde was more strongly correlated with poor performance than blood alcohol was.

So the authors concluded that:

Acetaldehyde might be more important than alcohol in determining the effects on human psychomotor function and skills.

So is acetaldehyde to blame when you spend half an hour trying and failing to unlock your front door after a hard night's drinking? Should we be breathalyzing drivers for it? Maybe: this is an interesting finding, and there's quite a lot of animal evidence that acetaldehyde has acute sedative, hypnotic and amnesic effects, amongst others.

Still, there's another explanation for these results: maybe the ALDH2*2 carriers just weren't paying much attention to the tasks, because they felt ill, as ALDH2*2 carriers generally do after drinking, as a result of acetaldehyde build-up. No-one's going to be operating at peak performance if they're suffering the notorious flush reaction or "Asian glow", which includes skin flushing, nausea, headache, and increased pulse...

Kim SW, Bae KY, Shin HY, Kim JM, Shin IS, Youn T, Kim J, Kim JK, & Yoon JS (2009). The Role of Acetaldehyde in Human Psychomotor Function: A Double-Blind Placebo-Controlled Crossover Study. Biological Psychiatry. PMID: 19914598
According to a new paper in the prestigious journal PNAS, High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China.

The earthquake, you'll remember, happened on 12th May last year in central China. Over 60,000 people died. The authors of this paper took 44 earthquake survivors, and 32 control volunteers who had not experienced the disaster.

The volunteers underwent a "resting state" fMRI scan; survivors were scanned between 13 and 25 days after the earthquake. Resting state fMRI is simply a scan conducted while lying in the scanner, not doing anything in particular. Previous work has shown that fMRI can be used to measure resting state neural activity in the form of low-frequency oscillations.

The authors found differences in the resting state low-frequency activity (ALFF) between the trauma survivors and the controls. In survivors, resting state activity was increased in several areas:

"The whole-brain analysis indicated that, vs. controls, survivors showed significantly increased ALFF in the left prefrontal cortex and the left precentral gyrus, extending medially to the left presupplementary motor area... [and] region of interest (ROI) analyses revealed significantly increased ALFF in bilateral insula and caudate and the left putamen in the survivor group..."

They also reported correlations between resting activity in some of these areas and self-reported anxiety and depression symptoms in the survivors.

Finally, survivors showed reduced functional connectivity between a wide range of areas ("a distributed network that included the bilateral amygdala, hippocampus, caudate, putamen, insula, anterior cingulate cortex, and cerebellum"). Functional connectivity analysis measures the correlation in activity across different areas of the brain - whether the areas tend to activate at the same time or not. (A toy sketch of what ALFF and functional connectivity actually compute appears at the end of this post.)

Now - what does all this mean? And does it help us understand the brain?

The fact that there are differences between the two groups is not very informative or surprising. "Resting state" neural activity presumably reflects whatever is going through a person's mind. Recent earthquake survivors are going to be thinking about rather different things compared to luckier people who didn't experience such trauma. It doesn't take a brain scan to tell you that, but that's all these scans really tell us.

But these weren't just any differences - they were particular differences in particular brain regions. Does that make knowing about them more interesting and useful?

Not as such, because we don't know what they represent, or what causes them. So living through an earthquake gives you "increased ALFF in the left prefrontal cortex" - but what does that mean? It could mean almost anything. The left prefrontal cortex is a big chunk of the brain, and its functions probably include most complex cognitive processes. Ditto for the other areas mentioned.

The authors link their findings to previous work with frankly vague statements such as "The increased regional activity and reduced functional connectivity in frontolimbic and striatal regions occurred in areas known to be important for emotion processing". But anatomically speaking, most of the brain is either "fronto-limbic" or "striatal", and almost everywhere is involved in "emotion processing" in one way or another.

So I don't think we understand the brain much better for reading this paper. Further work, building on these results, might give insights.
We might, say, learn that decreased connectivity between Regions X and Y is because trauma decreases serotonin levels, which prevents signals being communicated between these areas, which is why trauma victims can't use X to deliberately stop recalling traumatic memories, which is what Y does. I just made that up. But that's a theory which could be tested.

Much of today's neuroimaging research doesn't involve testable theories - it is merely the exploratory search for neural differences between two groups. Neuroimaging technology is powerful, and more advanced techniques are always being developed. What with resting state, functional connectivity, pattern-classification analysis, and other fancy methods, the scope for finding differences between groups is enormous and growing. So I'm being rather unfair in criticizing this paper; there are hundreds like it. I picked this one because it was published last week in a good journal.

Exploratory work can be useful as a starting point, but at least in my opinion, there is too much of it. If you want to understand the brain, as opposed to simply getting published papers to your name, you need a theory sooner or later. That's what science is about.
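For readers wondering what the two measures in this study actually compute, here's a minimal Python sketch of the standard definitions - my own illustration, not the authors' pipeline, which involves preprocessing, spatial normalization and voxelwise statistics on real scans.

```python
import numpy as np

def alff(ts, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuations: the mean spectral
    amplitude of a time series within the low-frequency band.
    ts: 1D voxel/region time series; tr: repetition time (seconds)."""
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amps = np.abs(np.fft.rfft(ts - ts.mean()))
    band = (freqs >= low) & (freqs <= high)
    return amps[band].mean()

def functional_connectivity(regions):
    """Pairwise Pearson correlations between regional time series:
    do the regions tend to activate at the same time or not?
    regions: array of shape (n_regions, n_timepoints)."""
    return np.corrcoef(regions)

# Hypothetical data: 5 regions, 200 timepoints, TR = 2 s
rng = np.random.default_rng(42)
data = rng.standard_normal((5, 200))
print([alff(ts, tr=2.0) for ts in data])
print(functional_connectivity(data))  # 5x5 correlation matrix
```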
Lui, S., Huang, X., Chen, L., Tang, H., Zhang, T., Li, X., Li, D., Kuang, W., Chan, R., Mechelli, A., Sweeney, J., & Gong, Q. (2009). High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0812751106
Deep-brain stimulation (DBS) is probably the most exciting emerging treatment in psychiatry. DBS is the use of high-frequency electrical current to alter the function of specific areas of the brain. Originally developed for Parkinson's disease, over the past five years DBS has been used experimentally in severe clinical depression, OCD, Tourette's syndrome, alcoholism, and more.

Reports of the effects have frequently been remarkable, but there have been few scientifically rigorous studies, and the number of psychiatric patients treated to date is just dozens. So the true usefulness of the technique is unclear. How DBS works is also a mystery. Even the most basic questions - such as whether high-frequency stimulation switches the brain "on" or "off" - are still being debated.

Recent data from rodents shed some important light on the issue: Antidepressant-Like Effects of Medial Prefrontal Cortex Deep Brain Stimulation in Rats. The authors took rats, and implanted DBS electrodes in the infralimbic cortex. This area is part of the vmPFC. It's believed to be the rat equivalent of the human region BA25, the subgenual cingulate cortex, which is the most common target for DBS in depression. The current settings (100 microamps, 130 Hz, 90 microsecond pulses) were chosen to be similar to the ones used in humans.

In a standard rat model of depression, the forced-swim test, infralimbic DBS exerted antidepressant-like effects. DBS was equally as effective as imipramine, a potent antidepressant, in terms of reducing "depression-like" behaviours, namely immobility.

This is not all that surprising. Almost everything which treats depression in humans also reduces immobility in this test (along with a few things which don't treat it). Much more interesting is what did and did not block the effects of DBS in these rats.

First off, DBS worked even when the rats' infralimbic cortex had been destroyed by the toxin ibotenic acid. This strongly suggests that DBS does not work simply by activating the infralimbic cortex, even though this is where the electrodes were implanted.

Crucially, infralimbic lesions did not have an antidepressant effect per se, which also rules out the theory that DBS works by inactivating this region. (Infralimbic lesions produced by other methods did have a mild antidepressant effect, but it was smaller than the effect of DBS. This may still be important, however.)

What did block the effects of DBS was the depletion of serotonin (5HT). Serotonin is known to its friends as the brain's "happy chemical", although it's a bit more complicated than that. Most antidepressants target serotonin. And rats whose serotonin systems had been lesioned got no benefit from DBS in this study.

So this suggests that DBS might work by affecting serotonin, and indeed, DBS turned out to greatly increase serotonin release, even in a distant part of the brain (the hippocampus). Interestingly, this lasted for nearly two hours after the electrodes were switched off.

Depletion of another neurotransmitter, noradrenaline, did not alter the effects of DBS.

Overall, it seems that infralimbic DBS works by increasing serotonin release, but that this is not because it activates or inactivates the infralimbic cortex itself. Rather, nearby structures must be involved. The most likely explanation is that DBS affects nearby white-matter tracts carrying signals between other areas of the brain; the infralimbic cortex might just happen to be "by the roadside".
Many researchers believe that this is how DBS works in humans, but this is the first hard evidence for it.

Of course, evidence from rats is never all that hard when it comes to human mental illness. We need to know whether the same thing is true in people. As luck would have it, you can temporarily reduce human serotonin levels with a technique called acute tryptophan depletion. This reverses the effects of antidepressants in many people. If this rat data is right, it should also temporarily reverse the benefits of DBS. Someone should do this experiment as soon as possible - I'd like to do it myself, but I'm British, and all the DBS research happens in America. Bah, humbug, old bean.

There are a couple of other things to note here. In other behavioural tests, infralimbic DBS also had antidepressant-like effects: it seemed to reduce anxiety, and it made rats more resistant to the stress of having electrical shocks (although only slightly). Finally, DBS in another region, the striatum, had no antidepressant effect at all. That's a bit odd, because DBS of the striatum does seem to treat depression in humans - but the part of the striatum targeted here, the caudate-putamen, is quite separate from the one targeted in human depression, the nucleus accumbens.

Hamani, C., Diwan, M., Macedo, C., Brandão, M., Shumake, J., Gonzalez-Lima, F., Raymond, R., Lozano, A., Fletcher, P., & Nobrega, J. (2009). Antidepressant-Like Effects of Medial Prefrontal Cortex Deep Brain Stimulation in Rats. Biological Psychiatry. DOI: 10.1016/j.biopsych.2009.08.025
Coffee contains caffeine, and as everyone knows, caffeine is a stimulant. We all know how a good cup of coffee wakes you up, makes you more alert, and helps you concentrate - thanks to caffeine.

Or does it? Are the benefits of coffee really due to the caffeine, or are there placebo effects at work? Numerous experiments have tried to answer this question, but a paper published today goes into more detail than most. (It caught my eye just as I was taking my first sip this morning, so I had to blog about it.)

The authors took 60 coffee-loving volunteers and gave them either placebo decaffeinated coffee, or coffee containing 280 mg caffeine. That's quite a lot, roughly equivalent to three normal cups. 30 minutes later, they performed a difficult button-pressing task requiring concentration and sustained effort, plus a task involving mashing buttons as fast as possible for a minute.

The catch was that the experimenters lied to the volunteers. Everyone was told that they were getting real coffee. Half of them were told that the coffee would enhance their performance on the tasks, while the other half were told it would impair it. If the placebo effect was at work, these misleading instructions should have affected how the volunteers felt and acted.

Several interesting things happened. First, the caffeine enhanced performance on the cognitive tasks - it wasn't just a placebo effect. Bear in mind, though, that these people were all regular coffee drinkers who hadn't drunk any caffeine that day. The benefit could have been a reversal of caffeine withdrawal symptoms.

Second, there was a small effect of expectancy on task performance - but it worked in reverse. People who were told that the coffee would make them do worse actually did better than those who expected the coffee to help them. Presumably, this is because they put in extra effort to try to overcome the supposedly negative effects. This paradoxical placebo response reminds us that there's more to "the placebo effect" than meets the eye.

Finally, no-one who got the decaf noticed that it didn't actually contain caffeine, and the volunteers' ratings of their alertness and mood didn't differ between the caffeine and placebo groups. So, this suggests that if you were to secretly replace someone's favorite blend with decaf, they wouldn't notice - although their performance would nevertheless decline. Bear that in mind when considering pranks to play on colleagues or flatmates.

It looks like science has just confirmed another piece of The Wisdom of Seinfeld:

Elaine: Jerry likes Morning Thunder.
George: Jerry drinks Morning Thunder? Morning Thunder has caffeine in it. Jerry doesn't drink caffeine.
Elaine: Jerry doesn't know Morning Thunder has caffeine in it.
George: You don't tell him?
Elaine: No. And you should see him. Man, he gets all hyper, he doesn't even know why! He loves it. He walks around going, "God, I feel great!"
- Seinfeld, "The Dog"

Harrell PT, & Juliano LM (2009). Caffeine expectancies influence the subjective and behavioral effects of caffeine. Psychopharmacology. PMID: 19760283
What's your anti-drug? Well, it might well be hemopressin. At least, that's probably your anti-marijuana.

Hemopressin is a small protein that was discovered in the brains of rodents in 2003: its name comes from the fact that it's a breakdown product of hemoglobin and that it can lower blood pressure.

No-one seems to have looked to see whether hemopressin is found in humans yet, but it seems very likely. Almost everything that's in your brain is in a mouse's brain, and vice versa.

Pharmacologically, hemopressin's literally an anti-marijuana molecule: it's an inverse agonist at CB1 receptors, which are the ones targeted by the psychoactive compounds in marijuana, and also by the neurotransmitters known as endocannabinoids. Cannabinoids turn CB1 receptors on; hemopressin turns them off.

Artificial CB1 blockers were developed as weight loss drugs, and one of them, rimonabant, made it onto the market - but it was banned after it turned out that it caused depression and anxiety in many people.

So hemopressin is Nature's rimonabant: in which case, it ought to do what rimonabant does, which is to reduce appetite. And indeed a Journal of Neuroscience paper just out from Dodd et al shows that it does just that, in rats and mice: injections of hemopressin reduced feeding.

Interestingly, this worked even when it was injected by the standard route under the skin - many proteins can't enter the brain if they're given this way, because they can't cross the blood-brain barrier, meaning that they have to be injected directly into the brain, which makes researching them much harder. So hemopressin, with any luck, will be pretty easy to study. Any volunteers for the first human trial...?

Dodd, G., Mancini, G., Lutz, B., & Luckman, S. (2010). The Peptide Hemopressin Acts through CB1 Cannabinoid Receptors to Reduce Food Intake in Rats and Mice. Journal of Neuroscience, 30(21), 7369-7376. DOI: 10.1523/JNEUROSCI.5455-09.2010
Wouldn't it be cool if you could measure brain activation with fMRI... right as it happens?

You could lie there in the scanner and watch your brain light up. Then you could watch your brain light up some more in response to seeing your brain light up, and watch it light up even more upon seeing your brain light up in response to seeing itself light up... like putting your brain between two mirrors and getting an infinite tunnel of activations.

Ok, that would probably get boring, eventually. But there'd be some useful applications too. Apart from the obvious research interest, it would allow you to attempt fMRI neurofeedback: training yourself to be able to activate or deactivate parts of your brain. Neurofeedback has a long (and controversial) history, but so far it's only been feasible using EEG, because that's the only neuroimaging method that gives real-time results. EEG is unfortunately not very good at localizing activity to specific areas.

Now MIT neuroscientists Hinds et al present a new way of doing right-now fMRI: Computing moment to moment BOLD activation for real-time neurofeedback. It's not in fact the first such method, but they argue that it's the only one that provides reliable, truly real-time signals.

Essentially the approach is closely related to standard fMRI analysis processes, except instead of waiting for all of the data to come in before starting to analyze it, it incrementally estimates neural activation every time a new scan of the brain arrives, while accounting for various forms of noise (a toy sketch of the incremental idea appears at the end of this post). They first show that it works well on some simulated data, and then discuss the results of a real experiment in which 16 people were asked to alternately increase or decrease their own neural response to hearing the noise of the MRI scanner (they are very noisy). Neurofeedback was given by showing them a "thermometer" representing activity in their auditory cortex.

The real-time estimates of activation turned out to be highly correlated with the estimates given by conventional analysis after the experiment was over - though we're not told how well people were able to use the neurofeedback to regulate their own brains.

Unfortunately, we're not given all of the technical details of the method, so you won't be able to jump into the nearest scanner and look into your brain quite yet, though they do promise that "this method will be made publicly available as part of a real-time functional imaging software package."
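To give a flavour of incremental estimation, here's a minimal Python sketch of recursive least squares applied to a GLM-style regression, updating the activation estimate one scan at a time. This is a generic textbook algorithm and my own guess at the general shape of such methods, not the authors' actual implementation, which they say handles noise more carefully than this.

```python
import numpy as np

class IncrementalGLM:
    """Recursive least-squares estimate of GLM betas, updated one
    scan at a time, so an activation estimate is always available."""

    def __init__(self, n_regressors):
        self.beta = np.zeros(n_regressors)
        # Large initial value = weak prior on the betas
        self.P = np.eye(n_regressors) * 1e3

    def update(self, x, y):
        """x: regressor values for this scan (e.g. task on/off plus
        baseline terms); y: the measured BOLD signal in the ROI."""
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.beta = self.beta + gain * (y - x @ self.beta)
        self.P = self.P - np.outer(gain, Px)
        return self.beta  # current estimate, usable for the feedback display

# Hypothetical stream: a block-design task regressor plus an intercept
glm = IncrementalGLM(n_regressors=2)
rng = np.random.default_rng(7)
for t in range(200):
    x = np.array([float(t % 20 < 10), 1.0])        # task on/off + intercept
    y = 2.0 * x[0] + 5.0 + rng.standard_normal()   # simulated noisy BOLD signal
    beta = glm.update(x, y)
print(beta)  # should approach [2.0, 5.0]
```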
Hinds, O., Ghosh, S., Thompson, T., Yoo, J., Whitfield-Gabrieli, S., Triantafyllou, C., & Gabrieli, J. (2010). Computing moment to moment BOLD activation for real-time neurofeedback. NeuroImage. DOI: 10.1016/j.neuroimage.2010.07.060