Neuroskeptic

572 posts · 371,426 views




  • December 9, 2009
  • 08:08 AM
  • 865 views

Testosterone, Aggression... Confusion

by Neuroskeptic in Neuroskeptic

Breaking news from the BBC - Testosterone link to aggression 'all in the mind':

Work in Nature magazine suggests the mind can win over hormones... Testosterone induces anti-social behaviour in humans, but only because of our own prejudices about its effect rather than its biological activity, suggest the authors. The researchers, led by Ernst Fehr of the University of Zurich, Switzerland, said the results suggested a case of "mind over matter" with the brain overriding body chemistry. "Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour," they said.

Phew, that's a relief - for a minute back there I was worried we didn't have free will. But look a little closer at the study, and it turns out that all is not as it seems.

The experiment (Eisenegger et al) involved giving healthy women 0.5 mg testosterone, or placebo, in a randomized double-blind manner, and then getting them to take part in the "Ultimatum Game". This is a game for two players. One, the Proposer, is given some money, and has to offer a certain proportion of it to the other player, the Receiver. If the Receiver accepts the offer, both players get the agreed-upon amounts of money. If they reject it, however, no-one gets anything.

The Proposer is basically faced with the choice of making a "fair" offer, e.g. giving away 50%, or a greedy one, say offering 10% and keeping 90% for themselves. Receivers generally accept fair offers, but most people get annoyed or insulted by unfair ones, and reject them, even though this means they lose money (10% of the money is still more than 0%).

What happened? Testosterone affected behaviour. It had no effect on women playing the role of Receivers, but the Proposers given testosterone made significantly fairer offers on average, compared to those given placebo. That's not mind over matter, that's matter over mind - give someone a hormone and their behaviour changes.

The direction of the effect is quite interesting - if testosterone increased aggression, as popular belief has it, you might expect it to decrease fair offers. Or you might not; I suppose it depends on your understanding of "aggression". For their part, Eisenegger et al interpret this finding as suggesting that testosterone doesn't increase aggression per se, but rather increases our motivation to achieve "status", which leads Proposers to make fairer offers, so as to appear nicer. Hmm. Maybe.

But where did the BBC get the whole "all in the mind" thing from? Well, after the testing was over, the authors asked the women whether they thought they had taken testosterone or placebo. The results showed that the women couldn't actually tell which they'd had - they were no more accurate than if they were guessing - but women who believed they'd got testosterone made more unfair offers than women who believed they'd got placebo. The size of this effect was bigger than the effect of testosterone itself.

Is that "mind over matter"? Do beliefs about testosterone exert a more powerful effect on behaviour than testosterone itself? Maybe they do, but these data don't tell us anything about that. The women's beliefs weren't manipulated in any way in this trial, so as an experiment it couldn't investigate belief effects. In order to show that belief alters behaviour, you'd need to control beliefs - you could randomly assign some subjects to be told they were taking testosterone, and compare them to others told they were on placebo, say.

This study didn't do anything like that. Beliefs about testosterone were only correlated with behaviour, and unless someone's changed the rules recently, correlation isn't causation. It's like finding that people with brown skin are more likely to be Hindus than people with white skin, and concluding that belief in Brahma alters pigmentation. It could even be that the behaviour drove the belief, because subjects were quizzed about their testosterone status after the Ultimatum Game - maybe women who, for whatever reason, behaved selfishly decided that this meant they had taken testosterone!

Overall, this study provides quite interesting data about hormonal effects on behaviour, but tells us nothing about the effects of beliefs about hormones. On that issue, the way the media have covered this experiment is rather more informative than the experiment itself.

Eisenegger, C., Naef, M., Snozzi, R., Heinrichs, M., & Fehr, E. (2009). Prejudice and truth about the effect of testosterone on human bargaining behaviour. Nature. DOI: 10.1038/nature08711
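The payoff rule of the Ultimatum Game described above can be sketched in a few lines of Python. This is a minimal illustration of the game mechanics; the stake and offer figures are hypothetical, not values from the study:

```python
def ultimatum_payoffs(stake, offer, accepted):
    """Payoffs (proposer, receiver) for one round of the Ultimatum Game.

    The Proposer offers `offer` out of `stake`; if the Receiver accepts,
    both are paid the agreed amounts, otherwise no-one gets anything.
    """
    if accepted:
        return stake - offer, offer
    return 0, 0

# A "fair" 50% offer, accepted: both players walk away with 5.
print(ultimatum_payoffs(10, 5, True))   # (5, 5)

# A greedy 10% offer, rejected in protest: both get nothing,
# even though 1 would still have been more than 0 for the Receiver.
print(ultimatum_payoffs(10, 1, False))  # (0, 0)
```

The second call is exactly the "irrational" rejection the post describes: the Receiver pays 1 unit to punish an unfair Proposer.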

  • July 22, 2011
  • 11:54 AM
  • 865 views

New Antidepressant - Old Tricks

by Neuroskeptic in Neuroskeptic

The past decade has been a bad one for antidepressant manufacturers. Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drug to hit the shelves since 2000 has been agomelatine; there were a couple of others that were just minor variants on old molecules, but that's it.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development and by all accounts it's making good progress. It's now in Phase III trials, the last stage before approval, and a large clinical trial has just been published finding that it works. But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and the other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors and partially agonizes 5HT1B. None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to get Lu AA21004 at two different doses, 5 mg or 10 mg, the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and as much as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so. The size of the effect was medium, with a benefit over and above placebo of about 5 points on the MADRS depression scale. Considering that the baseline scores in this study averaged 34, that's not huge, but it compares well to other antidepressant trials.

Now we come to the side effects, and this is the most important bit, as we'll see later. The authors did not specifically probe for these, they just relied on spontaneous report, which tends to underestimate adverse events. Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market!

Hang on. The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement, to be honest: venlafaxine is quite notorious for causing side effects, especially at higher doses. I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs: as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck. The real competitor for this drug, according to Lundbeck, is escitalopram. But no-one wants to be in competition with themselves.

This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects. The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know, and we're not going to find out any time soon. If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, would be Lundbeck.

Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F (2011). A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The International Journal of Neuropsychopharmacology, 1-12. PMID: 21767441


  • March 20, 2010
  • 03:00 PM
  • 864 views

Absinthe Fact and Fiction

by Neuroskeptic in Neuroskeptic

Absinthe is a spirit. It's very strong, and very green. But is it something more? I used to think so, until I came across a paper taking a skeptical look at the history and science of the drink: Padosch et al's "Absinthism: a fictitious 19th century syndrome with present impact".

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour. It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most, but not all, European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word "absinthe" to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, due to being a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose. But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison" - something that is easily forgotten.

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others, I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; that Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas contains especially concentrated alcohol, or hallucinogens, or even cocaine, maybe.

Slightly more serious is the theory that drinking different kinds of drinks, instead of sticking to just one, gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this, although in my drinking experience it's not true. But I'm not sure that it's completely bogus, as I have heard somewhat plausible explanations, i.e. drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this, but The Poison Review is an excellent blog I've recently discovered, all about poisons, toxins, drugs, and such fun stuff.

Padosch SA, Lachenmeier DW, & Kröner LU (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance Abuse Treatment, Prevention, and Policy, 1(1), 14. PMID: 16722551
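To put "the dose makes the poison" into numbers, here is a rough back-of-the-envelope sketch. The 6 mg/L thujone figure is from the post; the toxic threshold and the alcohol strength are illustrative assumptions of mine, not figures from Padosch et al:

```python
# Figure from the post: modern analysis finds ~6 mg/L thujone in absinthe.
# The toxic threshold (50 mg) and strength (70% ABV) are assumptions for
# illustration only, not numbers from the paper.
THUJONE_MG_PER_L = 6.0     # measured thujone content of real absinthe
TOXIC_THUJONE_MG = 50.0    # hypothetical acute toxic dose of thujone
ABV = 0.70                 # assumed strength of absinthe

litres_of_absinthe = TOXIC_THUJONE_MG / THUJONE_MG_PER_L
litres_of_pure_alcohol = litres_of_absinthe * ABV
print(f"{litres_of_absinthe:.1f} L of absinthe, "
      f"containing {litres_of_pure_alcohol:.1f} L of pure alcohol")
```

On these (assumed) numbers, you'd need over eight litres of spirit to reach the thujone dose - several litres of pure ethanol, far more than a fatal amount of alcohol - which is exactly the paper's point.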


  • March 31, 2010
  • 09:18 AM
  • 864 views

Predicting Psychosis

by Neuroskeptic in Neuroskeptic

"Prevention is better than cure", so they say, and in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art. Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.

Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan, anyway.

But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all - there have been only a few very small trials looking at whether treatments work for that - but that's another story.

Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS), which was run at a number of early detection clinics in Britain and Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromal:

Referral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact. Knowledge about early warning signs (eg, concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.

245 people consented to take part in the study and met the inclusion criteria, meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short-lived or mild symptoms a bit like those seen in schizophrenia, i.e.:

COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...

UHR: unusual thought content/delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS) i.e. hallucinations, delusions, or formal thought disorders that resolved spontaneously within 1 week...

Then they followed up the 245 kids for 18 months and saw what happened to them. What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e. depression or bipolar disorder, with psychotic features; and the rest mostly had psychotic episodes too short to be classed as schizophrenia. 37 people is 19% of the 183 for whom full 18-month data was available; the others dropped out of the study, or went missing for some reason.

Is 19% high or low? Well, it's much higher than the rate you'd see in randomly selected people, because the risk of getting schizophrenia is less than 1% lifetime, and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing. On the other hand, 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18 month period was over.

The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. As this graph shows, the rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering, the Prognostic Index is [1.571 x SIPS-Positive score > 16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)

However, they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.

So overall, we can predict psychosis, a bit, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome", i.e. the prodrome; it's not currently a disorder in DSM-IV. This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.

Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A., Morrison, A., Lewis, S., Graf von Reventlow, H., & Klosterkotter, J. (2010). Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206
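For what it's worth, the Prognostic Index quoted above is easy to compute. This sketch treats the SIPS-Positive term as a 0/1 indicator for a score above 16 (an assumption on my part), and the example inputs are entirely made up:

```python
def prognostic_index(sips_positive_above_16, bizarre_thinking,
                     sleep_disturbances, spd, highest_gaf_m, years_education):
    """EPOS Prognostic Index, per the formula quoted in the post.

    `sips_positive_above_16` is treated here as a 0/1 indicator
    (an assumption); the other arguments are raw scale scores.
    """
    return (1.571 * sips_positive_above_16
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbances
            + 1.037 * spd
            + 0.033 * (highest_gaf_m - 34.64)
            + 0.250 * (years_education - 12.52))

# Entirely hypothetical inputs, just to show the mechanics:
print(round(prognostic_index(1, 2, 1, 1, 55, 11), 2))  # 5.42
```

Note how the GAF-M and education terms are mean-centred (the 34.64 and 12.52 look like sample means), so they contribute nothing for an "average" subject.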


  • August 4, 2011
  • 04:27 AM
  • 864 views

Brain-Modifying Drugs

by Neuroskeptic in Neuroskeptic

What if there was a drug that didn't just affect the levels of chemicals in your brain, but turned off genes in your brain? That possibility - either exciting or sinister, depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs: it selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei, containing serotonin cells), but not in the rest of the brain. Note that this isn't genetic modification as such: the gene wasn't deleted, it was just silenced - temporarily, one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy. So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of particular neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to affect only certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible. Currently available drugs are shotguns and sledgehammers; these approaches could provide sniper rifles and scalpels. But whether it will prove to be safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong antidepressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92


  • February 19, 2010
  • 12:03 PM
  • 859 views

Drunk on Alcohol?

by Neuroskeptic in Neuroskeptic

When you drink alcohol and get drunk, are you getting drunk on alcohol? Well, obviously, you might think, and so did I. But it turns out that some people claim that the alcohol (ethanol) in drinks isn't the only thing responsible for their effects - they say that acetaldehyde may be important, perhaps even more so.

South Korean researchers Kim et al report that it's acetaldehyde, rather than ethanol, which explains alcohol's immediate effects on cognitive and motor skills. During the metabolism of ethanol in the body, it's first converted into acetaldehyde, which then gets converted into acetate and excreted. Acetaldehyde build-up is popularly renowned as a cause of hangovers (although it's unclear how true this is), but could it also be involved in the acute effects?

Kim et al gave 24 male volunteers a range of doses of ethanol (in the form of vodka and orange juice). Half of them carried a genetic variant (ALDH2*2) which impairs the breakdown of acetaldehyde in the body. About 50% of people of East Asian origin, e.g. Koreans, carry this variant, which is rare in other parts of the world.

As expected, compared to the others, the ALDH2*2 carriers had much higher blood acetaldehyde levels after drinking alcohol, while there was little or no difference in their blood ethanol levels. Interestingly, though, the ALDH2*2 group also showed much more impairment of cognitive and motor skills, such as reaction time or a simulated driving task. On most measures, the non-carriers showed very little effect of alcohol, while the carriers were strongly affected, especially at high doses. Blood acetaldehyde was more strongly correlated with poor performance than blood alcohol was. So the authors concluded that:

Acetaldehyde might be more important than alcohol in determining the effects on human psychomotor function and skills.

So is acetaldehyde to blame when you spend half an hour trying and failing to unlock your front door after a hard night's drinking? Should we be breathalyzing drivers for it? Maybe: this is an interesting finding, and there's quite a lot of animal evidence that acetaldehyde has acute sedative, hypnotic and amnesic effects, amongst others.

Still, there's another explanation for these results: maybe the ALDH2*2 carriers just weren't paying much attention to the tasks, because they felt ill, as ALDH2*2 carriers generally do after drinking, as a result of acetaldehyde build-up. No-one's going to be operating at peak performance if they're suffering the notorious flush reaction or "Asian glow", which includes skin flushing, nausea, headache, and increased pulse...

Kim SW, Bae KY, Shin HY, Kim JM, Shin IS, Youn T, Kim J, Kim JK, & Yoon JS (2009). The Role of Acetaldehyde in Human Psychomotor Function: A Double-Blind Placebo-Controlled Crossover Study. Biological Psychiatry. PMID: 19914598

  • August 20, 2010
  • 10:02 AM
  • 859 views

Schizophrenia, Genes and Environment

by Neuroskeptic in Neuroskeptic

Schizophrenia is generally thought of as the "most genetic" of all psychiatric disorders, and in the past 10 years there have been heroic efforts to find the genes responsible for it, with not much success so far. A new study reminds us that there's more to it than genes alone: Social Risk or Genetic Liability for Psychosis?

The authors decided to look at adopted children, because this is one of the best ways of disentangling genes and environment. If you find that the children of people with schizophrenia are at an increased risk of schizophrenia (they are), that doesn't tell you whether the risk is due to genetics or environment, because we share both with our parents. Only in adoption is the link between genes and environment broken.

Wicks et al looked at all of the kids born in Sweden and then adopted by another Swedish family, over several decades (births 1955-1984). To make sure genes and environment were independent, they excluded those who were adopted by their own relatives (e.g. grandparents), and those who lived with their biological parents between the ages of 1 and 15. This is the kind of study you can only do in Scandinavia, because only those countries have accessible national records of adoptions and mental illness...

What happened? Here's a little graph I whipped up. Brighter colors are adoptees at "genetic risk", defined as those with at least one biological parent who was hospitalized for a psychotic illness (including schizophrenia, but also bipolar disorder). The outcome measure was being hospitalized for a non-affective psychosis, meaning schizophrenia or similar conditions, but not bipolar.

As you can see, rates are much higher in those with a genetic risk, but were also higher in those adopted into a less favorable environment. Parental unemployment was worst, followed by single parenthood, which was also quite bad. Living in an apartment, as opposed to a house, had only a tiny effect. Genetic and environmental risk also interacted: if a biological parent was mentally ill and your adoptive parents were unemployed, that was really bad news.

But hang on. Adoption studies have been criticized because children don't get adopted at random (there's a story behind every adoption, and it's rarely a happy one), and also adopting families are not picked at random - you're only allowed to adopt if you can convince the authorities that you're going to be good parents. So they also looked at the non-adopted population, i.e. everyone else in Sweden, over the same time period.

The results were surprisingly similar. The hazard ratio (increased risk) in those with parental mental illness, but no adverse circumstances, was 4.5, much the same as in the adoption study, 4.7. For environment, the ratio was 1.5 for unemployment, and slightly lower for the other two. This is a bit less than in the adoption study (2.0 for unemployment). And the two risks interacted, but much less than they did in the adoption sample.

However, one big difference was that the total lifetime rate of illness was 1.8% in the adoptees and just 0.8% in the non-adoptees, despite much higher rates of unemployment etc. in the latter. Unfortunately, the authors don't discuss this odd result. It could be that adopted children have a higher risk of psychosis, for whatever reason. But it could also be an artefact: rates of adoption massively declined between 1955 and 1984, so most of the adoptees were born earlier, i.e. they're older on average. That gives them more time in which to become ill.

A few more random thoughts:

This was Sweden. Sweden is very rich and, compared to most other rich countries, also very egalitarian, with extremely high taxes and welfare spending. In other words, no-one in Sweden is really poor. So the effects of environment might be bigger in other countries. On the other hand, this study may overestimate the risk due to environment, because it looked at hospitalizations, not illness per se. Supposing that poorer people are more likely to get hospitalized, this could mean that the true effect of environment on illness is lower than it appears.

The outcome measure was hospitalization for "non-affective psychosis". Only 40% of this was diagnosed as "schizophrenia". The rest will have been some kind of similar illness which didn't meet the full criteria for schizophrenia (which are quite narrow; in particular, they require 6 months of symptoms).

Parental bipolar disorder was counted as a family history. This does make sense, because we know that bipolar disorder and schizophrenia often occur in the same families (and indeed they can be hard to tell apart; many people are diagnosed with both at different times).

Overall, though, this is a solid study and confirms that genes and environment are both relevant to psychosis. Unfortunately, almost all of the research money at the moment goes on genes, with studying environmental factors being unfashionable.

Wicks S, Hjern A, & Dalman C (2010). Social Risk or Genetic Liability for Psychosis? A Study of Children Born in Sweden and Reared by Adoptive Parents. The American Journal of Psychiatry. PMID: 20686186

  • April 7, 2010
  • 08:48 AM
  • 855 views

Why Do We Dream?

by Neuroskeptic in Neuroskeptic

A few months ago, I asked Why Do We Sleep? That post was about sleep researcher Jerry Siegel, who argues that sleep evolved as a state of "adaptive inactivity". According to this idea, animals sleep because otherwise we'd always be active, and constant activity is a waste of energy. Sleeping for a proportion of the time conserves calories, and also keeps us safe from nocturnal predators etc.

Siegel's theory is what we might call minimalist. That's in contrast to other hypotheses which claim that sleep serves some kind of vital restorative biological function, or that it's important for memory formation, or whatever. It's a hotly debated topic.

But Siegel wasn't the first sleep minimalist. J. Allan Hobson and Robert McCarley created a storm in 1977 with The Brain As A Dream State Generator; I read somewhere that it provoked more letters to the Editor of the American Journal of Psychiatry than any other paper in that journal.

Hobson and McCarley's article was so controversial because they argued that dreams are essentially side-effects of brain activation. This was a direct attack on the Freudian view that we dream as a result of our subconscious desires, and that dreams have hidden meanings. Freudian psychoanalysis was incredibly influential in American psychiatry in the 1970s.

Freud believed that dreams exist to fulfil our fantasies, often though not always sexual ones. We dream about what we'd like to do - except we don't dream about it directly, because we find much of our desires shameful, so our minds disguise the wishes behind layers of metaphor etc. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..." Interpreting the symbolism of dreams can therefore shed light on the depths of the mind.

Hobson and McCarley argued that during REM sleep, our brains are active in a similar way to when we are awake; many of the systems responsible for alertness are switched on, unlike during deep, dreamless, non-REM sleep. But of course during REM there is no sensory input (our eyes are closed), and we are also paralysed: an inhibitory pathway blocks the spinal cord, preventing us from moving, except for our eyes - hence Rapid Eye Movement sleep.

Dreams are simply a result of the "awake-like" forebrain - the "higher" perceptual, cognitive and emotional areas - trying to make sense of the input it receives from waves of activation arising in the brainstem. A dream is the forebrain's "best guess" at making a meaningful story out of the assortment of sensations (mostly visual) and concepts activated by these periodic waves. There's no attempt to disguise the shameful parts; the bizarreness of dreams simply reflects the fact that the input is pretty much random.

Hobson and McCarley proposed a complex physiological model in which the activation is driven by the giant cells of the pontine tegmentum. These cells fire in bursts according to a genetically hard-wired rhythm of excitation and inhibition. The details of this model are rather less important than the fact that it reduces dreaming to a neurological side effect. This doesn't mean that the REM state has no function; maybe it does, but whatever it is, the subjective experience of dreams serves no purpose.

A lot has changed since 1977, but Hobson seems to have stuck by the basic tenets of this theory. A good recent review came out in Nature Reviews Neuroscience last year, REM sleep and dreaming. In this paper Hobson proposes that the function of REM sleep is to act as a kind of training system for the developing brain. The internally-generated signals that arise from the brainstem (now called PGO waves) during REM help the forebrain learn how to process information. This explains why we spend more time in REM early in life; newborns have much more REM than adults, and in the womb we are in REM almost all the time. However, these are not dreams per se, because children don't start reporting dream experiences until about the age of 5.

Protoconscious REM sleep could therefore provide a virtual world model, complete with an emergent imaginary agent (the protoself) that moves (via fixed action patterns) through a fictive space (the internally engendered environment) and experiences strong emotion as it does so.

This is a fascinating hypothesis, although it's very difficult to test, and it raises the question of how useful "training" based on random, meaningless input could be.

While Hobson's theory is minimalist in that it reduces dreams, at any rate in adulthood, to the status of a by-product, it doesn't leave them uninteresting. Freudian dream interpretation is probably ruled out ("That train represents your penis and that cat was your mother", etc.), but if dreams are our brains processing random noise, then they still provide an insight into how our brains process information. Dreams are our brains working away on their own, with the real world temporarily removed.

Of course most dreams are not going to give up life-changing insights. A few months back I had a dream which was essentially a scene-for-scene replay of the horror movie Cloverfield. It was a good dream, scarier than the movie itself, because I didn't know it was a movie. But I think all it tells me is that I was paying attention when I watched Cloverfield.

On the other hand, I have had several dreams that have made me realize important things about myself and my situation at the time. By paying attention to your dreams, you can work out how you really think, and feel, about things, and what your preconceptions and preoccupations are. Sometimes.

Hobson JA, & McCarley RW (1977). The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134 (12), 1335-48 PMID: 21570

Hobson, J. (2009).
REM sleep and dreaming: towards a theory of protoconsciousness Nature Reviews Neuroscience, 10 (11), 803-813 DOI: 10.1038/nrn2716... Read more »

  • April 8, 2010
  • 03:51 PM
  • 854 views

Social Learning in Antisocial Animals

by Neuroskeptic in Neuroskeptic

In an unusual study with potentially revolutionary implications, Austrian biologists Wilkinson et al show evidence of Social learning in a non-social reptile.

Social learning means learning to do something by observing others doing it, rather than by doing it yourself. Many sociable animal species, including mammals, birds and even insects, have shown the ability to learn by observing others. It's often seen as a distinct form of cognition, separate from "normal" learning, which evolved to facilitate group living. It's one of the things that everyone's favourite brain cells, mirror neurons, have been invoked to explain.

But if observational learning is a specifically social adaptation, then non-social animals would be predicted to lack this ability. One distinctly unfriendly species is the South American red-footed tortoise (Geochelone carbonaria), which is naturally solitary. In the wild, they hatch from their eggs alone and get no parental care; they live most of their lives without interacting with others.

Wilkinson et al found that red-footed tortoises can, nevertheless, learn by observation. They took four tortoises and got them to watch another "demonstrator" tortoise completing a difficult task: walking around an obstacle to get to some food (it's hard if you're a tortoise). The observing animals all learned to do the task. In most cases they walked around the obstacle to the right, which is what the demonstrators did, but sometimes they went left, showing that they were not simply copying the movements of the demonstrators. The wood chips on the floor of the cage were mixed up after each trial, to rule out the possibility that the tortoises were just following the smell of the demonstrator. None of four control tortoises, who got no demonstrations, managed to figure it out on their own.

The authors conclude that:

The dominant hypothesis in this field claims that social learning evolved as a result of social living and therefore predicts that the tortoises would have difficulty with this task. They did not. The findings suggest that, in this case, social learning may be the result of a general ability to learn. Although the brain mechanisms that underlie the tortoises' ability to learn socially remain unclear, it seems most likely that it is the product of a general learning mechanism that allows the tortoises to learn, through associative processes, to use the behaviour of another animal just as they would learn to use any cue in the environment.

This is a nice experiment, and the result is important: the idea that social learning is somehow evolutionarily and neurally "special" underlies a lot of modern social neuroscience. However, I'm not fully convinced that these tortoises can accurately be described as "non-social". Even the most anti-social species have to socialize in order to mate: no animal is an island. According to Wikipedia, the red-footed tortoise has some quite elaborate (and hilarious) mating behaviours...

male to male combat is important in inducing breeding in redfoots. Male to male combat begins with a round of head bobbing from each male involved, and then proceeds to a wrestling match where the males attempt to turn one another over. The succeeding male (usually the largest male) then attempts to mate with the females. The ritualistic head movements displayed by male red-foots are thought to be a method of species recognition. Other tortoise species have different challenging head movements....The unique body shape of the male redfooted tortoise facilitates the mating process by allowing him to maintain his balance during copulation while the female walks around, seemingly attempting to dislodge the male by walking under low-hanging vegetation.

Wilkinson, A., Kuenstner, K., Mueller, J., & Huber, L. (2010). Social learning in a non-social reptile (Geochelone carbonaria) Biology Letters DOI: 10.1098/rsbl.2010.0092... Read more »

  • October 20, 2010
  • 05:38 AM
  • 854 views

You Read It Here First...Again

by Neuroskeptic in Neuroskeptic

A couple of months ago I pointed out that a Letter published in the American Journal of Psychiatry, critiquing a certain paper about antidepressants, made very similar points to the ones I made in my blog post about the paper. The biggest difference was that my post came out 9 months sooner.

Well, it's happened again. Except I was only 3 months ahead this time. Remember my post Clever New Scheme, criticizing a study which claimed to have found a brilliant way of deciding which antidepressant is right for someone, based on their brain activity? That post went up on July 21st. Yesterday, October 19th, a Letter was published by the journal that ran the original paper.

Three months ago, I said -

...there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG...

...it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG... This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs... Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice... Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group...

Now Alexander C. Tsai says, in his Letter:

DeBattista et al. chose a study design that conflates the effect of rEEG-guided pharmacotherapy with the effects of differing medication regimes... A more definitive study design would have been one in which study participants were randomized to receive rEEG-guided pharmacotherapy vs. sham rEEG-guided pharmacotherapy. Such a study design could have been genuinely double blinded, would not have required the inclusion of potential subjects whose rEEG treatment regimen was different from the control, and would be more likely to result in medication regimens that were balanced on average across the intervention vs. control arms.

To be fair, he also makes a separate point questioning how meaningful the small between-group difference was.

I'm mentioning this not because I want to show off, or to accuse Tsai of ripping me off, but because it's a good example of why people like Royce Murray are wrong. Murray recently wrote an editorial in the academic journal Analytical Chemistry, accusing blogging of being unreliable compared to proper, peer-reviewed science. Murray is certainly right that one could use a blog as a platform to push crap ideas, but one can also use peer-reviewed papers to do that, and often it's bloggers who are the first to pick up on it when this happens.

Tsai AC (2010). Unclear clinical significance of findings on the use of referenced-EEG-guided pharmacotherapy. Journal of Psychiatric Research PMID: 20943234... Read more »
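The sham-control design described above - give everyone rEEG, but guide the control group by a shuffled recording - can be sketched as a simple randomization scheme. This is a toy illustration of the design logic, not the trial's actual protocol; all names and numbers are hypothetical.

```python
import random

# Toy sketch of the sham-rEEG design discussed above: everyone is
# recorded, everyone's treatment is "rEEG-guided", but controls are
# guided by someone else's (shuffled) recording. Hypothetical names
# throughout; this is not the DeBattista et al. protocol.

def assign(patients, seed=0):
    rng = random.Random(seed)
    arms = {}
    # Randomize patients to intervention vs. sham arms
    shuffled = patients[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    intervention, sham = shuffled[:half], shuffled[half:]

    # Intervention arm: each patient guided by their own rEEG
    for p in intervention:
        arms[p] = {"arm": "intervention", "guided_by": p}

    # Sham arm: rotate the recordings so no-one is guided by their own
    rotated = sham[1:] + sham[:1]
    for p, donor in zip(sham, rotated):
        arms[p] = {"arm": "sham", "guided_by": donor}
    return arms

arms = assign([f"pt{i:02d}" for i in range(8)])
# Key property: both arms draw drugs from the same rEEG-guided menu,
# so any between-group difference is attributable to the guidance,
# and the investigators can stay blind to arm membership.
```

The rotation in the sham arm is what makes the control "active": the drugs still come from an rEEG recommendation, just not the patient's own.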

  • December 28, 2010
  • 06:00 AM
  • 850 views

When Is A Placebo Not A Placebo?

by Neuroskeptic in Neuroskeptic

Irving Kirsch, best known for that 2008 meta-analysis allegedly showing that "Prozac doesn't work", has hit the headlines again. This time it's a paper claiming that something does work. Actually, Kirsch is only a minor author on the paper by Kaptchuk et al: Placebos without Deception.

In essence, they asked whether a placebo treatment - a dummy pill with no active ingredients - works even if you know that it's a placebo. Conventional wisdom would say no, because the placebo effect is driven by the patient's belief in the effectiveness of the pill.

Kaptchuk et al took 80 patients with Irritable Bowel Syndrome (IBS) and recruited them into a trial of "a novel mind-body management study of IBS". Half of the patients got no treatment at all. The other half got sugar pills, after having been told, truthfully, that the pills contained no active drugs - but also having been told, in a 15 minute briefing session, to expect improvement, on the grounds that:

placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.

Guess what? The placebo group did better than the no-treatment group - or at least they reported that they did (all the outcomes were subjective). The article has been much blogged about, and you should read those posts for a more detailed and in some cases skeptical examination, but really, this is entirely unsurprising and doesn't challenge the conventional wisdom about placebos.

The folks in this trial believed in the possibility that the pills would make them feel better. They just wouldn't have agreed to take part otherwise. And when those people got the treatment that they expected to work, they felt better. That's just the plain old placebo effect. We already know that the placebo effect is very strong in IBS, a disease which is, at least in many cases, psychosomatic.

So the only really new result here is that there are people out there who'll believe that they'll experience improvement from sugar pills, if you give them a 15 minute briefing about the "mind-body self-healing" properties of those pills. That's an interesting addition to the record of human quirkiness, but it doesn't really tell us anything new about placebos.

Kaptchuk, T., Friedlander, E., Kelley, J., Sanchez, M., Kokkotou, E., Singer, J., Kowalczykowski, M., Miller, F., Kirsch, I., & Lembo, A. (2010). Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome PLoS ONE, 5 (12) DOI: 10.1371/journal.pone.0015591... Read more »


  • July 27, 2011
  • 03:27 AM
  • 848 views

Brain Connectivity, Or Head Movement?

by Neuroskeptic in Neuroskeptic

"It's pretty painless. Basically you just need to lie there and make sure you don't move your head."

This is what I say to all the girls... who are taking part in my fMRI studies. Head movement is a big problem in fMRI. If your head moves, your brain moves - and all fMRI analysis assumes that the brain is perfectly still. Although head movement correction is now a standard part of any analysis software, it's not perfect.

It may be a particular problem in functional connectivity studies, which attempt to measure the degree to which different parts of the brain are "talking" to each other, in terms of correlated neural activity over time. These are extremely popular nowadays. It's even been claimed that this data may help us understand consciousness itself (although we've heard that before).

A new paper offers some important words of caution. It shows that head motion affects estimates of functional connectivity: the more motion, the weaker the measured connectivity in long-range networks, while short-range connections appeared stronger. The effect was small - head movement can't explain more than a small fraction of the variability in connectivity.

The authors looked at 1,000 scans from healthy volunteers, who just had to lie in the scanner at rest. They computed functional connectivity, using standard "motion correction" methods, and correlated it with head movement (which you can measure very accurately from the MRI images themselves). Men tended to move more than women. Could this explain why women tend to have higher functional connectivity?

Disconcertingly, head movement was associated with low long-range / high short-range connectivity, which is exactly what's been proposed to happen in autism (although in fairness, not all the evidence for this comes from fMRI). This clearly doesn't prove that the autism studies are all dodgy, but it's an issue. People with autism - and people with almost any mental or physical disorder - on average tend to move more than healthy controls.

One caveat: could it be that brain activity causes head movement, rather than the reverse? The authors don't consider this. Head movement must come from the brain, of course - probably from the motor cortex. The fact that motor cortex functional connectivity was positively associated with movement does suggest a possible link.

However, this paper still ought to make anyone who's using functional connectivity worry - at least a little.

Head motion is a particularly insidious confound. It is insidious because it biases between-group studies often in the direction of the hypothesized difference....even though there is considerable variation that is not due to head motion, in any given instance, a between-group difference could be entirely due to motion.

Van Dijk, K., Sabuncu, M., & Buckner, R. (2011). The Influence of Head Motion on Intrinsic Functional Connectivity MRI NeuroImage DOI: 10.1016/j.neuroimage.2011.07.044... Read more »
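Functional connectivity, in the sense used here, is just the correlation between two regions' activity time series. A minimal sketch with synthetic signals (assuming NumPy; not real fMRI data) also shows why a shared motion artifact is so insidious: it can manufacture correlation between regions that have no true coupling at all.

```python
import numpy as np

rng = np.random.default_rng(42)
n_vols = 200  # number of fMRI volumes (time points)

# Two synthetic "region" time series with no true coupling
region_a = rng.standard_normal(n_vols)
region_b = rng.standard_normal(n_vols)

def connectivity(x, y):
    """Functional connectivity = Pearson correlation over time."""
    return np.corrcoef(x, y)[0, 1]

clean_fc = connectivity(region_a, region_b)  # near zero

# Add the same head-motion artifact to both regions
motion = rng.standard_normal(n_vols)
noisy_a = region_a + motion
noisy_b = region_b + motion

confounded_fc = connectivity(noisy_a, noisy_b)  # spuriously high

print(f"true coupling: {clean_fc:.2f}, with shared motion: {confounded_fc:.2f}")
```

The paper's actual findings are more complex (motion weakened long-range and strengthened short-range estimates), but the underlying mechanism - a nuisance signal shared across voxels masquerading as connectivity - is the one sketched here.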

  • September 8, 2009
  • 07:18 PM
  • 847 views

Trauma Alters Brain Function... So What?

by Neuroskeptic in Neuroskeptic

According to a new paper in the prestigious journal PNAS, High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China.

The earthquake, you'll remember, happened on 12th May last year in central China. Over 60,000 people died. The authors of this paper took 44 earthquake survivors, and 32 control volunteers who had not experienced the disaster. The volunteers underwent a "resting state" fMRI scan; survivors were scanned between 13 and 25 days after the earthquake. Resting state fMRI is simply a scan conducted while lying in the scanner, not doing anything in particular. Previous work has shown that fMRI can be used to measure resting state neural activity in the form of low-frequency oscillations.

The authors found differences in resting state low-frequency activity (ALFF) between the trauma survivors and the controls. In survivors, resting state activity was increased in several areas:

"The whole-brain analysis indicated that, vs. controls, survivors showed significantly increased ALFF in the left prefrontal cortex and the left precentral gyrus, extending medially to the left presupplementary motor area... [and] region of interest (ROI) analyses revealed significantly increased ALFF in bilateral insula and caudate and the left putamen in the survivor group..."

They also reported correlations between resting activity in some of these areas and self-reported anxiety and depression symptoms in the survivors. Finally, survivors showed reduced functional connectivity between a wide range of areas ("a distributed network that included the bilateral amygdala, hippocampus, caudate, putamen, insula, anterior cingulate cortex, and cerebellum"). Functional connectivity analysis measures the correlation in activity across different areas of the brain - whether the areas tend to activate at the same time or not.

Now - what does all this mean? And does it help us understand the brain?

The fact that there are differences between the two groups is not very informative or surprising. "Resting state" neural activity presumably reflects whatever is going through a person's mind. Recent earthquake survivors are going to be thinking about rather different things compared to luckier people who didn't experience such trauma. It doesn't take a brain scan to tell you that, but that's all these scans really tell us.

But these weren't just any differences - they were particular differences in particular brain regions. Does that make knowing about them more interesting and useful? Not as such, because we don't know what they represent, or what causes them. So living through an earthquake gives you "increased ALFF in the left prefrontal cortex" - but what does that mean? It could mean almost anything. The left prefrontal cortex is a big chunk of the brain, and its functions probably include most complex cognitive processes. Ditto for the other areas mentioned.

The authors link their findings to previous work with frankly vague statements such as "The increased regional activity and reduced functional connectivity in frontolimbic and striatal regions occurred in areas known to be important for emotion processing". But anatomically speaking, most of the brain is either "fronto-limbic" or "striatal", and almost everywhere is involved in "emotion processing" in one way or another.

So I don't think we understand the brain much better for reading this paper. Further work, building on these results, might give insights. We might, say, learn that decreased connectivity between Regions X and Y is because trauma decreases serotonin levels, which prevents signals being communicated between these areas, which is why trauma victims can't use X to deliberately stop recalling traumatic memories, which is what Y does. I just made that up. But that's a theory which could be tested.

Much of today's neuroimaging research doesn't involve testable theories - it is merely the exploratory search for neural differences between two groups. Neuroimaging technology is powerful, and more advanced techniques are always being developed. What with resting state, functional connectivity, pattern-classification analysis, and other fancy methods, the scope for finding differences between groups is enormous and growing. So I'm being rather unfair in criticizing this paper; there are hundreds like it. I picked this one because it was published last week in a good journal.

Exploratory work can be useful as a starting point, but at least in my opinion, there is too much of it. If you want to understand the brain, as opposed to simply getting papers published under your name, you need a theory sooner or later. That's what science is about.

Lui, S., Huang, X., Chen, L., Tang, H., Zhang, T., Li, X., Li, D., Kuang, W., Chan, R., Mechelli, A., Sweeney, J., & Gong, Q. (2009). High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China Proceedings of the National Academy of Sciences DOI: 10.1073/pnas.0812751106... Read more »
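For readers wondering what an ALFF number actually is: it is conventionally the amplitude of the BOLD signal within the low-frequency band, roughly 0.01-0.08 Hz. A minimal sketch with a synthetic signal (assuming NumPy; the TR and band limits are the conventional illustrative choices, not necessarily this paper's exact pipeline):

```python
import numpy as np

# Sketch of ALFF (amplitude of low-frequency fluctuation) for one
# voxel's BOLD time series. Synthetic data; the 0.01-0.08 Hz band and
# TR are illustrative conventions, not the paper's exact settings.

tr = 2.0                      # repetition time in seconds
n_vols = 240                  # 8 minutes of scanning
t = np.arange(n_vols) * tr

rng = np.random.default_rng(0)
# A slow 0.03 Hz fluctuation buried in noise
bold = np.sin(2 * np.pi * 0.03 * t) + 0.5 * rng.standard_normal(n_vols)

# Fourier amplitude spectrum of the demeaned signal
spectrum = np.abs(np.fft.rfft(bold - bold.mean()))
freqs = np.fft.rfftfreq(n_vols, d=tr)

# ALFF: mean amplitude within the low-frequency band
band = (freqs >= 0.01) & (freqs <= 0.08)
alff = spectrum[band].mean()

# A pure-noise voxel should show lower ALFF
noise_spectrum = np.abs(np.fft.rfft(0.5 * rng.standard_normal(n_vols)))
noise_alff = noise_spectrum[band].mean()

print(f"signal voxel ALFF: {alff:.1f}, noise voxel ALFF: {noise_alff:.1f}")
```

"Increased ALFF in the left prefrontal cortex" means this per-voxel quantity was, on average, higher in survivors - which, as the post argues, is a measurement, not an explanation.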


  • August 4, 2010
  • 03:44 PM
  • 847 views

Real Time fMRI

by Neuroskeptic in Neuroskeptic

Wouldn't it be cool if you could measure brain activation with fMRI... right as it happens?

You could lie there in the scanner and watch your brain light up. Then you could watch your brain light up some more in response to seeing your brain light up, and watch it light up even more upon seeing your brain light up in response to seeing itself light up... like putting your brain between two mirrors and getting an infinite tunnel of activations.

OK, that would probably get boring eventually. But there'd be some useful applications too. Apart from the obvious research interest, it would allow you to attempt fMRI neurofeedback: training yourself to activate or deactivate parts of your brain. Neurofeedback has a long (and controversial) history, but so far it's only been feasible using EEG, because that's the only neuroimaging method that gives real-time results. EEG is unfortunately not very good at localizing activity to specific areas.

Now MIT neuroscientists Hinds et al present a new way of doing right-now fMRI: Computing moment to moment BOLD activation for real-time neurofeedback. It's not in fact the first such method, but they argue that it's the only one that provides reliable, truly real-time signals. Essentially the approach is closely related to standard fMRI analysis, except that instead of waiting for all of the data to come in before starting to analyze it, it incrementally estimates neural activation every time a new scan of the brain arrives, while accounting for various forms of noise.

They first show that the method works well on simulated data, and then discuss the results of a real experiment in which 16 people were asked to alternately increase or decrease their own neural response to the noise of the MRI scanner (scanners are very noisy). Neurofeedback was given by showing them a "thermometer" representing activity in their auditory cortex. The real-time estimates of activation turned out to be highly correlated with the estimates given by conventional analysis after the experiment was over - though we're not told how well people were able to use the neurofeedback to regulate their own brains.

Unfortunately, we're not given all of the technical details of the method, so you won't be able to jump into the nearest scanner and look into your brain quite yet, though they do promise that "this method will be made publicly available as part of a real-time functional imaging software package."

Hinds, O., Ghosh, S., Thompson, T., Yoo, J., Whitfield-Gabrieli, S., Triantafyllou, C., & Gabrieli, J. (2010). Computing moment to moment BOLD activation for real-time neurofeedback NeuroImage DOI: 10.1016/j.neuroimage.2010.07.060... Read more »
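The incremental idea can be illustrated with a generic sketch: update a least-squares fit of the BOLD signal against a task regressor each time a new volume arrives, instead of fitting once at the end. This is not the Hinds et al. algorithm (their method also models scanner noise and drift); it only shows the incremental principle, and all names are hypothetical.

```python
# Generic sketch of incremental activation estimation: O(1) running
# sums update a least-squares slope with every new scan. NOT the
# Hinds et al. method, just the incremental principle it builds on.

class IncrementalActivation:
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, regressor_value, bold_value):
        # Accumulate running sums; constant work per new volume
        self.n += 1
        self.sx += regressor_value
        self.sy += bold_value
        self.sxx += regressor_value * regressor_value
        self.sxy += regressor_value * bold_value

    def beta(self):
        # Least-squares slope = current activation estimate
        denom = self.n * self.sxx - self.sx ** 2
        return (self.n * self.sxy - self.sx * self.sy) / denom if denom else 0.0

# Toy usage: synthetic BOLD = 2 * task regressor + constant baseline
est = IncrementalActivation()
for x in [0, 1, 0, 1, 0, 1, 0, 1]:
    est.update(x, 2.0 * x + 10.0)
    # est.beta() could drive a feedback "thermometer" after each scan

print(f"final activation estimate: {est.beta():.2f}")  # -> 2.00
```

The point is that the estimate is available after every volume, which is what makes a feedback "thermometer" possible at all.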


  • March 8, 2010
  • 03:45 PM
  • 846 views

Life Without Serotonin

by Neuroskeptic in Neuroskeptic

Via Dormivigilia, I came across a fascinating paper about a man who suffered from a severe lack of monoamine neurotransmitters (dopamine, serotonin etc.) as a result of a genetic mutation: Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin.

Neuroskeptic readers will be familiar with monoamines. They're psychiatrists' favourite neurotransmitters, and are hence very popular amongst psych drug manufacturers. In particular, it's widely believed that serotonin is the brain's "happy chemical", and that clinical depression is caused by low serotonin while antidepressants work by boosting it. Critics charge that there is no evidence for any of this. My own opinion is that it's complicated, but that while there's certainly no simple relation between serotonin, antidepressants and mood, they are linked in some way. It's all rather mysterious - but then, the functions of serotonin in general are; despite 50 years of research, it's probably the least understood neurotransmitter.

The new paper adds to the mystery, but also provides some important new data. Leu-Semenescu et al report on the case of a 28 year old man, with consanguineous parents, who suffers from a rare genetic disorder, sepiapterin reductase deficiency (SRD). SRD patients lack an enzyme which is involved, indirectly, in the production of the monoamines serotonin and dopamine, and also of melatonin and noradrenaline, which are produced from these two. SRD causes a severe (but not total) deficiency of these neurotransmitters.

The most obvious symptoms of SRD are related to the lack of dopamine, and include poor coordination and weakness, very similar to Parkinson's Disease. An interesting feature of SRD is that these symptoms are mild in the morning, worsen during the day, and improve with sleep. Such diurnal variation is also a hallmark of severe depression, although in depression it's usually the other way around (better in the evening).

The patient reported on in this paper suffered Parkinsonian symptoms from birth, until he was diagnosed with dystonia at age 5 and started on L-dopa to boost his dopamine levels. This immediately and dramatically reversed the problems. But his serotonin synthesis was still impaired, although doctors didn't realize this until age 27. As a result, Leu-Semenescu et al say, he suffered from a range of other, non-dopamine-related symptoms. These included increased appetite - he ate constantly, and was moderately obese - mild cognitive impairment, and disrupted sleep:

The patient reported sleep problems since childhood. He would sleep 1 or 2 times every day since childhood and was awake during more than 2 hours most nights since adolescence. At the time of the first interview, the night sleep was irregular with a sleep onset at 22:00 and offset between 02:00 and 03:00. He often needed 1 or 2 spontaneous, long (2- to 5-h) naps during the daytime.

After doctors did a genetic test and diagnosed SRD, they treated him with 5HTP, a precursor to serotonin. The patient's sleep cycle immediately normalized, his appetite was reduced, and his concentration and cognitive function improved (although that may have been because he was less tired). His before-and-after hypnograms, shown in the paper, make the change clear.

Disruptions in sleep cycle and appetite are likewise common in clinical depression. The direction of the changes in depression varies: loss of appetite is common in the most severe "melancholic" depression, while increased appetite is seen in many other people. For sleep, both daytime sleepiness and night-time insomnia, especially waking up too early, can occur in depression. The most interesting parallel here is that people with depression often show a faster onset of REM (dreaming) sleep, which was also seen in this patient before 5HTP treatment. However, it's not clear what was due to serotonin and what was due to melatonin, because melatonin is known to regulate sleep.

Overall, though, the biggest finding here was a non-finding: this patient wasn't depressed, despite having much reduced serotonin levels. This is further evidence that serotonin isn't the "happy chemical" in any simple sense. On the other hand, the similarities between his symptoms and some of the symptoms of depression suggest that serotonin is doing something in that disorder. This fits with existing evidence from tryptophan depletion studies showing that low serotonin doesn't cause depression in most people, but does re-activate symptoms in people with a history of the disease. As I said, it's complicated...

Smaranda Leu-Semenescu et al. (2010). Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin Sleep, 33 (03), 307-314... Read more »

Smaranda Leu-Semenescu et al. (2010) Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin. Sleep, 33(03), 307-314. info:/

  • August 17, 2009
  • 10:09 AM
  • 844 views

Schizophrenia: The Mystery of the Missing Genes

by Neuroskeptic in Neuroskeptic

It's a cliché, but it's true - "schizophrenia genes" are the Holy Grail of modern psychiatry.

Were they to be discovered, such genes would provide clues towards a better understanding of the biology of the disease, and that could lead directly to the development of better medications. It might also allow "genetic counselling" for parents concerned about their children's risk of schizophrenia.

Perhaps most importantly for psychiatrists, the definitive identification of genes for a mental illness would provide cast-iron proof that psychiatric disorders are "real diseases", and that biological psychiatry is a branch of medicine like any other. Schizophrenia, generally thought of as the most purely "biological" of all mental disorders, is the best bet.

With this in mind, let's look at three articles (1,2,3) published in Nature last month to much excited fanfare along the lines of 'Schizophrenia genes discovered!' All three were based on genome-wide association studies (GWAS). In a GWAS, you examine a huge number of genetic variants in the hope that some of them are associated with the disease or trait you're interested in. Several hundred thousand variants per study is standard at the moment. This is the genetic equivalent of trying to find the person responsible for a crime by fingerprinting everyone in town.

The Nature papers were based on three separate large GWAS projects - the SGENE-plus, the MGS, and the ISC. In total, there were over 8,000 schizophrenia patients and 19,000 healthy controls in these studies - enormous samples by the standards of human genetics research, and large enough that if there were any common genetic variants with even a modest effect on schizophrenia risk, they would probably have found them.

What did they find? On the face of it, not much. 
The MGS(1) "did not produce genome-wide significant findings...power was adequate in the European-ancestry sample to detect very common risk alleles (30–60% frequency) with genotypic relative risks of approximately 1.3 ...The results indicate that there are few or no single common loci with such large effects on risk." In the SGENE-plus(2), likewise, "None of the markers gave P values smaller than our genome-wide significance threshold".

The ISC study(3) did find one significantly associated variant in the Major Histocompatibility Complex (MHC) region on chromosome 6. The MHC is known to be involved in immune function. When the data from all three studies were pooled together, several variants in the same region were also found to be significantly associated with schizophrenia.

Somewhat confusingly, all three papers did this pooling, although they each did it in slightly different ways - the only area in which all three analyses found a result was the MHC region. The SGENE team's analysis, which was larger, also implicated two other, unrelated variants, which were not found in the other two papers.

To summarize, three very large studies found just one "schizophrenia gene" even after pooling their data. The variant, or possibly a cluster of related ones, is presumably involved in the immune system. Although the authors of the Nature papers made much of this finding, the main news here is that there is at most one common variant which raises the relative risk of schizophrenia by even 20%. Given that the baseline risk of schizophrenia is about 1%, there is at most one common gene which raises your risk to more than 1.2%. That's it.

So, what does this mean? There are three possibilities. First, it could be that schizophrenia genes are not "common". 
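That last step is just relative-risk arithmetic, and can be sanity-checked in a couple of lines (using the roughly 1% baseline and 1.2 relative risk quoted above):

```python
# Convert a relative risk into an absolute risk for carriers, using the
# figures quoted above: ~1% baseline lifetime risk, relative risk 1.2.
baseline_risk = 0.01   # lifetime risk of schizophrenia, general population
relative_risk = 1.2    # the largest effect any common variant could have

carrier_risk = baseline_risk * relative_risk
print(f"{carrier_risk:.1%}")  # → 1.2%
```

In other words, even the "best" common variant would move an individual's risk from about 1 in 100 to about 1.2 in 100.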
This possibility is getting a lot of attention at the moment, thanks to a report from a few months back, Walsh et al, suggesting that some cases of schizophrenia are caused by just one rare, high-impact mutation, but a different mutation in each case. In other words, each case of schizophrenia could be genetically almost unique. GWAS studies would be unable to detect such effects.

Second, there could be lots of common variants, each with an effect on risk so tiny that it wasn't found even in these three large projects. The only way to identify them would be to do even bigger studies. The ISC team's paper claims that this is true, on the basis of this graph:

They took all of the variants which were more common in schizophrenics than in controls, even if they were only slightly more common, and totalled up the number of "slight risk" variants each person has.

The graph shows that these "slight risk" markers were more common in people with schizophrenia from two entirely separate studies, and are also more common in people with bipolar disorder, but were not associated with five medical illnesses like diabetes. This is an interesting result, but these variants must have such a tiny effect on risk that finding them would involve spending an awful lot of time (and money) for questionable benefit.

The third and final possibility is that "schizophrenia" is just less genetic than most psychiatrists think, because the true causes of the disorder are not genetic, and/or because "schizophrenia" is an umbrella term for many different diseases with different causes. This possibility is not talked about much in respectable circles, but if genetics doesn't start giving solid results soon, it may be.

Purcell, S., et al. (2009). Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. DOI: 10.1038/nature08185

Shi, J., et al. (2009). Common variants on chromosome 6p22.1 are associated with schizophrenia. Nature. DOI: 10.1038/nature08192
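The "totalling up" step is what has come to be called a polygenic score. A minimal sketch of the idea, with entirely made-up weights and genotypes - this illustrates the technique, not the ISC's actual pipeline:

```python
import random

random.seed(42)

N_VARIANTS = 500  # the real studies tested several hundred thousand

# Toy "discovery" results: one small weight per variant, e.g. the log odds
# ratio of the allele that was slightly more common in cases. All tiny.
weights = [random.gauss(0.0, 0.02) for _ in range(N_VARIANTS)]

def polygenic_score(genotype, weights):
    """Total up a person's risk-allele counts, weighted by effect size."""
    return sum(g * w for g, w in zip(genotype, weights))

def random_genotype(freq=0.3):
    """0, 1 or 2 copies of each risk allele, at a toy allele frequency."""
    return [sum(random.random() < freq for _ in range(2))
            for _ in range(N_VARIANTS)]

# Score a toy "target" sample. The ISC's claim was that such scores run
# higher, on average, in schizophrenia cases than in controls - even
# though no individual variant reaches significance on its own.
scores = [polygenic_score(random_genotype(), weights) for _ in range(100)]
print(f"mean score: {sum(scores) / len(scores):.3f}")
```

The key design point is that the score deliberately keeps variants that fail the significance threshold; any signal emerges only in aggregate, across an independent sample.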

  • November 2, 2009
  • 12:52 PM
  • 839 views

Real vs Placebo Coffee

by Neuroskeptic in Neuroskeptic

Coffee contains caffeine, and as everyone knows, caffeine is a stimulant. We all know how a good cup of coffee wakes you up, makes you more alert, and helps you concentrate - thanks to caffeine.

Or does it? Are the benefits of coffee really due to the caffeine, or are there placebo effects at work? Numerous experiments have tried to answer this question, but a paper published today goes into more detail than most. (It caught my eye just as I was taking my first sip this morning, so I had to blog about it.)

The authors took 60 coffee-loving volunteers and gave them either placebo decaffeinated coffee, or coffee containing 280 mg caffeine. That's quite a lot, roughly equivalent to three normal cups. 30 minutes later, they completed a difficult button-pressing task requiring concentration and sustained effort, plus a task involving mashing buttons as fast as possible for a minute.

The catch was that the experimenters lied to the volunteers. Everyone was told that they were getting real coffee. Half of them were told that the coffee would enhance their performance on the tasks, while the other half were told it would impair it. If the placebo effect was at work, these misleading instructions should have affected how the volunteers felt and acted.

Several interesting things happened. First, the caffeine enhanced performance on the cognitive tasks - it wasn't just a placebo effect. Bear in mind, though, that these people were all regular coffee drinkers who hadn't drunk any caffeine that day. The benefit could have been a reversal of caffeine withdrawal symptoms.

Second, there was a small effect of expectancy on task performance - but it worked in reverse. People who were told that the coffee would make them do worse actually did better than those who expected the coffee to help them. Presumably, this is because they put in extra effort to try to overcome the supposedly negative effects. 
This paradoxical placebo response reminds us that there's more to "the placebo effect" than meets the eye.

Finally, no-one who got the decaf noticed that it didn't actually contain caffeine, and the volunteers' ratings of their alertness and mood didn't differ between the caffeine and placebo groups. So, this suggests that if you were to secretly replace someone's favorite blend with decaf, they wouldn't notice - although their performance would nevertheless decline. Bear that in mind when considering pranks to play on colleagues or flatmates.

It looks like science has just confirmed another piece of The Wisdom of Seinfeld:

Elaine: Jerry likes Morning Thunder.
George: Jerry drinks Morning Thunder? Morning Thunder has caffeine in it. Jerry doesn't drink caffeine.
Elaine: Jerry doesn't know Morning Thunder has caffeine in it.
George: You don't tell him?
Elaine: No. And you should see him. Man, he gets all hyper, he doesn't even know why! He loves it. He walks around going, "God, I feel great!"
- Seinfeld, "The Dog"

Harrell, P.T., & Juliano, L.M. (2009). Caffeine expectancies influence the subjective and behavioral effects of caffeine. Psychopharmacology. PMID: 19760283

  • February 12, 2010
  • 05:19 PM
  • 838 views

Dope, Dope, Dopamine

by Neuroskeptic in Neuroskeptic

When you smoke pot, you get stoned. Simple.

But it's not really, because being stoned can involve many different effects, depending upon the user's mental state, the situation, the variety and strength of the marijuana, and so forth. It can be pleasurable, or unpleasant. It can lead to relaxed contentment, or anxiety and panic. And it can feature hallucinations and alterations of thinking, some of which resemble psychotic symptoms.

In Central nervous system effects of haloperidol on THC in healthy male volunteers, Liem-Moolenaar et al tested whether an antipsychotic drug would modify the psychoactive effects of Δ9-THC, the main active ingredient in marijuana. They took healthy male volunteers, who had moderate experience of smoking marijuana, and gave them inhaled THC. They were pretreated with 3 mg haloperidol, or placebo.

They found that haloperidol reduced the "psychosis-like" aspects of the marijuana intoxication. However, it didn't reverse the effects of THC on cognitive performance, the sedative effects, or the user's feelings of "being high".

This makes sense, if you agree with the theory that the psychosis-like effects of THC are related to dopamine. Like all antipsychotics, haloperidol blocks ...

Liem-Moolenaar, M., Te Beek, E., de Kam, M., Franson, K., Kahn, R., Hijman, R., Touw, D., & van Gerven, J. (2010) Central nervous system effects of haloperidol on THC in healthy male volunteers. Journal of Psychopharmacology. DOI: 10.1177/0269881109358200  

  • September 8, 2010
  • 08:54 AM
  • 835 views

Autistic Toddlers Like Screensavers

by Neuroskeptic in Neuroskeptic

Young children with autism prefer looking at geometric patterns over looking at other people. At least, some of them do. That's according to a new study - Preference for Geometric Patterns Early in Life As a Risk Factor for Autism.

Pierce et al took 110 toddlers (age 14 to 42 months). Some of them had autism, some had "developmental delay" but not autism, and some were normally developing.

The kids were shown a one-minute video clip. One half of the screen showed some kids doing yoga, while the other was a set of ever-changing complex patterns. A bit like a screensaver or a kaleidoscope. Eye-tracking apparatus was used to determine which side of the screen each child was looking at.

What happened? Both the healthy control children, and the developmentally delayed children, showed a strong preference for the "social" stimuli - the yoga kids. However, the toddlers with an autism spectrum disorder showed a much wider range of preferences. 40% of them preferred the geometric patterns. Age wasn't a factor.

Intuitively this makes sense, because one of the classic features of autism is a fascination with moving shapes such as wheels, fans, and so on. The authors conclude that:

A preference for geometric patterns early in life may be a novel and easily detectable early signature of infants and toddlers at risk for autism.

But only a minority of the autism group showed this preference, remember. 
As you can see from the plot above, they spanned the whole range - and over half behaved entirely normally.

There was no difference between the "social" and "geometrical" halves of the autism group on measures of autism symptoms or IQ, so it wasn't just that only "more severe" autism was associated with an abnormal preference.

They re-tested many of the kids a couple of weeks later, and found a strong correlation between their preferences on both occasions, suggesting that it's a real fondness for one over the other - rather than just random eye-wandering.

So this is an interesting result, but it's not clear that it would be of much use for diagnosis.

Pierce, K., Conant, D., Hazin, R., Stoner, R., & Desmond, J. (2010). Preference for Geometric Patterns Early in Life As a Risk Factor for Autism. Archives of General Psychiatry. PMID: 20819977
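The underlying eye-tracking measure is just the proportion of valid gaze samples falling on the geometric half of the screen. Here's a minimal sketch with hypothetical data - the sample labels and the handling of track loss are illustrative assumptions, not Pierce et al's actual processing:

```python
def fixation_preference(samples):
    """Fraction of valid gaze samples on the geometric half of the screen.

    samples: one entry per eye-tracker sample, each either "geo",
    "social", or None (track loss, e.g. a blink or a look-away).
    """
    valid = [s for s in samples if s is not None]
    if not valid:
        return None  # no usable data for this child
    return sum(s == "geo" for s in valid) / len(valid)

# Hypothetical recording, shortened to five samples for readability;
# a real one-minute clip yields thousands of samples.
samples = ["geo", "geo", "social", None, "geo"]
print(fixation_preference(samples))  # → 0.75
```

A child whose score sits well above 0.5 across repeated viewings would count as preferring the geometric stimulus; the test-retest correlation the authors report is what justifies treating the score as a stable preference rather than noise.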

  • November 5, 2009
  • 07:29 PM
  • 834 views

The Politics of Psychopharmacology

by Neuroskeptic in Neuroskeptic

It's always nice when a local boy makes good in the big wide world. Many British neuroscientists and psychiatrists have been feeling rather proud this week following the enormous amount of attention given to Professor David Nutt, formerly the British government's chief adviser on illegal drugs.

Formerly being the key word. Nutt was sacked (...write your own "nutsack" pun if you must) last Friday, prompting a remarkable amount of condemnation. Critics included the rest of his former organisation, the Advisory Council on the Misuse of Drugs (ACMD), and the Government's Science Minister. The UK's Chief Scientist also spoke in favour of Nutt's views. Journalists joined in the fun with headlines like "politicians are intoxicated by cowardice".

Even Nature today ran a bluntly-worded editorial - "The sacking of a government adviser on drugs shows Britain's politicians can't cope with intelligent debate... the position of the Labour government and of the leading opposition party, the Conservatives, which vigorously supported Nutt's sacking, has no merit at all. It deals a significant blow both to the chances of an informed and reasoned debate over illegal drugs, and to the parties' own scientific credibility." They also have an interview with the man himself.

*

What happened? The short answer is a lecture Nutt gave on the 10th October, Estimating Drug Harms: A Risky Business? I'd recommend reading it (it's free). 
The Government's dismissal e-mail gave two reasons why he had to go - firstly, "Your recent comments have gone beyond [matters of evidence] and have been lobbying for a change of government policy" and secondly, "It is important that the government's messages on drugs are clear and as an advisor you do nothing to undermine public understanding of them."

Many people believe that Nutt was fired because he argued for the liberalization of drug laws, or because he claimed that the harms of some illegal drugs, such as cannabis, are less severe than those of legal substances like tobacco and alcohol. On this view, the government's actions were "shooting the messenger", or dismissing an expert because they didn't like to hear the facts. It seems to me, however, that the truth is a little more nuanced, and even more stupid.

*

Nutt's lecture, if you read the whole thing as opposed to the quotes in the media, is remarkably mild. For instance, at no point does he suggest that any drug which is currently illegal should be made legal. The changes he "lobbies for" are ones that the ACMD have already recommended, and this lobbying consists of nothing more than tentative criticism of the stated reasons for the rejection of the ACMD's advice. The ACMD is the government's official expert body on illicit drugs, remember.

The issue Nutt focusses on is the question of whether cannabis should be a "Class C" or a "Class B" illegal drug, B being "worse", and carrying stricter penalties. It was Class B until 2004, when it was made Class C. In 2007, the Government asked the ACMD to advise on whether it should be reclassified back up to Class B. This was in response to concerns about the impact of cannabis on mental health, specifically the possibility that it raises the risk of psychotic illnesses.

The resulting ACMD report is available on the Government's website. 
They concluded that while cannabis use is certainly not harmless, "the harms caused by cannabis are not considered to be as serious as drugs in class B and therefore it should remain a class C drug."

Despite this, the Government took the decision to reclassify cannabis as Class B. In his lecture Nutt criticizes this decision - slightly. Nutt quotes the Home Secretary as saying, in response to the ACMD's report - "Where there is a clear and serious problem [i.e. cannabis health problems], but doubt about the potential harm that will be caused, we must err on the side of caution and protect the public. I make no apology for that. I am not prepared to wait and see."

Nutt describes this reasoning as - "the precautionary principle - if you’re not sure about a drug harm, rank it high... at first sight it might seem the obvious decision – why wouldn’t you take the precautionary principle? We know that drugs are harmful and that you can never evaluate a drug over the lifetime of a whole population, so we can never know whether, at some point in the future, a drug might lead to or cause more harm than it did early in its use."

But, he says, there's more to it than this. Firstly, we don't know anything about how classification affects drug use. The whole idea of upgrading cannabis to Class B to protect the public relies on the assumption that it will reduce drug use by deterring people from using it. But there is no empirical evidence as to whether this actually happens. As Nutt points out, stricter classification might equally well increase use by making it seem forbidden, and hence, cooler. (If you think that's implausible, you have forgotten what it is like to be 16.) We just don't know.

Second, he says, the precautionary principle devalues the evidence and is thereby self-defeating, because it means that people will not take any warnings about drug harms seriously - "[it] leads to a position where people really don’t know what the evidence is. 
They see the classification, they hear about evidence and they get mixed messages. There’s quite a lot of anecdotal evidence that public confidence in the scientific probity of government has been undermined in this kind of way." Can anyone really dispute this?

Finally, he raises the MMR vaccine scare as an example of the precautionary principle ironically leading to concrete harms. Concerns were raised about the safety of a vaccine, on the basis of dubious science. As a result, vaccine coverage fell, and the incidence of measles, mumps and rubella in Britain rose for the first time in decades. The vaccine harmed no-one; these diseases do. We just don't know whether cannabis reclassification will have similar unintended consequences.

That's what the Home Secretary described as "lobbying for a change of government policy". I wish all lobbyists were this reasonable.

The Home Secretary's second charge against Nutt - "It is important that the government's messages on drugs are clear..." - is even more specious. Nutt's messages were the ACMD's messages, and as he points out, the only lack of clarity comes from the fact that the government and their own Advisory Council disagree with each other. This is hardly the ACMD's fault, and it's certainly not Nutt's fault for pointing it out.

All of this is doubly ridiculous because of one easily-forgotten fact - cannabis was downgraded from Class B to Class C in 2004 by the present Labour Party government. Nutt's "lobbying" therefore consists of a recommendation that the government do something they themselves previously did. And if the government are worried about the clarity of their message, the fact that they themselves were saying that cannabis was benign enough to be a Class C drug just 5 years ago might be somewhat relevant.

Nature. (2009) A drug-induced low. Nature, 462(7269), 11-12. DOI: 10.1038/462011b  

Cressey, D. (2009). Sacked science adviser speaks out. Nature.
