Neuroskeptic, Neuroskeptic

757 posts · 711,969 views

Neuroskeptic
440 posts


  • September 1, 2011
  • 03:06 AM
  • 1,174 views

Men, Women and Spatial Intelligence

by Neuroskeptic in Neuroskeptic

Do men and women differ in their cognitive capacities? It's been a popular topic of conversation since as far back as we have records of what people were talking about.

While it's now (almost) generally accepted that men and women are at most only very slightly different in average IQ, there are still a couple of lines of evidence in favor of a gender difference.

First, there's the idea that men are more variable in their intelligence, so there are more very smart men, and also more very stupid ones. This averages out so the mean is the same.

Second, there's the theory that men are on average better at some things, notably "spatial" stuff involving the ability to mentally process shapes, patterns and images, while women are better at social, emotional and perhaps verbal tasks. Again, this averages out overall.

According to proponents, these differences explain why men continue to dominate the upper echelons of things like mathematics, physics, and chess. These all tap spatial processing, and since men are more variable, there'll be more extremely high achievers - Nobel Prizes, grandmasters. (There are also presumably more men who are rubbish at these things, but we don't notice them.)

The male spatial advantage has been reported in many parts of the world, but is it "innate", something to do with the male brain? A new PNAS study says - probably not, it's to do with culture. But I'm not convinced.

The authors went to India and studied two tribes, the Khasi and the Karbi. Both live right next to each other in the hills of Northeastern India and genetically, they're closely related. Culturally, though, the Karbi are patrilineal - property and status are passed down from father to son, with women owning no land of their own. The Khasi are matrilineal, with men forbidden to own land. Moreover, Khasi women also get just as much education as the men, while Karbi women get much less.

The authors took about 1,200 people from 8 villages - 4 per culture - and got them to do a jigsaw puzzle. The quicker you do it, the better your spatial ability. Here were the results; I added the gender-stereotypical colours.

In the patrilineal group, women did substantially worse on average (remember that more time means worse). In the matrilineal society, they performed as well as men - well, a tiny bit worse, but it wasn't significant. Differences in education explained some of the effect, but only a small part of it.

OK. This was a large study, and the results are statistically very strong. However, there's a curious result that the authors don't discuss in the paper - the matrilineal group just did much better overall. Looking at the men, they were 10 seconds faster in the matrilineal culture. That's nearly as big as the gender difference in the patrilineal group (15 seconds)! The individual variability was also much higher in the patrilineal society, for both genders.

Now, maybe, this is a real effect. Maybe being in a patrilineal society makes everyone less spatially aware, not just women; that seems a bit of a stretch, though.

There's also the problem that this study essentially only has two datapoints. One society is matrilineal and has a low gender difference in visuospatial processing. One is patrilineal and has a high difference. But that's just not enough data to conclude that there's a correlation between the two things, let alone a causal relationship; you would need to study lots of societies to do that.
Personally, I have no idea what drives the difference, but this study is a reminder of how difficult the question is.

Hoffman M, Gneezy U, & List JA (2011). Nurture affects gender differences in spatial abilities. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21876159
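To make the "only two datapoints" worry concrete, here is a minimal simulation sketch. All numbers are invented for illustration (the 14-second contrast is roughly the one described above, but the spread across societies is a pure assumption); it just shows that sampling one matrilineal and one patrilineal society can produce a large apparent difference even when kinship system has no effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: kinship system has NO effect on the gender gap.
# Each society's male-female gap (in seconds) just varies around 10s.
def random_society_gap():
    return rng.normal(loc=10.0, scale=8.0)  # assumed spread across societies

# Sample one matrilineal and one patrilineal society, many times over,
# and ask how often the patrilineal one happens to show a gap at least
# 14 seconds larger - roughly the contrast reported in the post.
trials = 100_000
hits = 0
for _ in range(trials):
    gap_matrilineal = random_society_gap()
    gap_patrilineal = random_society_gap()
    if gap_patrilineal - gap_matrilineal >= 14.0:
        hits += 1

print(f"Chance of seeing the pattern with no real effect: {hits / trials:.1%}")
```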

Hoffman M, Gneezy U, & List JA. (2011) Nurture affects gender differences in spatial abilities. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21876159  

  • October 8, 2009
  • 03:18 PM
  • 1,173 views

A Vaccine For White Line Fever?

by Neuroskeptic in Neuroskeptic

A study claims that it's possible to immunize against cocaine: Cocaine Vaccine for the Treatment of Cocaine Dependence in Methadone-Maintained Patients. But does it work? And will it be useful?

The idea of an anti-drug vaccine is not new; as DrugMonkey explains in his post on this paper, monkeys were being given experimental anti-morphine vaccines as long ago as the 1970s. This one has been under development for years, but this is the first randomized controlled trial to investigate whether it helps addicts to use less of the drug.

Martell et al, a Yale-based group, recruited 115 patients. They all used both cocaine and opiates, and were given methadone treatment to try to reduce their opiate use. The reason why the authors chose to focus on these patients is that the methadone keeps people coming back for more and makes them less likely to drop out of the study, or as they put it, "retention in methadone maintenance programs is substantially better than in primary cocaine treatment programs. We also offered subjects $15 per week to enhance retention."

The vaccine consists of a bacterial protein (cholera toxin B-subunit) chemically linked to a cocaine-like molecule, succinylnorcocaine. Like all vaccines, it works by provoking an immune response. The bacterial protein triggers the production of antibodies, proteins which recognize and bind to specific targets. In this case, the antibodies bind cocaine (anti-cocaine IgG) because of the succinylnorcocaine in the vaccine. Once a molecule of cocaine is bound to the antibody, it's effectively out of commission, as it cannot enter the brain. So, the vaccine should reduce or abolish the effects of the drug. The control group were given a placebo vaccine.

The results? Biologically speaking, the vaccine worked, but in some people more than others. Out of the 55 subjects who were given the active vaccine, all but one produced anti-cocaine IgG. However, the amount of antibodies produced varied widely. Also, the response was short-lived. The vaccine was given 5 times over the first 12 weeks, but antibody levels did not peak until week 16, after which they fell rapidly.

And the key question - did it reduce cocaine use? Well, sort of. The authors measured drug use in terms of the proportion of urine samples which were cocaine-free. In the active vaccine group, the proportion of drug-free urine samples was higher over weeks 9 to 16, when the antibody levels were high, and this was statistically significant (treatment x time interaction: Z=2.4, P=.01). As expected, the benefit was greater in the people who made lots of antibodies (43 μg/mL) (treatment x time interaction: Z=4.8, P<.001). But the effect was pretty small:

The bottom line was about 10% more urine samples testing negative, and even that was only true in the minority (38%) of people who responded well to the vaccine! Not very impressive, but on the other hand, the number of drug-free urine tests is a very crude measure of cocaine use. It doesn't tell us how much coke the patients used at a time, or how many times they used it per day.

Also, bear in mind that if it works, this vaccine might increase cocaine use in some people, at least at first. By binding and inactivating some of the cocaine in the bloodstream, the vaccine would mean you'd need to take more of the drug in order to feel the effects.
It's curious that the authors relied on just one crude outcome measure and didn't ask the patients to describe the effects in more detail.

So, these are some interesting results, but the vaccine clearly needs a lot of work before it becomes clinically useful, as the authors admit: "Attaining high (43 μg/mL) IgG anticocaine antibody levels was associated with significantly reduced cocaine use, but only 38% of the vaccinated subjects attained these IgG levels and they had only 2 months of adequate cocaine blockade. Thus, we need improved vaccines and boosters." Quite an admission, given that this study was partially funded by Celtic Pharmaceuticals, who make the vaccine.

It's also questionable whether any vaccine will be truly beneficial in treating cocaine addiction. Such a vaccine would be a way of reducing the temptation to use cocaine. In this sense, it would be just like naltrexone for heroin addicts, which blocks the effects of the drug, or disulfiram (Antabuse) for alcoholics, which makes drinking alcohol cause horrible side effects. Essentially, these treatments are ways of artificially boosting your "self-control", and they work.

But we've had naltrexone and disulfiram for many years. They're cheap and safe. Yet we still have heroin addicts and alcoholics. This is not to say that they're never helpful - some people find them very useful. But they haven't eradicated addiction, because addiction is not something that can be cured with a pill or an injection.

Addiction is a pattern of behaviour, and medications might help people to break free of it, but the causes of addiction are social, economic and psychological as well as biological. People turn to drugs and alcohol when there's nowhere else to turn, and unfortunately, there's no vaccine against that.

Martell BA, Orson FM, Poling J, Mitchell E, Rossen RD, Gardner T, & Kosten TR (2009). Cocaine vaccine for the treatment of cocaine dependence in methadone-maintained patients: a randomized, double-blind, placebo-controlled efficacy trial. Archives of General Psychiatry, 66 (10), 1116-23. PMID: 19805702
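For readers unfamiliar with this kind of outcome measure, here is a minimal sketch of what "proportion of cocaine-free urine samples per week, compared between arms" looks like in practice. The numbers, group sizes and effect are all invented, and this is not the paper's statistical model (they tested a treatment-by-time interaction on the real data); it just illustrates how crude the measure is.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Synthetic records: one row per arm per week, with the fraction of that
# week's urine samples testing cocaine-free. Entirely made-up numbers.
weeks = np.arange(1, 25)
rows = []
for arm, base_rate in [("placebo", 0.20), ("vaccine", 0.25)]:
    for week in weeks:
        # Pretend the vaccine arm improves a little during weeks 9-16,
        # when antibody levels were said to be high.
        rate = base_rate + (0.10 if arm == "vaccine" and 9 <= week <= 16 else 0.0)
        negatives = rng.binomial(n=55, p=rate)
        rows.append({"arm": arm, "week": week,
                     "prop_cocaine_free": negatives / 55})

df = pd.DataFrame(rows)

# Average proportion of cocaine-free samples, overall and in weeks 9-16.
print(df.groupby("arm")["prop_cocaine_free"].mean())
print(df[df.week.between(9, 16)].groupby("arm")["prop_cocaine_free"].mean())
```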

  • January 20, 2010
  • 10:45 AM
  • 1,173 views

The Sweet Taste of Cannabinoids

by Neuroskeptic in Neuroskeptic

Every stoner knows about the munchies, the fondness for junk food that comes with smoking marijuana. Movies have been made about it.

It's not just that being on drugs makes you like eating: stimulants, like cocaine and amphetamine, decrease appetite. The munchies are something specific to marijuana. But why? New research from a Japanese team reveals that marijuana directly affects the cells in the taste buds which detect sweet flavors - Endocannabinoids selectively enhance sweet taste.

Yoshida et al studied mice, and recorded the electrical signals from the chorda tympani (CT), which carries taste information from the tongue to the brain. They found that injecting the mice with two chemicals, 2-AG and AEA, markedly increased the strength of the signals produced in response to sweet tastes - such as sugar, or the sweetener saccharin. However, neither had any effect on the strength of the response to other flavors, like salty, bitter, or sour. Mice given endocannabinoids were also more eager to eat and drink sweet things, which confirms previous findings.

2-AG and AEA are both endocannabinoids, an important class of neurotransmitters. Marijuana's main active ingredient, Δ9-THC, works by mimicking the action of endocannabinoids. Although Δ9-THC wasn't tested in this study, it's extremely likely that it has the same effects as 2-AG and AEA.

In follow-up experiments, Yoshida et al found that endocannabinoids enhance sweet taste responses by acting on cannabinoid type 1 (CB1) receptors on the tongue's sweet taste cells themselves. In fact, over half of the sweet receptor cells expressed CB1 receptors!

This is an important finding, because CB1 receptors are already known to regulate the pleasurable response to sweet foods (amongst other things) in the brain. These new data don't challenge this, but suggest that CB1 also modulates the most basic aspects of sweet taste perception. The munchies are probably caused by Δ9-THC acting at multiple levels of the nervous system.

This paper also sheds light on the question of how CB1 antagonists work. Given that drugs which activate CB1 make people eat more, it would make sense if CB1 blockers made people eat less, and therefore lose weight - a kind of anti-munchies effect. And indeed they do, which is why rimonabant, a CB1 antagonist, was released onto the market in 2006 as a weight loss drug.

It worked pretty well, although unfortunately it also caused clinical depression in some people, so it was banned in Europe in 2008 and was never approved in the USA for the same reason. The depression caused by rimonabant was almost certainly due to its ability to block CB1 receptors in the brain, but Yoshida et al's findings suggest that a CB1 antagonist which didn't enter the brain, but only affected peripheral nerves like the taste buds, might be able to make people less fond of sweet foods without causing the same side effects. Who knows - in a few years you might even be able to buy CB1 antagonist chewing gum to help you stick to your diet...

Yoshida, R., Ohkuri, T., Jyotaki, M., Yasuo, T., Horio, N., Yasumatsu, K., Sanematsu, K., Shigemura, N., Yamamoto, T., Margolskee, R., & Ninomiya, Y. (2009). Endocannabinoids selectively enhance sweet taste. Proceedings of the National Academy of Sciences, 107 (2), 935-939. DOI: 10.1073/pnas.0912048107

Yoshida, R., Ohkuri, T., Jyotaki, M., Yasuo, T., Horio, N., Yasumatsu, K., Sanematsu, K., Shigemura, N., Yamamoto, T., Margolskee, R.... (2009) Endocannabinoids selectively enhance sweet taste. Proceedings of the National Academy of Sciences, 107(2), 935-939. DOI: 10.1073/pnas.0912048107  

  • July 3, 2011
  • 04:03 AM
  • 1,173 views

The NeuROFLscience of Jokes

by Neuroskeptic in Neuroskeptic

A new paper in the Journal of Neuroscience investigates the neural basis of humour: Why Clowns Taste Funny.

The authors note that some things are funny because of ambiguous words. For example:

Q: Why don't cannibals eat clowns?
A: Because they taste funny!

Previous studies, apparently, have shown that these kinds of jokes lead to activation in the lIFG (left inferior frontal gyrus), although it's also involved in processing ambiguity that's not funny, and indeed, language in general.

In this study they scanned people with fMRI while playing them audio clips of sentences that were either funny or not, and that either contained ambiguity or not. Examples of non-funny ambiguity included crackers like this:

Q: What happened to the post?
A: As usual, it was given to the best-qualified applicant.

They found that, relative to straightforward ones, ambiguous sentences led to increased activation in two areas, the lIFG and also the left ITG. That fits with previous work. By contrast, funny stimuli, whether ambiguous or not, sent the brain into overdrive, with humour causing activation all over a wide range of hilarious areas such as the amygdala, ventral striatum, hypothalamus, temporal lobes and more. Many of these areas are known to be involved in emotion and pleasure, although some are fairly random, such as visual area BA19. There were strong associations between BOLD signal change and funniness in the midbrain, the left ventral striatum, and the left anterior and posterior IFG.

The problem is, like so many neuroimaging studies, it's not clear what this adds to our understanding of the topic. All this really shows is that linguistic ambiguity activates language areas, and enjoyable stimuli activate pleasure areas (amongst many others); it doesn't tell us why some things are funny.

So more research is needed, and future neuro-humour studies will need a new set of neuro-jokes in order to maximize the laughs. Here are a few I came up with:

Q: Why did the chicken cross the road?
A: Because of activation in the motor cortex, causing muscle contractions in his legs.

Q: What neuroimaging methodology is most useful for studying the brains of cats and dogs?
A: PET scanning.

Knock knock.
Who's there?
John.
I doubt that. The 'self' is an illusion. The concept of 'John' as an individual is incompatible with modern neuroscience.

Bekinschtein TA, Davis MH, Rodd JM, & Owen AM (2011). Why Clowns Taste Funny: The Relationship between Humor and Semantic Ambiguity. The Journal of Neuroscience, 31 (26), 9665-71. PMID: 21715632

Bekinschtein TA, Davis MH, Rodd JM, & Owen AM. (2011) Why Clowns Taste Funny: The Relationship between Humor and Semantic Ambiguity. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31(26), 9665-71. PMID: 21715632  

  • February 9, 2014
  • 02:10 PM
  • 1,173 views

Is Ultrasonic Brain Stimulation The Future?

by Neuroskeptic in Neuroskeptic_Discover

A paper just out in Nature Neuroscience proposes a new tool for neuroscientists who want to stimulate the brain – ultrasound. There are already a number of established ways of modulating human brain activity. As neuronal firing is essentially electrical, most of these methods rely on electricity – such as TDCS – or on magnetic […]

  • March 20, 2010
  • 03:00 PM
  • 1,172 views

Absinthe Fact and Fiction

by Neuroskeptic in Neuroskeptic

Absinthe is a spirit. It's very strong, and very green. But is it something more? I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink, Padosch et al's "Absinthism: a fictitious 19th century syndrome with present impact".

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour. It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most, but not all, European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, due to being a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose. But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison" - something that is easily forgotten.

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others, I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; that Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas apparently contains especially concentrated alcohol, or hallucinogens, or maybe even cocaine.

Slightly more serious is the theory that drinking different kinds of drinks instead of sticking to just one gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this, although in my drinking experience it's not true. But I'm not sure that it's completely bogus, as I have heard somewhat plausible explanations, e.g. that drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this, but The Poison Review is an excellent blog I've recently discovered, all about poisons, toxins, drugs, and such fun stuff.

Padosch SA, Lachenmeier DW, & Kröner LU (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance Abuse Treatment, Prevention, and Policy, 1 (1), 14. PMID: 16722551

Padosch SA, Lachenmeier DW, & Kröner LU. (2006) Absinthism: a fictitious 19th century syndrome with present impact. Substance abuse treatment, prevention, and policy, 1(1), 14. PMID: 16722551  

  • February 25, 2013
  • 04:17 PM
  • 1,170 views

Vladimir Lenin’s Stoney Brain

by Neuroskeptic in Neuroskeptic_Discover

There’s been a lot of discussion lately about Einstein’s brain. Less well-known, but equally fascinating, is the case of Lenin’s cerebrum – for just like Albert, the founder of the Soviet Union was fated to end up as a series of preserved slices. Lenin died of a series of strokes at the young age of [...]

  • August 2, 2011
  • 04:21 AM
  • 1,169 views

The 30something Brain

by Neuroskeptic in Neuroskeptic

Brain maturation continues for longer than previously thought - well up until age 30. That's according to two papers just out, which may be comforting for those lamenting the fact that they're nearing the big Three Oh. This challenges the widespread view that maturation is essentially complete by the end of adolescence, in the early to mid 20s.

Petanjek et al show that the number of dendritic spines in the prefrontal cortex increases during childhood and then rapidly falls during puberty - which probably represents a kind of "pruning" process. That's nothing new, but they also found that the pruning doesn't stop when you hit 20. It continues, albeit gradually, up to 30 and beyond. This study looked at post-mortem brain samples taken from people who died at various different ages.

Lebel and Beaulieu used diffusion MRI to examine healthy living brains. They scanned 103 people and everyone got at least 2 scans a few years apart, so they could look at changes over time. They found that the fractional anisotropy (a measure of the "integrity") of different white matter tracts varies with age in a non-linear fashion. All tracts become stronger during childhood, and most peak at about 20. Then they start to weaken again. But not all of them - others, such as the cingulum, take longer to mature. Also, total white matter volume continues rising well up to age 30.

Plus, there's a lot of individual variability. Some people's brains were still maturing well into their late 20s, even in white matter tracts that on average are mature by 20. Some of this will be noise in the data, but not all of it. These results also fit nicely with this paper from last year that looked at functional connectivity of brain activity.

So, while most maturation does happen before and during adolescence, these results show that it's not a straightforward case of The Adolescent Brain turning suddenly into The Adult Brain when you hit 21, at which point it solidifies into the final product.

Lebel C, & Beaulieu C (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience, 31 (30), 10937-47. PMID: 21795544

Petanjek, Z., Judas, M., Simic, G., Rasin, M., Uylings, H., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1105108108
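As an illustration of what "varies with age in a non-linear fashion" means in practice, here is a minimal curve-fitting sketch on made-up numbers. This is not the authors' data or their actual trajectory model; it just shows one simple way to estimate the age at which a rise-then-fall measure like fractional anisotropy peaks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for one tract's fractional anisotropy (FA) across ages.
# Invented numbers for illustration only - not the study's data or model.
ages = rng.uniform(5, 32, size=103)
true_fa = 0.45 + 0.012 * ages - 0.0003 * ages**2       # rises, peaks, declines
fa = true_fa + rng.normal(scale=0.01, size=ages.size)  # measurement noise

# A simple way to capture a "rise then fall" trajectory: fit a quadratic
# and read off where the fitted curve peaks.
c2, c1, c0 = np.polyfit(ages, fa, deg=2)
peak_age = -c1 / (2 * c2)
print(f"Fitted FA peak at roughly age {peak_age:.1f}")
```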

Lebel C, & Beaulieu C. (2011) Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31(30), 10937-47. PMID: 21795544  

Petanjek, Z., Judas, M., Simic, G., Rasin, M., Uylings, H., Rakic, P., & Kostovic, I. (2011) Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1105108108  

  • August 17, 2009
  • 10:09 AM
  • 1,166 views

Schizophrenia: The Mystery of the Missing Genes

by Neuroskeptic in Neuroskeptic

It's a cliché, but it's true - "schizophrenia genes" are the Holy Grail of modern psychiatry. Were they to be discovered, such genes would provide clues towards a better understanding of the biology of the disease, and that could lead directly to the development of better medications. It might also allow "genetic counselling" for parents concerned about their children's risk of schizophrenia.

Perhaps most importantly for psychiatrists, the definitive identification of genes for a mental illness would provide cast-iron proof that psychiatric disorders are "real diseases", and that biological psychiatry is a branch of medicine like any other. Schizophrenia, generally thought of as the most purely "biological" of all mental disorders, is the best bet.

With this in mind, let's look at three articles (1, 2, 3) published in Nature last month to much excited fanfare along the lines of 'Schizophrenia genes discovered!' All three were based on genome-wide association studies (GWAS). In a GWAS, you examine a huge number of genetic variants in the hope that some of them are associated with the disease or trait you're interested in. Several hundred thousand variants per study is standard at the moment. This is the genetic equivalent of trying to find the person responsible for a crime by fingerprinting everyone in town.

The Nature papers were based on three separate large GWAS projects - the SGENE-plus, the MGS, and the ISC. In total, there were over 8,000 schizophrenia patients and 19,000 healthy controls in these studies - enormous samples by the standards of human genetics research, and large enough that if there were any common genetic variants with even a modest effect on schizophrenia risk, they would probably have found them.

What did they find? On the face of it, not much. The MGS (1) "did not produce genome-wide significant findings...power was adequate in the European-ancestry sample to detect very common risk alleles (30–60% frequency) with genotypic relative risks of approximately 1.3 ...The results indicate that there are few or no single common loci with such large effects on risk." In the SGENE-plus (2), likewise, "None of the markers gave P values smaller than our genome-wide significance threshold".

The ISC study (3) did find one significantly associated variant in the Major Histocompatibility Complex (MHC) region on chromosome 6. The MHC is known to be involved in immune function. When the data from all three studies were pooled together, several variants in the same region were also found to be significantly associated with schizophrenia. Somewhat confusingly, all three papers did this pooling, although they each did it in slightly different ways - the only area in which all three analyses found a result was the MHC region. The SGENE team's analysis, which was larger, also implicated two other, unrelated variants, which were not found in the other two papers.

To summarize, three very large studies found just one "schizophrenia gene" even after pooling their data. The variant, or possibly cluster of related ones, is presumably involved in the immune system. Although the authors of the Nature papers made much of this finding, the main news here is that there is at most one common variant which raises the relative risk of schizophrenia by even just 20%. Given that the baseline risk of schizophrenia is about 1%, there is at most one common gene which raises your risk to more than 1.2%. That's it.

So, what does this mean? There are three possibilities.
First, it could be that schizophrenia genes are not "common". This possibility is getting a lot of attention at the moment, thanks to a report from a few months back, Walsh et al, suggesting that some cases of schizophrenia are caused by just one rare, high-impact mutation, but a different mutation in each case. In other words, each case of schizophrenia could be genetically almost unique. GWAS studies would be unable to detect such effects.

Second, there could be lots of common variants, each with an effect on risk so tiny that it wasn't found even in these three large projects. The only way to identify them would be to do even bigger studies. The ISC team's paper claims that this is true, on the basis of this graph: they took all of the variants which were more common in schizophrenics than in controls, even if they were only slightly more common, and totalled up the number of "slight risk" variants each person has. The graph shows that these "slight risk" markers were more common in people with schizophrenia from two entirely separate studies, and were also more common in people with bipolar disorder, but were not associated with five medical illnesses like diabetes. This is an interesting result, but these variants must have such a tiny effect on risk that finding them would involve spending an awful lot of time (and money) for questionable benefit.

The third and final possibility is that "schizophrenia" is just less genetic than most psychiatrists think, because the true causes of the disorder are not genetic, and/or because "schizophrenia" is an umbrella term for many different diseases with different causes. This possibility is not talked about much in respectable circles, but if genetics doesn't start giving solid results soon, it may be.

Purcell, S., et al. (2009). Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. DOI: 10.1038/nature08185

Shi, J., et al. (2009). Common variants on chromosome 6p22.1 are associated with schizophrenia. Nature. DOI: 10.1038/nature08192
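To make the "totalling up slight-risk variants" idea concrete, here is a minimal sketch of a polygenic score of the kind described: a weighted count of risk alleles across weakly associated variants. The data are synthetic and the selection rule is a stand-in (the ISC used p-value thresholds and other details not reproduced here), so treat it as an illustration of the concept rather than their pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

n_people, n_variants = 1000, 5000

# Synthetic genotypes: 0, 1 or 2 copies of the "risk" allele per variant.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Per-variant effect estimates from a (hypothetical) discovery GWAS:
# tiny log odds ratios, most of them near zero.
log_odds = rng.normal(loc=0.0, scale=0.02, size=n_variants)

# Keep only variants passing some lenient threshold in the discovery sample
# (a stand-in for the ISC's p-value cutoffs).
selected = np.abs(log_odds) > 0.01

# Polygenic score = weighted count of risk alleles across selected variants.
scores = genotypes[:, selected] @ log_odds[selected]

print(scores[:5])
```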

  • July 15, 2011
  • 04:04 AM
  • 1,164 views

Violent Brains In The Supreme Court

by Neuroskeptic in Neuroskeptic

Back in June, the U.S. Supreme Court ruled that a Californian law banning the sale of violent videogames to children was unconstitutional because it violated the right to free speech. However, the ruling wasn't unanimous. Justice Stephen Breyer filed a dissenting opinion. Unfortunately, it contains a whopping piece of bad neuroscience. The ruling is here. Thanks to the Law & Neuroscience Blog for noticing this.

Breyer says (on page 13 of his bit):

Cutting-edge neuroscience has shown that "virtual violence in video game playing results in those neural patterns that are considered characteristic for aggressive cognition and behavior."

He then cites this fMRI study from 2006. It's from the same group as this one I wrote about recently. Breyer quotes this study as part of a discussion of the evidence linking violent video game use to violence. I have nothing to say about this, but I will point out that violent crime fell heavily in America after 1990, which is about when the Super Nintendo and Sega Megadrive came out.

Anyway, does this study show that playing violent games causes aggressive brain activity? Not exactly. By which I mean "no". They scanned 13 young men playing a shooter game. The main finding was that during "violent" moments of the game, activity in the rostral ACC and the amygdala falls. At least this is the interpretation the authors give.

OK, but even if this neural response is "characteristic for aggressive cognition and behavior", it only lasted a few seconds. There's no evidence at all that this causes any lasting effects on brain function, or behaviour. The real problem, though, is that the whole thing is based on the theory that violence is associated with reduced amygdala (and rACC) activity. The authors cite various studies to this effect, but they don't distinguish between reduced activity as an immediate neural response to violence, as in this study, and reduced activity in people with high exposure to violent media, in response to non-violent stimuli. This is rather like saying that because having a haircut reduces your total hair, and because bald people have no hair, haircuts cause baldness. Short-term doesn't automatically become long-term.

Besides, the whole idea that amygdala deactivation = violence is a bit weird, because surgeons used to destroy people's amygdalas to reduce violent aggression in severe mental and neurological illness:

Different surgical approaches have involved various stereotactic devices and modalities for amygdaloid nucleus destruction, such as the injection of alcohol, oil, kaolin, or wax; cryoprobe lesioning; mechanical destruction; diathermy loop; and radiofrequency lesioning...

Lovely. It even worked sometimes, apparently. Although it killed 4% of people. You can't reduce the activity of a region much more than by destroying it, yet destroying the amygdala reduced violence, or at the very least, didn't make it worse.

The truth is that aggression isn't a single thing. Everyone knows that there are two main kinds, "in cold blood" and "in the heat of the moment". Killing someone in a spontaneous bar brawl is one thing, but carefully planning to sneak up behind them and stab them is quite another. Just based on what we know about the rare cases of amygdala-less people, I would imagine that destroying the amygdala would reduce violence "in the heat of the moment", which is motivated by anger and fear.
The kind of patients who got this surgery seem to have been that kind of violent person, not the cold, calculating kind. So, even if violent video games reduced amygdala activity long term, that would, if anything, probably reduce some kinds of violence.

Weber, R., Ritterfeld, U., & Mathiak, K. (2006). Does Playing Violent Video Games Induce Aggression? Empirical Evidence of a Functional Magnetic Resonance Imaging Study. Media Psychology, 8 (1), 39-60. DOI: 10.1207/S1532785XMEP0801_4

  • April 7, 2010
  • 08:48 AM
  • 1,161 views

Why Do We Dream?

by Neuroskeptic in Neuroskeptic

A few months ago, I asked Why Do We Sleep? That post was about sleep researcher Jerry Siegel, who argues that sleep evolved as a state of "adaptive inactivity". According to this idea, animals sleep because otherwise we'd always be active, and constant activity is a waste of energy. Sleeping for a proportion of the time conserves calories, and also keeps us safe from nocturnal predators etc.

Siegel's theory is what we might call minimalist. That's in contrast to other hypotheses which claim that sleep serves some kind of vital restorative biological function, or that it's important for memory formation, or whatever. It's a hotly debated topic.

But Siegel wasn't the first sleep minimalist. J. Allan Hobson and Robert McCarley created a storm in 1977 with The Brain As A Dream State Generator; I read somewhere that it provoked more letters to the Editor in the American Journal of Psychiatry than any other paper in that journal. Hobson and McCarley's article was so controversial because they argued that dreams are essentially side-effects of brain activation. This was a direct attack on the Freudian view that we dream as a result of our subconscious desires, and that dreams have hidden meanings. Freudian psychoanalysis was incredibly influential in American psychiatry in the 1970s.

Freud believed that dreams exist to fulfil our fantasies, often though not always sexual ones. We dream about what we'd like to do - except we don't dream about it directly, because we find much of our desires shameful, so our minds disguise the wishes behind layers of metaphor etc. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..." Interpreting the symbolism of dreams can therefore shed light on the depths of the mind.

Hobson and McCarley argued that during REM sleep, our brains are active in a similar way to when we are awake; many of the systems responsible for alertness are switched on, unlike during deep, dreamless, non-REM sleep. But of course during REM there is no sensory input (our eyes are closed), and also, we are paralysed: an inhibitory pathway blocks the spinal cord, preventing us from moving, except for our eyes - hence why it's Rapid Eye Movement sleep.

Dreams are simply a result of the "awake-like" forebrain - the "higher" perceptual, cognitive and emotional areas - trying to make sense of the input that it's receiving as a result of waves of activation arising from the brainstem. A dream is the forebrain's "best guess" at making a meaningful story out of the assortment of sensations (mostly visual) and concepts activated by these periodic waves. There's no attempt to disguise the shameful parts; the bizarreness of dreams simply reflects the fact that the input is pretty much random.

Hobson and McCarley proposed a complex physiological model in which the activation is driven by the giant cells of the pontine tegmentum. These cells fire in bursts according to a genetically hard-wired rhythm of excitation and inhibition. The details of this model are rather less important than the fact that it reduces dreaming to a neurological side effect. This doesn't mean that the REM state has no function; maybe it does, but whatever it is, the subjective experience of dreams serves no purpose.

A lot has changed since 1977, but Hobson seems to have stuck by the basic tenets of this theory. A good recent review came out in Nature Reviews Neuroscience last year, REM sleep and dreaming.
In this paper Hobson proposes that the function of REM sleep is to act as a kind of training system for the developing brain. The internally-generated signals that arise from the brainstem (now called PGO waves) during REM help the forebrain to learn how to process information. This explains why we spend more time in REM early in life; newborns have much more REM than adults, and in the womb, we are in REM almost all the time. However, these are not dreams per se, because children don't start reporting experiencing dreams until about the age of 5.

Protoconscious REM sleep could therefore provide a virtual world model, complete with an emergent imaginary agent (the protoself) that moves (via fixed action patterns) through a fictive space (the internally engendered environment) and experiences strong emotion as it does so.

This is a fascinating hypothesis, although very difficult to test, and it raises the question of how useful "training" based on random, meaningless input would be.

While Hobson's theory is minimalist in that it reduces dreams, at any rate in adulthood, to the status of a by-product, it doesn't leave them uninteresting. Freudian dream re-interpretation is probably ruled out ("That train represents your penis and that cat was your mother", etc.), but if dreams are our brains processing random noise, then they still provide an insight into how our brains process information. Dreams are our brains working away on their own, with the real world temporarily removed.

Of course most dreams are not going to give up life-changing insights. A few months back I had a dream which was essentially a scene-for-scene replay of the horror movie Cloverfield. It was a good dream, scarier than the movie itself, because I didn't know it was a movie. But I think all it tells me is that I was paying attention when I watched Cloverfield. On the other hand, I have had several dreams that have made me realize important things about myself and my situation at the time. By paying attention to your dreams, you can work out how you really think, and feel, about things, what your preconceptions and preoccupations are. Sometimes.

Hobson JA, & McCarley RW (1977). The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134 (12), 1335-48. PMID: 21570

Hobson, J. (2009). REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience, 10 (11), 803-813. DOI: 10.1038/nrn2716

  • October 20, 2010
  • 05:38 AM
  • 1,161 views

You Read It Here First...Again

by Neuroskeptic in Neuroskeptic

A couple of months ago I pointed out that a Letter published in the American Journal of Psychiatry, critiquing a certain paper about antidepressants, made very similar points to the ones that I did in my blog post about the paper. The biggest difference was that my post came out 9 months sooner.

Well, it's happened again. Except I was only 3 months ahead this time. Remember my post Clever New Scheme, criticizing a study which claimed to have found a brilliant way of deciding which antidepressant is right for someone, based on their brain activity? That post went up on July 21st. Yesterday, October 19th, a Letter was published by the journal that ran the original paper.

Three months ago, I said:

...there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG...

...it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG... This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs... Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice... Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group...

Now Alexander C. Tsai says, in his Letter:

DeBattista et al. chose a study design that conflates the effect of rEEG-guided pharmacotherapy with the effects of differing medication regimes... A more definitive study design would have been one in which study participants were randomized to receive rEEG-guided pharmacotherapy vs. sham rEEG-guided pharmacotherapy. Such a study design could have been genuinely double blinded, would not have required the inclusion of potential subjects whose rEEG treatment regimen was different from the control, and would be more likely to result in medication regimens that were balanced on average across the intervention vs. control arms.

To be fair, he also makes a separate point questioning how meaningful the small between-group difference was.

I'm mentioning this not because I want to show off, or to accuse Tsai of ripping me off, but because it's a good example of why people like Royce Murray are wrong. Murray recently wrote an editorial in the academic journal Analytical Chemistry, accusing blogging of being unreliable compared to proper, peer-reviewed science. Murray is certainly right that one could use a blog as a platform to push crap ideas, but one can also use peer-reviewed papers to do that, and often it's bloggers who are the first to pick up on this when it happens.

Tsai AC (2010). Unclear clinical significance of findings on the use of referenced-EEG-guided pharmacotherapy. Journal of Psychiatric Research. PMID: 20943234
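For what it's worth, the sham-guidance design suggested above is easy to sketch in code. This is a toy illustration of the randomization idea only; the function name, patient IDs, group sizes and shuffling scheme are all hypothetical, not anything from the DeBattista trial or Tsai's Letter.

```python
import random

def assign_reeg_guidance(patient_ids, seed=42):
    """Toy sketch of the sham-controlled design suggested in the post:
    every patient gets an rEEG, but controls are guided by a shuffled
    (i.e. someone else's) recording, so both arms get the same kinds of
    drugs and the trial can stay double-blind."""
    rng = random.Random(seed)
    patients = list(patient_ids)
    rng.shuffle(patients)
    half = len(patients) // 2
    active, control = patients[:half], patients[half:]

    # Active arm: treatment guided by their own rEEG.
    guidance = {p: p for p in active}

    # Control arm: guided by another control patient's rEEG (a rotation
    # guarantees nobody is matched to their own recording).
    for i, p in enumerate(control):
        guidance[p] = control[(i + 1) % len(control)]
    return active, control, guidance

active, control, guidance = assign_reeg_guidance(range(1, 21))
print(guidance)
```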

  • August 30, 2014
  • 08:12 AM
  • 1,156 views

The Myth Of “Roid Rage”?

by Neuroskeptic in Neuroskeptic_Discover

Are men who inject testosterone and other anabolic steroids at risk of entering a violent “roid rage”? Many people think so. Whenever a professional athlete commits a violent crime, it’s not long before someone suggests that steroids may have been involved. The most recent example of this is the case of Jonathan “War Machine” Koppenhaver. […]

  • March 31, 2010
  • 09:18 AM
  • 1,155 views

Predicting Psychosis

by Neuroskeptic in Neuroskeptic

"Prevention is better than cure", so they say. And in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art.Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan anyway.But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all, and there have been only a few very small trials looking at whether treatments work for that - but that's another story.Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS) which was run at a number of early detection clinics in Britain and Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromalReferral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact. Knowledge about early warning signs (eg, concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.245 people consented to take part in the study and met the inclusion criteria meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short lived or mild symptoms a bit like those seen in schizophrenia i.e.COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...UHR: unusual thought content/ delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS) i.e. hallucinations, delusions, or formal thought disorders that occurred resolved spontaneously within 1 week...Then they followed up the 245 kids for 18 months and saw what happened to them.What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e depression or bipolar disorder, with psychotic features, and the rest mostly had psychotic episodes too short to be classed as schizophrenia. 
37 people is 19% of the 183 for whom full 18-month data were available; the others dropped out of the study, or went missing for some reason.

Is 19% high or low? Well, it's much higher than the rate you'd see in randomly selected people, because the lifetime risk of getting schizophrenia is less than 1% and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing. On the other hand, 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18-month period was over.

The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. As this graph shows, the rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering, the Prognostic Index is [1.571 x SIPS-Positive score > 16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)

However, they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.

So overall, we can predict psychosis, a bit, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome", i.e. the prodrome; it's not currently a disorder in DSM-IV. This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand, though, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.

Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A., Morrison, A., Lewis, S., Graf von Reventlow, H., & Klosterkotter, J. (2010). Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study
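Since the formula is spelled out, here it is as a small function. Treat it purely as an illustration: the coefficients are copied from the post, the SIPS-positive term is interpreted as a yes/no "score above 16" predictor (which is how it reads in the paper, but check before relying on it), the function and parameter names are mine, and it is obviously not a clinical tool.

```python
def prognostic_index(sips_positive, bizarre_thinking, sleep_disturbance,
                     spd, highest_gaf_m_past_year, years_of_education):
    """Prognostic Index as quoted in the post (Ruhrmann et al., 2010).
    Illustrative sketch only: coefficients copied from the post, and the
    SIPS-positive term treated as a dichotomous 'score above 16' predictor."""
    sips_above_16 = 1.0 if sips_positive > 16 else 0.0
    return (1.571 * sips_above_16
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbance
            + 1.037 * spd
            + 0.033 * (highest_gaf_m_past_year - 34.64)
            + 0.250 * (years_of_education - 12.52))

# Example with made-up scores:
print(prognostic_index(18, 2, 1, 1, 50, 13))
```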

Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A.... (2010) Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206  

  • November 2, 2009
  • 12:52 PM
  • 1,154 views

Real vs Placebo Coffee

by Neuroskeptic in Neuroskeptic

Coffee contains caffeine, and as everyone knows, caffeine is a stimulant. We all know how a good cup of coffee wakes you up, makes you more alert, and helps you concentrate - thanks to caffeine. Or does it? Are the benefits of coffee really due to the caffeine, or are there placebo effects at work? Numerous experiments have tried to answer this question, but a paper published today goes into more detail than most. (It caught my eye just as I was taking my first sip this morning, so I had to blog about it.)

The authors took 60 coffee-loving volunteers and gave them either placebo decaffeinated coffee, or coffee containing 280 mg caffeine. That's quite a lot, roughly equivalent to three normal cups. 30 minutes later, they did a difficult button-pressing task requiring concentration and sustained effort, plus a task involving mashing buttons as fast as possible for a minute.

The catch was that the experimenters lied to the volunteers. Everyone was told that they were getting real coffee. Half of them were told that the coffee would enhance their performance on the tasks, while the other half were told it would impair it. If the placebo effect was at work, these misleading instructions should have affected how the volunteers felt and acted.

Several interesting things happened. First, the caffeine enhanced performance on the cognitive tasks - it wasn't just a placebo effect. Bear in mind, though, that these people were all regular coffee drinkers who hadn't drunk any caffeine that day. The benefit could have been a reversal of caffeine withdrawal symptoms.

Second, there was a small effect of expectancy on task performance - but it worked in reverse. People who were told that the coffee would make them do worse actually did better than those who expected the coffee to help them. Presumably, this is because they put in extra effort to try to overcome the supposedly negative effects. This paradoxical placebo response reminds us that there's more to "the placebo effect" than meets the eye.

Finally, no-one who got the decaf noticed that it didn't actually contain caffeine, and the volunteers' ratings of their alertness and mood didn't differ between the caffeine and placebo groups. So, this suggests that if you were to secretly replace someone's favorite blend with decaf, they wouldn't notice - although their performance would nevertheless decline. Bear that in mind when considering pranks to play on colleagues or flatmates.

It looks like science has just confirmed another piece of The Wisdom of Seinfeld:

Elaine: Jerry likes Morning Thunder.
George: Jerry drinks Morning Thunder? Morning Thunder has caffeine in it. Jerry doesn't drink caffeine.
Elaine: Jerry doesn't know Morning Thunder has caffeine in it.
George: You don't tell him?
Elaine: No. And you should see him. Man, he gets all hyper, he doesn't even know why! He loves it. He walks around going, "God, I feel great!"
- Seinfeld, "The Dog"

Harrell PT, & Juliano LM (2009). Caffeine expectancies influence the subjective and behavioral effects of caffeine. Psychopharmacology. PMID: 19760283
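The study's logic is a 2x2 factorial design (what you actually drank x what you were told), and the "reversed" expectancy effect is just one of the two main effects pointing the "wrong" way. Here is a minimal sketch of that arithmetic on invented numbers - not the paper's data or its analysis, and the 15-per-cell split is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2x2 design: drink (caffeine vs decaf) x instruction
# (told "enhances" vs told "impairs"), 15 people per cell (assumed).
# Effect sizes are invented purely for illustration.
n = 15
cells = {
    ("caffeine", "enhance"): rng.normal(55, 5, n),
    ("caffeine", "impair"):  rng.normal(58, 5, n),  # trying harder to compensate
    ("decaf",    "enhance"): rng.normal(50, 5, n),
    ("decaf",    "impair"):  rng.normal(53, 5, n),
}

means = {k: v.mean() for k, v in cells.items()}

# Main effect of caffeine: average of caffeine cells minus average of decaf cells.
caffeine_effect = (
    (means[("caffeine", "enhance")] + means[("caffeine", "impair")]) / 2
    - (means[("decaf", "enhance")] + means[("decaf", "impair")]) / 2
)

# Expectancy effect: "told it impairs" minus "told it enhances".
expectancy_effect = (
    (means[("caffeine", "impair")] + means[("decaf", "impair")]) / 2
    - (means[("caffeine", "enhance")] + means[("decaf", "enhance")]) / 2
)

print(f"Caffeine effect:   {caffeine_effect:+.1f} points")
print(f"Expectancy effect: {expectancy_effect:+.1f} points")
```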

  • January 22, 2011
  • 12:46 PM
  • 1,153 views

When "Healthy Brains" Aren't

by Neuroskeptic in Neuroskeptic

There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays. But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2–8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know. Friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best? Broadly speaking, there are two extreme positions.

On one hand: research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So, we should ignore incidental findings and pretend the scan never happened, because for all clinical purposes, it didn't.

On the other: you have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.

Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain. On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine, but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening? Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University neuroscience community. Suspect scans could be uploaded, in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion.
Email notifications kept everyone up to date with the progress of each scan.This seems like a very good idea, partially because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use this system.Out about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign... except one which turned out to be potentially very serious - suspected hydrocephalus, increased fluid pressure in the brain, which prompted an urgent referral to hospital for further tests.There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V (2011). A system for addressing incidental findings in neuroimaging research. NeuroImage PMID: 21224007... Read more »
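The paper itself doesn't publish any code, so purely as an illustration of the upload-anonymize-notify workflow described above, here is a minimal Python sketch. Every name in it (the Referral record, submit_scan, record_opinion, the example email addresses) is hypothetical and invented for this post, not taken from the UC Irvine system.

```python
# Hypothetical sketch of an incidental-findings referral pipeline.
# None of these names come from the Cramer et al. system; they are illustrative only.

import uuid
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Referral:
    referral_id: str
    anonymized_file: str
    status: str = "awaiting radiology review"
    history: list = field(default_factory=list)


def anonymize(scan_path: str) -> str:
    """Stand-in for stripping identifying metadata (e.g. DICOM header fields)."""
    return f"anon_{uuid.uuid4().hex}.nii"


def notify(recipient: str, message: str) -> None:
    """Stand-in for the email notifications described in the paper."""
    print(f"[{datetime.now():%Y-%m-%d %H:%M}] to {recipient}: {message}")


def submit_scan(scan_path: str, researcher_email: str) -> Referral:
    """Researcher uploads a suspect scan; it is anonymized and forwarded."""
    referral = Referral(uuid.uuid4().hex[:8], anonymize(scan_path))
    referral.history.append("uploaded by researcher")
    notify("radiology-oncall@example.edu", f"New referral {referral.referral_id}")
    notify(researcher_email, f"Referral {referral.referral_id} received")
    return referral


def record_opinion(referral: Referral, researcher_email: str, opinion: str) -> None:
    """Radiologist's verdict is logged and the researcher is kept up to date."""
    referral.status = opinion
    referral.history.append(f"radiology opinion: {opinion}")
    notify(researcher_email, f"Referral {referral.referral_id}: {opinion}")


# Example use:
r = submit_scan("sub-042_T1w.nii", "joe.neuroscientist@example.edu")
record_opinion(r, "joe.neuroscientist@example.edu", "benign, no follow-up needed")
```

The point of the sketch is the shape of the process (anonymize first, notify both sides automatically, keep a history), not any particular implementation.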

Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V. (2011) A system for addressing incidental findings in neuroimaging research. NeuroImage. PMID: 21224007  

  • August 7, 2009
  • 06:33 AM
  • 1,152 views

Science, Journalism, and Bug Spray

by Neuroskeptic in Neuroskeptic

Watch out! The BBC report that "Deet bug repellent 'toxic worry'", while the Telegraph are even more concerned: "Insect repellent Deet is bad for your nerves, claim scientists".

This is in reference to a new paper about the widely-used insect repellent DEET. The BBC, as usual, performed slightly better than the Telegraph here. They included quotes from two experts making it clear that the research in question was preliminary and in no way proves that DEET is dangerous to humans. But they still ran a headline implying that DEET could be "toxic", which is the only thing most people will remember about the article. As you'll see below, this is quite misleading.

DEET is an insect repellent, generally used to prevent mosquito bites. You spray it on your skin, clothes, mosquito nets, etc. If you've ever been to a tropical country, you'll probably remember it. It has a distinctive smell, it stings the eyes and throat, and, most distressingly, it dissolves plastics. My watch fell off in Thailand because DEET ate through the strap.

That aside, DEET is believed to be safe, so long as you spray it rather than drinking it. Hundreds of millions of people have used it for decades. And it works, which means it saves lives. Mosquitoes spread diseases like malaria, yellow fever, dengue, and plenty more, any of which can kill you. This is why any health professional will advise you to use mosquito repellents, preferably DEET-based ones, when visiting risk areas.

So it would be massive news if DEET were found to be dangerous. But it hasn't been. What's been found is that, in animals and in test tubes, DEET is a cholinesterase inhibitor. Cholinesterase is an enzyme which breaks down acetylcholine (ACh), a neurotransmitter. If you inhibit cholinesterase, ACh levels rapidly increase. This can cause problems because ACh is the transmitter that your nerves use to communicate with your muscles. As ACh builds up, your muscles can't stop contracting, and you suffer paralysis, until you can't breathe. This is how "nerve gas" works.

But we know DEET isn't a strong cholinesterase inhibitor, when used normally, because people don't get cholinergic effects after using it. The toxicity of cholinesterase inhibitors is acute. You get paralyzed, and suffer other symptoms like uncontrollable salivation, crying, vomiting, and incontinence. You'd know if this happened to you.

Cholinesterase inhibitors are not, as various media reports have said about DEET, "neurotoxic"; they do not cause "neural damage". They act on the nerves, but they do not damage the nerves. In fact, people with Alzheimer's take them (in low doses!), as do people with the nerve disease myasthenia gravis.

So the fact that DEET can act as a cholinesterase inhibitor in the lab changes nothing. It's still safe, at least until evidence comes along that it actually causes harm in people who use it. You can't show that something is harmful by doing an experiment showing how it could be harmful in theory.

To be fair, there is one cause for concern in the paper: in the experiments, DEET interacted with other cholinesterase inhibitors, leading to an amplified effect. That suggests that DEET could become toxic in combination with cholinesterase inhibitor insecticides, but again, the risk is theoretical.

The media should never have reported on this paper. The science itself is perfectly good, but the results are completely irrelevant to the average person who might want to use DEET. They are of interest only to biologists. If people decide not to use DEET on the basis of these reports, they are putting themselves in danger. Others have noted that journalists almost always report on laboratory experiments like these as if they were directly relevant to human health. They're not.

Appendix: In one of the articles, an expert says that "I also would guess that the actual concentration [of DEET] in the body is much lower than they had to use in the study to see an effect in the mouse tissues." But we don't have to guess, we can work it out. DEET had detectable effects in mammalian tissues at a concentration of 0.5 millimolar. Millimolar is a unit of concentration: 1 millimolar is 0.19 grams of DEET per liter of water (the molar mass of DEET is about 191 g/mol). The human body is about 60% water by weight, so a person weighing, say, 75 kg contains roughly 50 liters of water. To reach 1 millimolar throughout that water, you would need to absorb into your body about 50 x 0.19 = 9.5 grams of DEET; to reach the 0.5 millimolar used in this study, about half that, roughly 5 grams (assuming it was evenly distributed in your body water). That's a huge amount.

But maybe it's not completely impossible, bearing in mind that DEET might be absorbed through the skin? Is there any data on DEET levels in humans? Yes. This paper reports on the development of a method for measuring DEET in human blood, which could detect DEET at levels from 1 ng/mL to 100 ng/mL. I assume that the upper limit was chosen because no-one ever gets more DEET than that. 100 ng/mL = 100 micrograms/L = 0.52 micromolar = 0.0005 millimolar. That's about 1000-fold lower than the 0.5 millimolar used in the paper, and that's the upper limit.

This was just a back-of-the-envelope calculation, so please feel free to critique it, but I find it reassuring. (The arithmetic is laid out in a short sketch below.)

Corbel, V., Stankiewicz, M., Pennetier, C., Fournier, D., Stojan, J., Girard, E., Dimitrov, M., Molgo, J., Hougard, J., & Lapied, B. (2009). Evidence for inhibition of cholinesterases in insect and mammalian nervous systems by the insect repellent deet. BMC Biology, 7 (1) DOI: 10.1186/1741-7007-7-47... Read more »
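Just to make the back-of-the-envelope arithmetic easy to check, here is a minimal Python sketch of the same calculation. The inputs (0.5 millimolar effect concentration, ~191 g/mol, 75 kg at 60% water, 100 ng/mL assay ceiling) are the figures quoted above; note that it uses the unrounded 45 liters of body water rather than the ~50 used in the post, which makes the gram figure slightly smaller without changing the conclusion.

```python
# Back-of-the-envelope check of the DEET numbers quoted above.
# All inputs are the figures from the post; this is arithmetic, not toxicology.

MOLAR_MASS_DEET = 191.0              # g/mol (approximate)
effect_conc_mM = 0.5                 # concentration with detectable effects, mmol/L
body_weight_kg = 75.0
water_fraction = 0.60                # body water by weight
assay_upper_limit_ng_per_mL = 100.0  # upper limit of the blood assay

# Liters of body water (1 kg of water is about 1 L); the post rounds this up to ~50 L
body_water_L = body_weight_kg * water_fraction              # = 45 L

# Grams of DEET needed to reach the effect concentration throughout body water
grams_per_L = (effect_conc_mM / 1000.0) * MOLAR_MASS_DEET   # mmol/L -> g/L, ~0.096 g/L
grams_needed = grams_per_L * body_water_L                   # ~4.3 g (about 5 g with 50 L)

# Highest measurable blood level, converted to millimolar
g_per_L_blood = assay_upper_limit_ng_per_mL * 1000 * 1e-9   # ng/mL -> g/L = 1e-4 g/L
blood_conc_mM = (g_per_L_blood / MOLAR_MASS_DEET) * 1000.0  # ~0.00052 mM

print(f"DEET needed to reach {effect_conc_mM} mM in {body_water_L:.0f} L of body water: "
      f"{grams_needed:.1f} g")
print(f"Upper measurable blood level: {blood_conc_mM:.5f} mM, "
      f"about {effect_conc_mM / blood_conc_mM:.0f}-fold below the effect concentration")
```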

Corbel, V., Stankiewicz, M., Pennetier, C., Fournier, D., Stojan, J., Girard, E., Dimitrov, M., Molgo, J., Hougard, J., & Lapied, B. (2009) Evidence for inhibition of cholinesterases in insect and mammalian nervous systems by the insect repellent deet. BMC Biology, 7(1), 47. DOI: 10.1186/1741-7007-7-47  

  • October 26, 2009
  • 04:24 PM
  • 1,152 views

Barack Obama Boosts Testosterone

by Neuroskeptic in Neuroskeptic

But only if you voted for him, and only if you're a man. That's according to a PLoS ONE paper called Dominance, Politics, and Physiology.

It's already known that in males, winning competitions - achieving "dominance" - causes a rapid rise in testosterone release, whilst losing does the opposite. That's true in humans, as well as in other mammals. The authors wondered whether the same thing happens when men "win" vicariously - i.e. when someone we identify with triumphs.

What better way of testing this than the U.S. Presidential election? The authors took 163 American voters and got them to provide saliva samples before, during and after the results came in on the night of the 4th of November. Here's what happened.

In Obama supporters (the blue line, natch), salivary testosterone levels stayed flat throughout the crucial hours. But supporters of John McCain or the Libertarian candidate Bob Barr suffered a testosterone crash after Obama's victory became apparent. That was only true in men, though; in women, there was no change.

Heh. Of course, we hardly needed biology to tell us that people often identify strongly with their preferred political parties, and the fact that social events cause hormonal changes shouldn't surprise anyone - the brain controls the secretion of most hormones.

The gender difference is interesting, though. Does this mean that men identify more closely with politicians? Or maybe only with male ones - what would have happened if Hillary had won... or Palin? It could be that the testosterone surge accompanying success is strictly a man thing, although it's been shown to occur in women in some studies, but not consistently.

Finally, I should mention that this paper contains some excellent quotes, such as "...Robert Barr, who arguably did not have a chance of winning...", "In retrospective reports of their affective state upon the announcement of Obama as the president-elect, McCain and Barr voters felt significantly more unhappy" and my favourite, "men who voted for John McCain or Bob Barr (losers)". That last one may be taken slightly out of context.

Stanton, S., Beehner, J., Saini, E., Kuhn, C., & LaBar, K. (2009). Dominance, Politics, and Physiology: Voters' Testosterone Changes on the Night of the 2008 United States Presidential Election. PLoS ONE, 4 (10) DOI: 10.1371/journal.pone.0007543... Read more »

  • December 14, 2009
  • 09:08 AM
  • 1,149 views

In the Brain, Acidity Means Anxiety

by Neuroskeptic in Neuroskeptic

According to Mormon author and fruit grower "Dr" Robert O. Young, pretty much all diseases are caused by our bodies being too acidic. By adopting an "alkaline lifestyle" to raise your internal pH (lower pH being more acidic), you'll find that
if you maintain the saliva and the urine pH, ideally at 7.2 or above, you will never get sick. That’s right you will NEVER get sick!
Wow. Important components of the alkaline lifestyle include eating plenty of the right sort of fruits and vegetables, ideally ones grown by Young, and taking plenty of nutritional supplements. These don't come cheap, but when the payoff is being free of all diseases, who could complain?

Young calls his amazing theory the Alkavorian Approach™, aka the New Biology™. Almost everyone else calls it quack medicine and pseudoscience. Because it is quack medicine and pseudoscience. But a paper just published in Cell suggests an interesting role for pH in, of all things, anxiety and panic - The amygdala is a chemosensor that detects carbon dioxide and acidosis to elicit fear behavior.

The authors, Ziemann et al, were interested in a protein called Acid Sensing Ion Channel 1a (ASIC1a), which, as the name suggests, is acid-sensitive. Nerve cells expressing ASIC1a are activated when the fluid around them becomes more acidic.

One of the most common causes of acidosis (a fall in body pH) is carbon dioxide, CO2. Breathing is how we get rid of the CO2 produced by our bodies; if breathing is impaired, for example during suffocation, CO2 levels rise, and pH falls as CO2 is converted to carbonic acid in the bloodstream.
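For anyone who wants the chemistry spelled out, the standard textbook way to express this link (not anything from the paper itself) is the bicarbonate buffer equilibrium and the Henderson-Hasselbalch equation:

\[
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-},
\qquad
\mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]_{\text{dissolved}}}
\]

with pKa of roughly 6.1 for this system in blood; so, with bicarbonate held roughly constant, more dissolved CO2 means a lower pH.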

In previous work, Ziemann et al found that the amygdala contains lots of ASIC1a. This is intriguing, because the amygdala is a brain region believed to be involved in fear, anxiety and panic, although it has other functions as well. It's long been known that breathing air with added CO2 can trigger anxiety and panic, especially in people vulnerable to panic attacks.

What's unclear is why this happens; various biological and psychological theories have been proposed. Ziemann et al set out to test the idea that ASIC1a in the amygdala mediates anxiety caused by CO2.

In a number of experiments, they showed that mice genetically engineered to have no ASIC1a (knockouts) were resistant to the anxiety-causing effects of air containing 10% or 20% CO2. Also, unlike normal mice, the knockouts were happy to enter a box with high CO2 levels - normal mice hated it. Injections of a weakly acidic liquid directly into the amygdala caused anxiety in normal mice, but not in the knockouts.

Most interestingly, they found that knockout mice could be made to fear CO2 by giving them ASIC1a in the amygdala. Knockouts injected in the amygdala with a virus containing ASIC1a DNA, which caused their cells to start producing the protein, showed anxiety (freezing behaviour) when breathing CO2. But it only worked if the virus was injected into the amygdala, not nearby regions.

This is a nice series of experiments which shows convincingly that ASIC1a mediates acidosis-related anxiety, at least in mice. What's most interesting, however, is that it also seems to be involved in other kinds of anxiety and fear. The ASIC1a knockout mice were slightly less anxious in general; injections of an alkaline solution prevented CO2-related anxiety, but also reduced anxiety caused by other scary things, such as the smell of a cat.

The authors conclude by proposing that amygdala pH might be involved in fear more generally

Thus, we speculate that when fear-evoking stimuli activate the amygdala, its pH may fall. For example, synaptic vesicles release protons, and intense neural activity is known to lower pH.

But this is, as they say, speculation. The link between CO2, pH and panic attacks seems more solid. As the authors of another recent paper put it

We propose that the shared characteristics of CO2/H+ sensing neurons overlap to a point where threatening disturbances in brain pH homeostasis, such as those produced by CO2 inhalations, elicit a primal emotion that can range from breathlessness to panic.

Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009). The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139 (5), 1012-1021 DOI: 10.1016/j.cell.2009.10.029... Read more »

Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009) The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139(5), 1012-1021. DOI: 10.1016/j.cell.2009.10.029  

  • June 14, 2010
  • 06:39 AM
  • 1,148 views

The Face of a Mouse in Pain

by Neuroskeptic in Neuroskeptic

Have you ever wanted to know whether a mouse is in pain? Of course you have. And now you can, thanks to Langford et al's paper Coding of facial expressions of pain in the laboratory mouse.

It turns out that mice, just like people, display a distinctive "Ouch!" facial expression when they're suffering acute pain. It consists of narrowing of the eyes, bulging of the nose and cheeks, ears pulled back, and whiskers pulled either back or forwards. With the help of a high-definition video camera and a little training, you can reliably and accurately tell how much pain a mouse is feeling. It works for most kinds of mouse pain, although it's not seen in either extremely brief or very long-term pain. (A toy version of this kind of scoring is sketched below.)

Langford et al tried it out on mice with a certain genetic mutation which causes severe migraines in humans. These mice displayed the pain face even in the absence of external painful stimuli, showing that they were suffering internally. A migraine drug was able to stop the pain.

Finally, lesions to a part of the brain called the anterior insula stopped mice from expressing their pain. This is exactly what happens in people as well, suggesting that our displays of suffering are an evolutionarily ancient mechanism. Of course, this kind of study can't prove that animals consciously feel pain in the same way that we do, but I see no reason to doubt it: we feel pain as a result of neural activity, and mammals have exactly the same brain systems.

Langford, D., Bailey, A., Chanda, M., Clarke, S., Drummond, T., Echols, S., Glick, S., Ingrao, J., Klassen-Ross, T., LaCroix-Fralish, M., Matsumiya, L., Sorge, R., Sotocinal, S., Tabaka, J., Wong, D., van den Maagdenberg, A., Ferrari, M., Craig, K., & Mogil, J. (2010). Coding of facial expressions of pain in the laboratory mouse. Nature Methods, 7 (6), 447-449 DOI: 10.1038/nmeth.1455... Read more »
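The scoring scheme itself rates each of these facial "action units" and combines them into a single number; as I understand the paper, each unit gets a 0, 1 or 2 and the ratings are averaged. Purely as a toy illustration of that kind of scoring, with all function and variable names invented here rather than taken from the paper:

```python
# Toy illustration of grimace-style scoring: a few facial "action units",
# each rated 0 (not present), 1 (moderate) or 2 (severe), averaged into one score.
# The action-unit names follow the features described in the post; everything else
# is invented for illustration.

from statistics import mean

ACTION_UNITS = [
    "orbital_tightening",   # narrowing of the eyes
    "nose_bulge",
    "cheek_bulge",
    "ear_position",         # ears pulled back
    "whisker_change",       # whiskers pulled back or forwards
]

def grimace_score(ratings: dict) -> float:
    """Mean of the per-unit ratings; raises if a unit is missing or out of range."""
    for unit in ACTION_UNITS:
        if ratings.get(unit) not in (0, 1, 2):
            raise ValueError(f"{unit} must be rated 0, 1 or 2")
    return mean(ratings[unit] for unit in ACTION_UNITS)

# Example: one video frame rated by a trained observer
frame = {"orbital_tightening": 2, "nose_bulge": 1, "cheek_bulge": 1,
         "ear_position": 2, "whisker_change": 0}
print(grimace_score(frame))   # 1.2
```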

Langford, D., Bailey, A., Chanda, M., Clarke, S., Drummond, T., Echols, S., Glick, S., Ingrao, J., Klassen-Ross, T., LaCroix-Fralish, M.... (2010) Coding of facial expressions of pain in the laboratory mouse. Nature Methods, 7(6), 447-449. DOI: 10.1038/nmeth.1455  
