Drugs and error bars

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


I’ve been away from writing here for a while, as recently I’ve been stuck into a very different type of writing – finishing up my thesis for my PhD. One of the things that prompted me back into the blogging world was my reaction to the recent Channel 4 show, ‘Drugs Live: The Ecstasy Trial’.

Now, I’m not going to discuss the ethics of taking illegal drugs on television, as this has already been well-covered. Instead, I wanted to draw attention to what to some might seem a much smaller point, but one that I feel represents a much larger issue throughout the reporting of science in the media. This post is much broader than my usual articles, but is relevant to animal behaviour, which, as a popular topic presented by the media, is often misrepresented in this way.

First, check out this graph from the show:


This seems to show that people invested more money (in computer-simulated faces) when they had taken ecstasy (MDMA) than when they had not (the control), and that this effect increased from one to eight days… but does it really? This graph alone can’t tell us that.

What I’m getting at here is that, to show whether these bars actually differ from each other, we need to see what are called ‘error bars’. Error bars are a way of showing visually how varied the measurements you’ve taken actually are. For example, suppose I stood on Oxford Street and stopped 1000 people to ask if I could count their fingers. Assuming people let me do this and didn’t just look at me strangely before backing away, I would probably find that the average number of fingers was 10 (or just under, as it’s probably more likely that someone would have lost a finger than have an extra one). What’s more, almost everyone I sampled would have exactly 10 fingers. Because of this small variation in finger number, if I were to draw this on a graph, the error bar would be small.

If, on the other hand (no pun intended, really), I were to stop another 1000 people in the street and count the number of pointless items they had in their wallets, I might find that I also get an average of 10, but that this time there would be much more variation: a lot of tidy, organised (or sociopathic, as I like to call them) types would have zero pointless items, whereas other people might have dozens of scrap bits of paper, receipts going back to the 1980s, and reward cards for shops they didn’t even know existed. This time the error bars would be massive.

By putting error bars on a graph we can see how much variation there is in the data we’re looking at and, more importantly, visually assess how likely it is that the averages of different groups actually differ from each other.
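To make that concrete, here is a minimal Python sketch (using NumPy, and entirely made-up samples standing in for the finger and wallet surveys above, not real data) showing how the standard error of the mean – one common thing an error bar represents – is calculated for two samples with similar averages but very different spreads:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up data: almost everyone in the sample has exactly 10 fingers...
fingers = rng.choice([9, 10], size=1000, p=[0.01, 0.99])

# ...whereas the number of pointless items in a wallet varies wildly.
wallet_junk = rng.poisson(lam=10, size=1000)

for name, sample in [("fingers", fingers), ("wallet junk", wallet_junk)]:
    mean = sample.mean()
    # Standard error of the mean = sample standard deviation / sqrt(n).
    sem = sample.std(ddof=1) / np.sqrt(len(sample))
    print(f"{name}: mean = {mean:.2f}, standard error = {sem:.3f}")
```

Both samples average out at roughly 10, but the standard error for the wallet sample comes out many times larger – which is exactly what a long error bar is telling you.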

As an example, this graph (of data I just made up) shows that men drink more beer than women (on average 20 rather than 15 a week; the purple bars). However, it does not show that people in the UK eat more pies than people in the US (the green bars). Even though the pairs of averages are the same (15 beers drunk by women and 15 pies eaten in the US, versus 20 beers drunk by men and 20 pies eaten in the UK), the error bars tell us that the two sets of data are very different. In the case of the number of pies eaten, there is a lot of variation in the data, with some people in the UK and the US eating very few pies and some eating lots, most likely those located in Manchester. Because there is so much variation within each of these groups (UK and US), we cannot tell for sure whether the fact that the average is higher in the UK than in the US is a real finding, or just down to chance.
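If you wanted to draw a graph like the one just described yourself, a rough sketch along these lines would do it. It assumes Matplotlib is available, and the numbers are invented purely to mimic the pattern in the example – same averages for both comparisons, but very different amounts of spread:

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented summary numbers mimicking the beer/pie example above.
labels = ["Women\n(beers)", "Men\n(beers)", "US\n(pies)", "UK\n(pies)"]
means = [15, 20, 15, 20]
# Beers: little spread, so short error bars; pies: huge spread, so long ones.
errors = [1, 1, 8, 8]
colors = ["purple", "purple", "green", "green"]

x = np.arange(len(labels))
plt.bar(x, means, yerr=errors, capsize=5, color=colors)
plt.xticks(x, labels)
plt.ylabel("Average per person per week")
plt.title("Same averages, very different error bars")
plt.tight_layout()
plt.show()
```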

Going back to ‘Drugs Live: The Ecstasy Trial’, as none of the graphs had error bars on them, we cannot be sure whether any of the differences we seem to see are actually there. In short, the graphs are pretty pointless.

On the Drug Science website, David Nutt, Val Curran and Robin Carhart-Harris wrote that they had originally included error bars (and p-values) on the graphs, but that Channel 4 removed them for clarity. This is somewhat ironic, as in doing so they inadvertently made the results less clear to interpret. Apparently Channel 4 felt that an explanation would be ‘difficult’ for a general audience. I think that Channel 4 should not patronise their audiences. People like to be challenged and to discover things that may not be clear at first; it’s why people continue to watch murder mysteries. Even if the programme did not have time to explain what error bars are, I do not think people would have found the graphs harder to interpret with them present, and it might even have led to people going to Google and finding out what they are for themselves.
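For what it’s worth, the p-values the researchers mention answer much the same question the error bars hint at: how likely is it that a difference this big would turn up by chance alone? A rough illustration of that calculation, using SciPy on invented samples rather than anything from the actual trial:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented samples standing in for a control and a treatment condition.
control = rng.normal(loc=15, scale=3, size=25)
treatment = rng.normal(loc=20, scale=3, size=25)

# Two-sample t-test: a small p-value means a difference this large would be
# unlikely to turn up by chance alone if the two groups were really the same.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```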

I think the more the scientific process is made accessible to people the better, and a good way of doing this is through television programmes like this one. As a study that was both doing original work and covering a topic many people are interested in, it seemed perfect for television. My complaint here is not aimed specifically at Channel 4, but at the presentation of science more generally. The more people are given the tools for understanding how experiments are designed, carried out and then interpreted (using statistics), the better the communication between scientists and everyone else (including lawmakers) will be.

 

 

 

*********************************************************************************

If you’re keen to know more about statistics beyond the error bar, I came across this great blog the other day.