There are few things the media love more than a quotable science study. Catty soundbites from reality TV stars don’t get as much attention as one crazy science statistic. A “9 out of 10 people prefer garlic cereal” headline garners clicks regardless of the study’s merit.
Here’s the real truth: Study results are not trustworthy, and news media reports of study results are even less trustworthy. News media sell fear; people don’t buy “happy” news. Journalists without science degrees are tasked with selling news. They learn how to take one science journal article and run with it, without worrying about how it may negatively influence society. Let’s look at a recent example of this, and then go over 5 reasons why you should be very wary of studies reported in the news media.
A new study reported by BBC News ran under the headline: Internet ‘may be changing brains.’ This is a perfect example of how news media twist study results to sell newspapers. The headline plays right into the common fear that the Internet is making us all stupid. (Despite the irresponsible headline, the BBC outlined the study results more responsibly than North American news media would have. In the article you can see a few quotes from the researchers that give us a clue about the study’s real results, e.g. “The study cannot tell us whether using the internet is good or bad for our brains.”)
We’re all worried about Internet addiction and brain damage, so let’s talk first about this study’s subject: Environmental influence on brain structure.
Brain “plasticity” refers to the brain’s ability to adapt and change in response to its environment. Researchers have been studying this phenomenon for a long while, most notably with fMRI techniques. Internet use may be changing some brain structures slightly, but this is normal and nothing to worry about; scientists have known about it for a long time. Just as lifting weights changes muscle structure, using certain tools may change parts of our brains. Stop using the tool, and those brain structures will probably revert to their original state. We’re talking tiny changes here, people. Changes only scientists with huge magnets and electron microscopes can see. Internet addiction, which is really the main fear at the core of the BBC headline, is a different behavioral (perhaps also physical) process entirely. Think of Internet Addiction like Gambling Addiction and you’ll get a more familiar picture of what a behavioral addiction is. Cocaine Addiction and Internet Addiction are not similar enough to be compared.
Internet use alone does not lead to Internet Addiction. If your Internet use is disrupting your life, examine your habits against a list of your goals. If your habits are not supporting your goals, change your behavior. If you are having trouble changing your behavior, seek counseling. Only those with severe behavioral addictions (like Gambling) who also show chemical imbalances will need medical interventions to help curb the behavior. Those imbalances were not caused by brain plasticity and/or Internet use.
This study and the BBC report on it are a good example of why we must always be on our guard against the false impressions sensational headlines give us. The news media want to sell news, and they won’t give you much idea about the truth behind science studies.
Here are 5 reasons why you can’t rely on study results you see in the news media.
5 Reasons Why Science Studies Are Wrong
1. No Null Results: A null result means “no correlation found.” Results that say “Hey, guess what? We’ve found NO RELATIONSHIP between these two things!” are almost never published in journals. In other words, if the researchers in this study had found that there was NO correlation between number of Facebook Friends and a certain brain structure, the journal would not have published it. Journals want to sell journals, too, and null results are boring. (A toy simulation of this publication bias appears just after this list.) See more explanation at the Journal of Negative Results: http://www.jnr-eeb.org/index.php/jnr
2. Correlation Does Not Equal Cause: A link between two things doesn’t mean one causes the other. In fact, no social science study can ever say with certainty that one thing causes another. A lot of alcoholics also smoke, but smoking doesn’t cause alcoholism. Yet a news outlet might run that result under the headline “Smoking and Alcoholism are linked,” implying that one causes the other, when in fact the two are merely correlated: lots of bar patrons smoke. A CAUSE is different: smoking a cigarette doesn’t give you a craving for a drink, but smoking DOES CAUSE lung cancer. (The second sketch after this list shows how a hidden third factor can create exactly this kind of false link.) See more explanation at this George Mason University site: http://stats.org/faq_vs.htm
3. One Lone Example: One study does not mean much. For the science community to be thoroughly convinced of a correlation or a causal link between two things, many, many studies have to repeat the result. Over and over again, long-term studies showed the Smoking/Lung Cancer connection. Even with the tobacco industry funding its own studies that magically produced null results, medical researchers produced evidence many times over that the link was in fact very strong. One new study touting some connection should be treated with major skepticism. “Wait and see” is the approach all of us should take when we come across a lone study result. The autism and vaccines debacle is a heartbreaking example of this: Retracted Autism study ‘an elaborate fraud’: http://www.cnn.com/2011/HEALTH/01/05/autism.vaccines/
4. Statistical Shenanigans: There are well-known, well-tested, tried-and-true research methods. You do them right or you don’t; it’s that simple, and many published studies don’t get it right. Bad sample sizes (e.g. they don’t survey enough people), bad question design (e.g. they structure the questions to foster certain answers), bad sampling methods (e.g. they pick the wrong people to fill out the survey) and bad math (e.g. they run the wrong statistical tests, or run the right ones incorrectly) all plague many studies published in journals. Most journals use what is called “blind peer review,” in which volunteer scientists look over a submitted study and determine whether it is sound (worthy of publication). A reviewer usually doesn’t have access to the raw data, so she can’t rerun the analyses herself; she has to look at the other parts of the study and judge whether it was well done. This is a lot of work for an unpaid volunteer, and it’s almost impossible to vet every study correctly. Check out this article for more info: Sloppy Stats Shame Science, The Economist: http://www.economist.com/node/2724226
5. Funding Shenanigans: Paying for desired results happens all the time. As I mentioned before, the tobacco industry hired a bunch of scientists who, coincidentally, magically produced null results for the lung cancer question. How does this happen? Don’t scientists have integrity? Why does it matter who pays for the research? Well, let’s give our scientists the benefit of the doubt and assume that they are all pure in motivation: they are just scientists, and their work is their work, no matter where their paycheck originates. Even so, here’s what tends to happen (and it’s been shown to happen over and over): Researchers who are funded by a certain company or group (say, The Roman Catholic Church) tend to find results favorable to that group’s agenda. This happens for many reasons, but the most important is this: Researchers are human and want to survive. So they design studies that favor their funder’s agenda (e.g. phrasing the question as “When did you stop beating your wife?” instead of “True or false: there is no physical violence in my home.”). There are almost a million considerations in designing a study, and tiny decisions here and there amass into one big, brown-nosed, pandering study. See here for more information: Research Grant Funding: http://www.experiment-resources.com/research-grant-funding.html
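If you want to see reason 1 in action, here is a minimal sketch in Python (my own illustration, assuming NumPy and SciPy are installed; all the numbers are invented). It runs 1,000 small “studies” where the true effect is zero, then “publishes” only the ones that happen to reach statistical significance:

```python
# A toy simulation of publication bias. The "true" effect in every
# experiment below is zero: both groups come from the same distribution.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)
published_effects = []

for _ in range(1000):
    # Two groups drawn from the SAME distribution: no real difference exists.
    group_a = rng.normal(loc=0, scale=1, size=20)
    group_b = rng.normal(loc=0, scale=1, size=20)
    t_stat, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:  # only "exciting" results make it into the journal
        published_effects.append(abs(group_a.mean() - group_b.mean()))

print(f"{len(published_effects)} of 1000 null studies got 'published'")
print(f"average 'published' effect size: {np.mean(published_effects):.2f}")
# Roughly 5% slip through by chance alone, and every one of them
# reports a difference that does not actually exist.
```

By chance alone, about 5% of those null studies come out looking “significant,” and those are exactly the ones a journal full of no-null-results would print.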
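And here is a similar sketch for reason 2, showing how a hidden third factor can make two unrelated things look tightly linked. The variables are hypothetical, borrowing the classic ice cream/crime example: hot weather drives up both, but neither causes the other.

```python
# A minimal sketch of a spurious correlation driven by a hidden common cause.
# All names and numbers here are hypothetical, chosen only for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)
n_days = 365

# Hot weather is the hidden "third variable."
temperature = rng.normal(loc=15, scale=10, size=n_days)

# Each outcome depends on temperature plus its own independent noise;
# crucially, neither one depends on the other.
ice_cream_sales = 50 + 3.0 * temperature + rng.normal(0, 10, n_days)
violent_incidents = 20 + 0.5 * temperature + rng.normal(0, 5, n_days)

r, p_value = pearsonr(ice_cream_sales, violent_incidents)
print(f"correlation between sales and incidents: r = {r:.2f}")
# r comes out large and "significant," yet changing ice cream sales
# would do nothing to the incident count: only the weather links them.
```

The correlation comes out strong every time, yet banning ice cream would not reduce violence; only the shared cause (the weather) connects the two.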
Keep these 5 criticisms in mind each time you read an attention-grabbing headline about the latest link between Thing 1 and Thing 2. If a conversation partner tries to make a point by quoting a study result, bring up (or hyperlink to) the insider knowledge above about the dark side of science publishing. Train them (and your kids!) to look at each media report with a very healthy dose of skepticism.
Comments on this entry are closed.
My favorite professor in college used to discuss spurious correlation (he was an elections forecaster when not teaching). His favorite was the correlation between ice cream sales and murder rates.
Higher ice cream sales almost always coincide with an increase in the murder rate, which someone could easily turn into a shocking headline if the two had anything to do with each other. Sometimes two things can have the same root cause but be totally unrelated to each other; in this case, the root cause is the weather.
I just picked up a free kindle book the other day on this subject: http://www.amazon.com/gp/product/B001QL5MZ0 (still free for the time being).
Greg,
The Ice Cream/Murder rate correlation is a perfect and classic example of false causation. Thanks for reminding me.
I’ll check out that e-book. Seems like a good find!
-Christine
Here’s a scary article re: medical studies…
http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/
Nice add! Thanks, Mike.
You are stupid.