Not long ago I was publicly accused of cherry-picking. But I don’t even like cherries. They taste a little bit poisonous to me. So what is cherry picking in the scientific sense, and did I do it? Do you do it?
Wikipedia defines it like this:
Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position. It is a kind of fallacy of selective attention, the most common example of which is the confirmation bias. Cherry picking may be committed intentionally or unintentionally. This fallacy is a major problem in public debate.
Conducting research is a constant struggle. You learn from the start of research methodology courses that bias is an important part of the discussion. You always want to perform that set of experiments, reach that combination of data that is convincing enough to make it clear that these conclusions cannot result simply from your wish to find them.
Wishing for a certain outcome, then squinting your eyes until the fuzzy image matches your idea – looking only in the direction where your idea seems to find support – that is bias. So how can we avoid it?
The easy answer is that we can’t. We will always have favourite theories and favourite experts. We will be more inclined to believe them and mistrust others if they contradict our favourite view.
That’s not to say that efforts to stay aware of our predispositions couldn’t go a long way.
Fascination is a great tool for avoiding bias. The world is a magnificent place. There is amazing stuff going on in it. Much of that we didn’t have any idea about before science, and there’s plenty more left to uncover. The laws of physics are not yet pinned down; the phenomenon of consciousness keeps puzzling most of us, scientist and layman alike – to mention just the first two fascinating topics that spring to mind.
Much of that fascinating stuff would be lost on me if I valued my favourite ideas more than understanding the real world. Don’t get me wrong, I love having ideas. I write fiction, and that’s all about having ideas of my own and running with them. It’s great to explore one’s own ideas. But having ideas about the real world and simply running with them is no longer creativity, it’s blindness. It’s arrogance. So I try to remind myself that I would rather be wrong and found out than always right. That way I get to keep on understanding.
When I wrote a piece called Natural Assumptions about organic farming over at the Skepti Forum Blog, someone in the comments quickly accused me of cherry-picking and making poor assumptions. I was not happy, obviously. But as painful as it was I also acknowledged that if I were doing those things, I would rather find out than contribute to the spread of bad information. This was my answer:
I would love to find references that do defend environmental benefits of organic farming. Most of all, I would love to find evidence of organic farming orienting itself after scientific evidence. Sadly what I have found so far does not support that.
Regarding the choice of research presented here, I find it sad to be accused of cherry-picking without knowing what research it is that I am ignoring. I did not embark on this in order to discredit organic farming; on the contrary.
Apart from the handful of individual papers which were influential in my journey into learning more, I also reference here what is described as ‘the most comprehensive meta-analysis of nutritional studies’ and a 2012 analysis of over 100 studies across Europe on the environmental effects of organic vs conventional farming. I can’t easily ignore the scope of those. As I said, I would love to find more research on the topic. If you think I cherry pick, then I hope you can point out (preferably a similar body of) evidence that I have missed.
Luckily, my critic was a person interested in civil debate, and he provided me with a 6-point list of issues he saw with my piece. I can’t tell you how rare and valuable this is.
I am thorough, and have gone through that whole list. I will add my replies to the remaining 5 points in later posts (On farming, animals, and the environment, and Delving deeper into the roots of organic).
My critic’s first point refers to the meta-analysis I talked about in the context of (no) nutritional benefits of organic food. For a discussion of that meta-analysis, see this article in Stanford Medicine News – Little evidence of health benefits from organic foods:
“The most comprehensive meta-analysis to date of existing studies comparing organic and conventional foods.”
or you can also find here a direct link to the paper (behind paywall).
My critic argued as follows:
The Stanford study is not the only meta-analysis conducted on organic food, it has been criticized for excluded data and poor statistical analysis, and other studies have reached opposite conclusions (e.g. http://www.ncbi.nlm.nih.gov/pubmed/24968103). I think it’s more accurate to say that the jury is still out on this one.
Going from a supporter of organic food to an organic critic, one could think that I would be a good example of not being biased – after all, I had looked at the evidence and changed my mind. But that doesn’t clear me of bias. I could have a psychological need to embrace the opposite view in order to compensate for my perceived failure, for instance, resulting in a knee-jerk reaction in the other direction. Was I looking at this one meta-analysis while ignoring others that may have reflected more positively on organic food?
I took a look at the study my critic had provided, titled: “Higher antioxidant and lower cadmium concentrations and lower incidence of pesticide residues in organically grown crops: a systematic literature review and meta-analyses.” I will call it the High Antioxidant study.
I wanted to see what critical evaluations had been made of that study. But, wait. Should I not have started by looking at critical evaluations of the Stanford study I had referenced? After all, I had just heard that it “had been criticized”. Yet I wasn’t looking for that criticism, because I suspected there was none of any seriousness. This is bias.
Optimally I would have received a reference to said criticism from my critic, since he was the one who brought it up. But as it happens, he didn’t provide a source, and I still haven’t stumbled on one. It’s important to note that because of my bias (the simple fact that I had, for whatever reason, previously arrived at a favourable view of the Stanford study), I was unlikely to be the first person to go looking for criticism directed at it. That’s not good, is it?
Well, at its core this bias is both useful and dangerous.
The economics of mental resources dictate that I should not actively seek to disprove all my prior conclusions. Being biased in this regard is a useful mental heuristic – I don’t have the time to look for criticism of every idea I consider well established. But at the same time, this is a constant natural bias that we are likely never going to get rid of. A bias to trust a credible-seeming study with a wide scope (in this case, a meta-analysis) is not one of the worst things to have. This heuristic is much more dangerous when it comes to our assumptions – these ideas are more fundamental, more pervasive, and at the same time often not very well thought through. Formation of assumptions may happen without us fully paying attention, and our bias can keep those assumptions in place for a long time.
There are important habits I can develop to keep this kind of bias in check. I should be aware of it, and remember not to consider myself free of bias. But I by no means have to remain hopelessly blinded by it. Whenever I am presented with criticism challenging my prior established ideas, I should remind myself to give those claims (or evidence, if indeed there is evidence) serious consideration.
I decided that I would fairly entertain the possibility that the Stanford study was somehow flawed, and that I could not completely trust its conclusions. The next logical step was to compare it to other similarly extensive sources – other reviews and meta-analyses. So I started looking at the suggested High Antioxidant study.
I found a source, Science 2.0, criticising the study for being too inclusive – for not dismissing studies with flawed methodology. Interesting, considering the Stanford study had been accused of the opposite: excluding too much. Which was the better procedure? From Science 2.0:
They used a lot of papers, that is a good thing if there is actually a large body of knowledge and it is rigorous, but in even the most controversial toxicological issue, the EPA will end up disqualifying all but about a dozen papers due to lack of underlying data being included or methodological concern. In a review, they look at no data, of course, and 343 papers becomes the problem rather than the solution when the methodology is flawed.
Meta-analysis, as everyone with statistics knowledge knows, can boost the strength of systematic reviews when done properly but easily suffers from bias unless the researchers are truly interested in controlling eligibility criteria and methodological quality. Without controlled eligibility, it’s easy to find any pattern you want. With Web of Knowledge search terms like ‘organic’ and ‘biodynamic’, it’s really easy to skew the inclusion. Then they synthesized their dramatically different studies using a random effects model.
There was also criticism of the study’s claims that antioxidants reduce risk of chronic illness, both by Science Media Centre, Nutritional content of organic and conventional foods, and pharmacologist Ian Musgrave over at The Conversation, Organic food is still not more nutritious than conventional food:
While consumption of antioxidant containing fruit and vegetables have been associated with better health outcomes, the antioxidants themselves do not appear to have any role in this effect (see also here, here and here), despite the number of television advertisements that exhort us to buy antioxidant enriched food. Indeed, the major finding is that high concentrations of fat soluble antioxidant vitamins are associated with detrimental effects.
Ian Musgrave points out the lack of significance in the differences in vitamin and antioxidant levels – carotenoid content in fruits was 50% higher, but fruits are a poor source of carotenoids overall; our intake mainly comes from vegetables, which have far more carotenoids and showed no difference between organic and conventional. Organic apples had 6% more vitamin C, so to reach your recommended daily intake you’d need 5.3 organic apples versus 5.6 conventional ones. Pesticide and cadmium levels were likewise low for both, with no demonstrable harm from the residues of either (the food we have in the western world is the safest humans have ever eaten, and pesticides are one of the least of our worries, see more here and here).
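You can check for yourself how little a 6% difference buys you. Here is a back-of-the-envelope sketch: it takes the 5.6-apple conventional baseline and the 6% vitamin C advantage from the article as given (neither number is re-derived here) and computes the equivalent number of organic apples.

```python
def apples_needed(conventional_apples: float, organic_advantage: float) -> float:
    """Apples needed for the same vitamin C intake when each organic apple
    carries `organic_advantage` (e.g. 0.06 for 6%) more vitamin C than a
    conventional one."""
    return conventional_apples / (1 + organic_advantage)

# 5.6 conventional apples' worth of vitamin C, organic apples 6% richer:
print(round(apples_needed(5.6, 0.06), 1))  # → 5.3
```

A difference of a third of an apple per day is hard to read as a meaningful nutritional benefit, which is Musgrave’s point.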
Ian Musgrave continues with the context of natural variability:
the nutritional value of foods is very variable, influenced strongly by local regional factors, variations in growing seasons and rainfall, ripeness of food when harvested and time of harvest. I’m writing this in South Australia, possibly the wine capital of Australia, where we know that having vines on the different sides of a hill will affect sugar and flavor of the grapes. Even different cultivars of the same crop may vary significantly in composition due to the factors above. Nutritional values of crops can vary from between 100% to nearly 200% (which should be kept in mind when the differences reported between conventional crops and organic crops run from 6-69%).
So perhaps the High Antioxidant study didn’t reach such a strong conclusion in favour of organic produce after all. But choosing between two reviews as a non-expert, without actually sitting down to read through and evaluate the studies’ merits myself, my bias might lead me to take the word of whichever side sounded more convincing and wrongly dismiss the points of the other. How could I avoid that? My best approach would be to see what wisdom other studies could bring to the table.
The Conversation mentioned two other reviews concluding no difference. I found that a medical science blogger, Steven Novella MD, whom I value (bias? – one of my favourite experts), had made a summary of the existing meta-analyses on the topic. He had looked at the High Antioxidant study and laid it out with four other reviews on the topic. He begins with a great principle of how to approach a scientific question.
Whenever I am trying to quickly grasp the bottom line of any scientific question, I look for a consensus among several independent systematic reviews. If multiple reviewers are looking at the same body of research and coming to the same conclusion, that conclusion is likely reliable.
In this case, there are three [he later added a fourth, see below] other recent large systematic reviews and meta-analyses of the same research on the nutritional content and safety of organic vs conventional produce. The other three studies all came to the opposite conclusion as the current study.
Here are the studies. A 2009 review by Dangour et al. concluded:
On the basis of a systematic review of studies of satisfactory quality, there is no evidence of a difference in nutrient quality between organically and conventionally produced foodstuffs. The small differences in nutrient content detected are biologically plausible and mostly relate to differences in production methods.
A 2010 review also by Dangour found:
From a systematic review of the currently available published literature, evidence is lacking for nutrition-related health effects that result from the consumption of organically produced foodstuffs.
And a 2012 review by Smith-Spangler et al., which is my ‘Stanford study’, found:
After analyzing the data, the researchers found little significant difference in health benefits between organic and conventional foods. No consistent differences were seen in the vitamin content of organic products, and only one nutrient — phosphorus — was significantly higher in organic versus conventionally grown produce (and the researchers note that because few people have phosphorous deficiency, this has little clinical significance).
Novella adds another 2002 review which also finds no nutritional difference.
At this point I am satisfied with my conclusion and I stop looking. I am convinced that my earlier statement reflects the best scientific knowledge on the subject. How can I know if my bias is keeping me from seeing this topic clearly?
It must be reasonable to assume that, should there be a stable, measurable difference in the nutritional content of organic food, other reviews would arrive at that conclusion independently of each other. Instead, we have four reviews along similar lines – finding no nutritional or health benefits of organic food. Then we have one study stating a different conclusion, but on closer look demonstrating very small differences for a few nutrients, likely at a level of no consequence for our health or nutrient intake.
Could the four of them be deeply flawed and the High Antioxidant one be the best of the bunch? Could future studies show a much greater difference in nutrient content? That is possible, but based on the evidence at the time, it does not seem likely.
For me to reasonably doubt the common conclusion of the four reviews, I would need to see detailed reports on their faults, at least in similar depth to what I have seen for the outlier study. Within the limits of time and effort, I accept my bias and will not attempt further scrutiny of them. Why? Because in this light, the evidence of no nutritional benefits of organic food is quite strong, and choosing to trust the outlier meta-analysis would indeed be cherry-picking. I’ve never liked cherries.