In September 2012, a paper published by a French scientist involved in anti-GMO activism – that’s genetically modified organism – claimed that a variety of corn known as NK603, genetically engineered by the U.S. agrochemical firm Monsanto to resist glyphosate herbicides and marketed by the same company, caused tumors in rats fed an exclusive diet of the corn.
The reaction was dramatic.
Almost instantly there were widespread calls by many, especially in GMO-phobic Europe, to crack down on agricultural biotechnology. In France, the Prime Minister called for a ban on the genetically engineered corn, while Russia halted imports of NK603 crops. In Africa, heavily dependent upon aid from and exports to Europe and thus very sensitive to European opinion, countries like Kenya went even further and decided to ban all GMO crops. The problem with all this proactive protection, unfortunately, is that the study in question has turned out to be almost certainly untrue.
Or, at least if you listen to the study’s many critics, it’s so substantially flawed that it should never have been published in the first place. As it happens, the journal that published the paper agreed, and retracted it on its own earlier this month after the study’s authors refused to accept that they had made many fundamental mistakes. Among them, it turned out, was the grave research sin of simply using way too few observations – that is, rats eating corn – to make any sound statistical inference about cancer one way or the other.
Combine that error with using rats more prone to develop cancer than other breeds of lab rat and actually reporting within the paper itself that consuming higher amounts of GMO corn led to fewer rats developing cancer, and the sensational claims made by the authors fell apart.
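To see why small groups matter so much, consider a back-of-the-envelope power calculation. The tumor rates and group sizes below are hypothetical, chosen purely for illustration – they are not figures from the study – but they show the general point: with only ten animals per group, even a very large difference in tumor incidence will usually go statistically undetected.

```python
import math

def norm_cdf(x):
    # Standard normal CDF, computed from the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_proportion_power(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided z-test comparing tumor
    incidence p1 vs p2 with n animals per group, using the
    normal approximation to the binomial."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    z_effect = abs(p1 - p2) / se
    return norm_cdf(z_effect - z_alpha)

# Hypothetical 30% background tumor rate vs. 60% in a treated group:
print(round(two_proportion_power(0.30, 0.60, n=10), 2))   # ~0.29 with 10 rats per group
print(round(two_proportion_power(0.30, 0.60, n=100), 2))  # ~0.99 with 100 rats per group
```

Under these made-up numbers, a ten-rat group would detect a doubling of tumor incidence less than a third of the time – which also means that chance fluctuations in either direction are easy to mistake for real effects.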
The study, critics say, simply did not add up and flew in the face of many others conducted elsewhere by equally talented, though ideologically neutral, scientists, which indicated that GMO corn could not be demonstrated to cause cancer under experimental conditions.
So what should members of the public take away from this scientific tempest in a kernel-sized teapot? Does GMO corn cause cancer? Can we trust the scientific establishment to provide us with the best information possible so that we can judge risks on our own?
In many respects, the study and the fallout from its retraction are a great example of science in action. A controversial claim, purportedly backed by empirical research, is made. The scientific establishment weighs in to determine whether the balance of evidence supports the claim – thus meriting further research and serious consideration – or does not. In this case, clearly it did not, so the result should be seen as a win for the scientific method and institutionalized science. True, those opposed to GMO technology may not be happy, but according to the rules of the scientific game – which are objective and fair – they lost, this time.
On the other hand, the fact that such shoddy research was allowed in the first place points to a larger crisis in institutionalized science that is nonetheless lurking in the background. As The Economist pointed out in October, the ‘trust, but verify’ mantra that institutional science ostensibly lives by is not exactly standard operating procedure in terms of how science is actually practiced. Today, a vast research complex consisting of millions of scientists employed in a variety of institutions around the world – universities, corporations, government research centers – simply takes a lot of what passes for science on faith.
Not faith in any particular set of findings, of course, but faith in the peer-review process itself to weed out research that is too methodologically flawed to pass muster with the standards of the field a given scientist works in.
Unfortunately, as The Economist again reports, too often flawed studies like the case of the cancer-causing corn are simply passed along with little in the way of critical scrutiny because the authors are, well, fellow scientists, and fellow scientists would never fudge the numbers or make them up out of whole cloth, would they?
Thus, the crisis in science that the cancer corn study highlights is that trust has gotten in the way of verification for a number of mutually reinforcing but disconcerting reasons. The first is that verification is a difficult, painstaking process that, curiously enough, gives those pursuing it little in the way of incentive to do so. This is because science today is run on the principle of publish or perish, wherein to advance in one’s field one must constantly produce papers that are published in high-ranking, peer-reviewed scientific journals. The more one publishes, put simply, the more accolades, money – both in salary and research grants – and prestige a scientist receives.
Thus, there is a premium on putting out a large number of studies – even if the findings are marginal – so as to publish as much as possible. Since verification studies are time-consuming, expensive, and run the risk of actually confirming that the original study’s findings are true – and thus not being interesting enough to publish – actual scientists deep in the publish-or-perish game rarely do them. This obviously can corrupt a system quickly if left unchecked.
Scientists intuitively know this, so quite often stretch what is claimed in their papers to the limits of what they think they can get away with. After all, what matters is not so much that the claim is actually right or even important in the way their own field measures things, but that the paper is published and allowed to go on the scientist’s curriculum vitae – the list of research publications that the researcher in question has produced and which is used by peers to judge their prospects for promotion, tenure, and much else. Pad the vitae and the future, so to speak, is bright even if the actual knowledge contained therein is, in contrast, rather dim and murky.
Advance this organizational model a few generations and one can see how the corn study could get through without much in the way of oversight. A gentlemanly trust that others are doing sound work turns, instead, into a shell game in which scientists tacitly agree not to check each other’s shoddy, marginal work too closely. Let it go on a few generations more and hardly anyone in the business puts much stock in peer review, or in the journals dedicated to it, except those handing out promotion-and-tenure awards and high-dollar research grants.
Science as an institution is therefore running the risk of hitting something known as Campbell’s Law, a sociological finding that states that the more a social indicator or quantitative measure is used for decision-making purposes, the more likely that indicator is to become corrupted by those subject to it. Put more simply, when people are put under the metaphorical gun and forced to measure up, they often find ingenious ways to cheat in order to do exactly that. Since scientists are people, too, and rather clever ones at that, it should come as no surprise that many of them have learned to game the Marquess of Queensberry rules that govern the peer-review process.
The consequences of this are grave. Again, according to The Economist, those investing in science-heavy business ventures such as biotechnology estimate that perhaps only half of all published research can be replicated. Even worse, “last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 ‘landmark’ studies in cancer research,” while, earlier, a group at Bayer managed to repeat just a quarter of 67 similarly important papers.
This is devastating if true and is a terrible indictment of the system of institutionalized science as it currently operates. It represents a vast waste of resources in time, money, and human potential that could no doubt be better used elsewhere.
What’s worse, it can potentially undermine the public’s confidence in the only real method humanity has found to determine what is or is not true in the real world. Given how much modern society depends on science to produce the miracles it needs to survive for another day, the degree to which the institution itself has become something of an Oz-like figure dependent upon no one looking too closely behind the curtain is disturbing.
Is science, therefore, something we should instinctively distrust? Are climate-change deniers, vaccine skeptics, and anti-GMO crusaders right to worry that we are being led astray by a vast conspiracy of men and women in white lab coats? Most certainly, no, they are not. As the case of the cancer-causing corn demonstrates, when truly spectacular claims are made science as an institution can still work as it is supposed to, with competition between research groups hashing it out – often viciously – as to what the real facts of the matter may be.
The problem is that the matter being so hashed out has to be high-profile and lucrative enough to justify bringing in the vast amounts of analytical force science can collectively throw at a problem. Only then, when the stakes are high, will competition really force science to act as it is supposed to. For science skeptics who see conspiracies in every research lab, this paradoxically means that where stakes – and the fortunes and reputations bet on them – are highest, science usually gets it right in the end precisely because so much is at risk. This is exactly the opposite of the conclusion science skeptics draw about where most scientific fraud and malpractice are likely to be found.
Cancer-causing corn, vaccine safety, and climate change are prime examples of just such high-profile matters on which science is likely to get it right precisely because so much is at stake. As for everything else – especially stuff that bubbles up from less high-profile areas of scientific research, or can be found in less-reputable journals, or is produced at low-tier universities and research centers – well, best take it with a grain of salt. Exactly what a good scientist is supposed to do anyway.