As the scientific enterprise becomes ever more competitive, it appears that misconduct and outright fraud are increasing. Emma Dorey writes that researchers are only now beginning to get a handle on the size of the problem
Despite our high expectations of scientists, fraud and deception in academic research seem to be on the rise – misconduct that could lead to flawed policies or dangerous medicines. And while more empirical data is needed before these toxic research practices can be tackled, stamping out questionable human behaviour in scientific endeavour will be next to impossible.
The past few years have seen a spate of high-profile cases of scientific misconduct or outright fraud. Hwang Woo-suk, the disgraced South Korean scientist, was recently given a suspended custodial sentence for his part in fraudulent claims to have cloned the first human embryos and extracted stem cells from them (C&I 2009, 21, 6). Then there were the revelations about Jan Hendrik Schön, whose breakthrough ‘discoveries’ about the properties of plastic, many of which were published in Nature and Science, turned out to be fabricated (C&I 2006, 1, 6). And Merck faced allegations of manipulated research after its painkiller Vioxx was withdrawn, amid numerous revelations of suppression of important safety data and aggressive marketing practices (C&I 2008, 9, 11).
Facing up to fraud
Scientific misconduct is ‘a growing concern, partly because it has the potential to undermine science’s credibility in general, and partly because of the increasing amount of public funding that science has received over the past few decades,’ says Massimo Pigliucci, chair of the department of philosophy at Lehman College at the City University of New York, US.
A meta-analysis of scientific misconduct surveys and interviews has shown that 2% of scientists admit fabricating, falsifying or modifying data, and 34% admit to some kind of questionable research practice – numbers that are almost certainly on the low side because of self-reporting. And a massive 72% thought that the research practices of colleagues were questionable (PLoS One doi:10.1371/journal.pone.0005738). The results also suggest that misconduct is most common among medical or pharmacological researchers – a finding that is ‘both not surprising and particularly worrisome,’ says Pigliucci.
The analysis also found that the frequency of such behaviour is inversely proportional to its perceived gravity. So questionable research practices are much more common than fabrication, for example. ‘This, of course, is just what scientists say, so it could be untrue, but it makes sense,’ says Daniele Fanelli, Marie Curie research fellow at the University of Edinburgh’s Institute for the Study of Science, Technology & Innovation, who conducted the study.
So why does scientific misconduct happen? ‘Human nature surely has a lot to do with it. There is no reason to think that scientists are fundamentally different from other human beings when it comes to the pursuit of fame and financial reward,’ says Pigliucci. ‘The pressure in terms of “publish or perish” and the increasing difficulty of getting jobs and grants funded certainly adds to the problem.’
Other factors may also be involved: advances in technology make it easy to cut and paste text or manipulate images, while tough competition to attract high-profile papers may lead prestigious journals to skimp on intensive scrutiny of technical details.
‘The primary responsibility does lie with the scientists themselves, of course, though an article by Paul Basken in the Chronicle of Higher Education (24 July 2009) rightly argues that there is responsibility on the part of universities, which do not do a good job at pursuing suspected cases of misconduct; the government, which does not establish a clear set of guidelines; and even funding agencies, such as the NIH, which requires only minimal training in ethical conduct on the part of its funded scientists,’ says Pigliucci.
The real problem is a lack of transparency, according to Aubrey Blumsohn, a scientific misconduct blogger. ‘Investigations of scientific misconduct should themselves align with the usual principles of scientific discourse – open discussion, honesty, transparency of method, public disclosure of evidence, open public analysis and public discussion and reasoning underlying any conclusion,’ he says. ‘This is usually not the case. The scientific community covers its tracks, prevents real fraud from being prosecuted and prevents knowledge of fraud from being discussed. Usually this is to prevent reputational damage to “important” people.’ Blumsohn adds that many of these problems would be prevented by allowing readers, reviewers and authors access to raw scientific data.
Meanwhile, the extent of the problem is very hard to estimate. The impact of misconduct depends on various factors, such as its frequency, the level of distortion of results and how many people will actually be misled by the fraudulent data, Fanelli explains. In addition, there are the consequences for society – whether the fraudulent research leads to fraudulent medicines, for example, or to bad environmental policies, he says.
While understanding of the causes of misconduct is still in its infancy, education and the fostering of a culture of integrity in research are clearly going to be part of the solution, Fanelli says. ‘And many initiatives, particularly in the US but now also in Europe, are heading in that direction – with courses, workshops and initiatives aimed at increasing researchers’ and students’ understanding of the boundary between good and bad practice.’
Policy changes will be needed to tackle scientific misconduct that is a public threat, but more empirical data will be needed to support such decisions, Fanelli says. ‘For example, it is typically said that misconduct might be growing because scientists are increasingly competing for positions and grants, and their careers are determined by the sheer number of publications, which might force them to produce results at any cost. This makes sense, and would call for policy interventions, of course. But in my opinion there is little evidence to support these claims.’
There are signs that steps are being taken to tackle fraudulent behaviour in science. In October 2009, recognising that questionable research practices can tarnish the UK’s reputation as a centre of excellence, the UK Research Integrity Office, the independent body that offers advice about the conduct of research, launched a code of practice for researchers.
Pigliucci says individual researchers, universities and funding agencies are increasingly aware of the problem, and thinks the situation is likely to improve significantly in the short term. ‘In the long run, science’s built-in mechanisms of perennial peer review and competition among labs do guarantee a high degree of positive outcomes and self-correction, but the problem is that in the short term a certain number of bad papers [containing fraudulent or questionable data] remain in the literature and can be used for policy or medical decisions,’ he says. ‘Science does self-correct, but it takes time.’ Pigliucci points out that once a paper based on misconduct gets into print, studies show it typically takes more than two years to be retracted, and over three times as long if a senior researcher is involved (Journal of Clinical Epidemiology doi:10.1016/j.jclinepi.2007.11.019).
Whatever the efforts to tackle misconduct, scientific bias may be, to some extent, inevitable, Fanelli says. ‘Scientists, like all human beings, tend to filter reality through the lens of personal beliefs. They tend to see in the world what they believe to be true, or what they wish it to be. No matter how hard they try, they will never be really indifferent to their results. And there is nothing wrong with that. All we can do is learn to acknowledge the potential bias, and evaluate scientific results accordingly.’
China’s exponential research growth could fuel fraud
China’s research output has exploded four-fold over the past decade, far outpacing research activity in the rest of the world, according to a global research report by Thomson Reuters. The country generated nearly 112,000 research papers in 2008, up from just over 20,000 in 1998. China surpassed Japan, the UK and Germany in 2006 and now stands second only to the US (C&I 2009, 22, 7).
‘All the data we analyse refer to publications in journals that meet Thomson Reuters editorial standards, including those on peer review,’ says Jonathan Adams, director of research evaluation at Thomson Reuters. ‘We can therefore regard the indexed growth of China’s share of world publications as representing a real increase in research outputs meeting international quality standards.’
It has been reported that rates of duplicate publication are higher in China and Japan than in other industrialised countries (Nature doi:10.1038/451397a). However, it is not clear whether the levels of other fraud or misconduct are elevated in Chinese academia. ‘We understand that there is significant pressure on researchers to publish and, where possible, to publish in high-quality international journals. This may be more explicit in China – for example, it has been reported that incentive payments are offered to those who publish in Nature and Science,’ says Adams. But he points out that pressure is also applied to researchers in the UK and the US to meet these challenges, and that promotion and tenure in many countries may hang on regular output in top quality journals.
Nevertheless, a recent editorial in The Lancet paints a picture of growing scientific fraud in China (Lancet, 2010, 375, 94). Recently, 70 Chinese papers had to be retracted by Acta Crystallographica Section E after the crystal structures were discovered to be fabricated. The journal’s editors warn that preliminary investigations suggest that the number of retractions will rise. The editorial calls on China’s government, which funds nearly all scientific research, to take a more active role in promoting integrity and establishing robust and transparent procedures to handle misconduct.