Research integrity in times of pandemic


Itajubá Clinics Hospital, Itajubá, MG, Brazil
Hospital de Clínicas de Porto Alegre, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil

Keywords: covid, integrity, research

EDITORIAL

In 1994, Douglas Graham Altman, one of the greatest statisticians of all time, wrote: "We need less research, better research, and research done for the right reasons"1. Twenty-seven years ago, Altman pointed out that the system favored unscientific behavior and that "bad science" was easy to publish, highlighting the financial cost of this mass of poorly designed research built on erroneous statistical methods, unrepresentative samples, or outright fraud. The covid-19 pandemic has once again put clinical research in check. The pressure for urgent answers was unprecedented. Knowledge of the origin of the virus, the dynamics of transmission, the pathophysiology of the disease, and effective pharmacological and non-pharmacological measures would be counted in lives, in economies, and in governments.

With the exponential increase in the number of submissions about covid-192, renowned scientific journals such as the New England Journal of Medicine, The Lancet, and JAMA saw their editorial flow grow to the point that their teams of reviewers could not keep up with the demand3. The race to be the first to publish on the scientific topic of the moment shortened the interval between submission and publication to an extreme. As a result, the quality of peer review could not be maintained, and indeed many articles published in renowned journals would not have been accepted in the past4. As of September 2021, 154 published articles on covid-19 had been retracted5. The reasons for retraction included lack of data integrity, plagiarism, errors in analysis, methodological deviations, ethical conflicts, and breaches of privacy6.

Observational findings and ecological studies, intrinsically limited for establishing cause-and-effect relationships, were mistakenly elevated to the category of robust intervention studies and dictated new clinical practices7. Other well-known problems, such as "statistical fishing", "data slicing", and selective reporting, were also rife. The pharmaceutical industry took part in this scenario, financing and feeding back research with serious errors of conduct and deviations of integrity, with the aim of mass-marketing drugs of questionable biological plausibility7. This overproduction of low-quality science has also padded academic curricula.

Studies with extraordinary results that are never replicated, commonly arising from biased methodology or even fabricated data, tend to be published in scientific journals of low editorial quality9. The appeal of studies with "too good to be true" results increases the prestige of scientific journals, the number of citations (the H-index), and the amount of public attention (Altmetric score). The simple promise of indexing and of a permanent record (a Digital Object Identifier) at affordable prices led predatory journals to promote much of the misinformation, with articles alleging everything from a drop in oxygen levels caused by wearing masks to a role for 5G networks in spreading the coronavirus. Predatory journals, however, should be seen not as a cause but as a consequence of the unrestrained pace that research has assumed, especially in times of pandemic10.

Preprint repositories such as medRxiv, bioRxiv, and Research Square came to the academic world as a mechanism for informal peer review and pre-submission dissemination, making papers available before acceptance by indexed journals and providing a basis for peer feedback ahead of publication of the final manuscript. While they have improved access to studies and their raw data, they have also done the disservice of disseminating "bad science." For the layperson, distinguishing a poor-quality preprint from a carefully peer-reviewed article is not a trivial task, and this has amplified the spread of incorrect data11. The lack of a clear policy on the availability of manuscript versions has aggravated this situation.

When faced with concerns about research integrity, many authors react defensively and disdainfully. They refuse to share complete anonymized raw data under the most diverse pretexts. As a rule, editors are slow to act on studies with errors or suspected scientific misconduct. Letters to the authors are rare and, when made available by scientific journals, are subject to character limits12. During the covid-19 pandemic, the most emblematic case occurred in The Lancet13: the article was retracted only after the authors refused to cooperate with the investigation. The retraction of an Egyptian randomized controlled trial (RCT) published in a preprint repository came long after it had been used in several systematic reviews and had influenced ineffective ivermectin treatment worldwide. In Brazil, there are strong suspicions that a research group violated basic ethical precepts in an attempt to adopt an anti-androgenic drug for treating covid-1914.

RCTs and systematic reviews, with or without meta-analyses, occupy the top positions in the hierarchy of evidence quality in biomedical science. The time required to formulate a relevant, innovative, engaging, ethical, and feasible research question can be frustrating for society and for authors less familiar with the process8. Time is also needed to obtain ethical approval, recruit and randomize participants, intervene, analyze the findings, and publish. Systematic reviews, as instruments for assembling fragments of unconsolidated knowledge into an answer to a scientific question, can function as "time bombs" when inappropriate methodology is applied, the quality of the evidence is not assessed, and the results are not interpreted judiciously.

This massive influx of low-quality studies and data of dubious origin undermines academic efforts, compromises health care, creates sources of misinformation, and can guide harmful public policies. Poorly established scientific concepts require a Herculean mobilization to be dislodged. The fragility of clinical research is not news to the scientific community, yet there is still inertia in adopting corrective measures. Government bodies and funding agencies must share responsibility for scientific reform. Initiatives to train and recognize the volunteer work of reviewers, such as Publons; the mandatory public registration of research projects before they begin on platforms such as ClinicalTrials.gov and ReBEC; and the sharing of datasets at the time of publication of the final article all aim at transparency and scientific honesty and should be broadly encouraged. Less and better research, avoiding waste of time and resources, has been urgent since 1994.