"Compromised science" news/opines (includes retractions, declining academic standards, pred-J, etc)

India’s retraction crisis casts shadow over science research
https://timesofindia.indiatimes.com...er-science-research/articleshow/120238864.cms

EXCERPT: As of 2023, 40,822 research articles were retracted globally, according to the Retraction Watch Database. China led with 21,999 retractions in 2023, followed by the US with 3,731 and India with 2,737. Retractions occur when a paper is found to be flawed due to errors, plagiarism, data fabrication, or peer-review fraud.

While some stem from honest mistakes, many involve misconduct, driven by the ‘publish or perish’ culture. In India, academic promotions and funding often hinge on publication counts, tempting researchers to cut corners. Paper mills, which produce fraudulent studies for a fee, have exploited this pressure, flooding journals with sham research.

The consequences of such scientific fraud are profound, as history shows...

= = = = = = = = = = = = = =

Predatory journals even worse since "Get Me Off Your F*ck*ng Mailing List" was accepted for publication
https://boingboing.net/2025/04/12/p...ailing-list-was-accepted-for-publication.html

EXCERPTS: I have to admit, as an academic who is frequently frustrated by predatory journals as well as by the inability to extricate myself from some journal and conference mailing lists, I thoroughly resonate with the paper "Get Me Off Your Fucking Mailing List," which I can only assume was submitted in a fit of rage to the predatory International Journal of Advanced Computer Technology back in 2014.

[...] Vox points out that while this particular incident is "pretty hilarious," it points to bigger issues in academic and scientific publishing and the growth of "online-only, for-profit operations that take advantage of inexperienced researchers under pressure to publish their work in any outlet that seems superficially legitimate." These journals differ from legitimate journals because they don't conduct peer reviews—heck, some clearly don't even read the papers! These predatory journals also require payment from the author to be published, whereas legitimate journals don't.

Sadly, and unsurprisingly, more than a decade later, the problem of predatory publishing and the accompanying issue of junk science have just gotten worse...

= = = = = = = = = = = = = =

Invasion of the ‘journal snatchers’: the firms that buy science publications and turn them rogue
https://www.nature.com/articles/d41586-025-01198-6

Study finds dozens of journals that have hiked their fees and started churning out papers after being acquired by small, recently formed companies...

- - - - - - - - - - - - - - -

Tell-tale signs of stealth journal takeovers: A bibliometric approach to detecting questionable publisher acquisitions
https://zenodo.org/records/15213855

ABSTRACT: Stealth journal takeovers, that is, the discreet acquisition of established academic journals by entities with questionable publishing practices, represent an emerging threat to the integrity of scholarly communication. This study proposes a set of bibliometric analyses to detect such takeovers, focusing on sudden shifts in citation patterns and authorship networks.

Based on an analysis of 55 journals linked to a known network of related publishers, we identify substantial increases in cross-journal citations and shared authorship following ownership transitions, often between journals with no clear thematic connection. These patterns may serve as early warning signals for bibliographic databases and other academic stakeholders aiming to avoid compromised journals.

Our findings also demonstrate the potential of scientometric methods to support the detection of problematic publishing practices, contributing to the growing field of forensic scientometrics...
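
For readers curious what such a bibliometric screen might look like in practice, here is a minimal sketch (not the authors' code): it flags a journal whose share of outgoing citations pointing at other journals in the same publisher network jumps after an acquisition year. The citation table, column names, and the 3x threshold are all illustrative assumptions.

```python
# Illustrative sketch of a stealth-takeover signal: does the share of citations
# going to journals in the same publisher network jump after the acquisition?
# Column names and the 3x threshold are assumptions, not the study's parameters.
import pandas as pd

def network_citation_share(cites: pd.DataFrame, network: set, year_col="year") -> pd.Series:
    """Yearly share of outgoing citations that point to journals in `network`."""
    cites = cites.copy()
    cites["to_network"] = cites["cited_journal"].isin(network)
    return cites.groupby(year_col)["to_network"].mean()

def flag_takeover(cites: pd.DataFrame, network: set, acq_year: int, ratio=3.0) -> bool:
    """True if the post-acquisition network-citation share is `ratio` times the prior share."""
    share = network_citation_share(cites, network)
    before = share[share.index < acq_year].mean()
    after = share[share.index >= acq_year].mean()
    return bool(before > 0 and after / before >= ratio)

# Toy example: pre-acquisition, 1 in 5 citations stays in-network; afterwards, 4 in 5 do.
rows = []
for year in range(2018, 2021):
    rows += [(year, "J SameNetwork")] + [(year, "J Unrelated")] * 4
for year in range(2021, 2024):
    rows += [(year, "J SameNetwork")] * 4 + [(year, "J Unrelated")]
df = pd.DataFrame(rows, columns=["year", "cited_journal"])

print(flag_takeover(df, network={"J SameNetwork"}, acq_year=2021))  # True (share 0.2 -> 0.8)
```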
_
 
‘Squared blunder’: Google engineer withdraws preprint after getting called out for using AI
https://retractionwatch.com/2025/04...hdraws-arxiv-preprint-tortured-phrases-genai/

An expert in AI at Google has admitted he used the technology to help write a preprint manuscript that commenters on PubPeer found to contain a slew of AI-generated phrases like “squared blunder” and “info picture.”...

- - - - - - - - - - - - - - - -

Sodom comet paper to be retracted two years after editor’s note acknowledging concerns
https://retractionwatch.com/2025/04...rs-after-editors-note-acknowledging-concerns/

Scientific Reports has retracted a controversial paper claiming to present evidence an ancient city in the Middle East was destroyed by an exploding celestial body – an event the authors suggested could have inspired the Biblical account of Sodom and Gomorrah. The decision comes two years after Scientific Reports, a Springer Nature title, published an editor’s note informing readers the journal was looking into concerns about the data and conclusions in the work...

- - - - - - - - - - - - - - - -

UC Davis research director loses three papers for image manipulation
https://retractionwatch.com/2025/04...tor-allen-gao-retractions-image-manipulation/

A lead researcher at UC Davis has lost three decades-old papers from the same journal for image duplication, and the journal says it is investigating more...

- - - - - - - - - - - - - - - -

Suspended UK surgeon earns nine expressions of concern, one withdrawal
https://retractionwatch.com/2025/04...tony-dixon-expressions-of-concern-withdrawal/

A U.K.-based surgeon who was suspended last year for conducting colorectal surgeries that caused harm to hundreds of women has had nine of his research papers flagged and one withdrawn...
_
 
Science sleuths flag hundreds of papers that use AI without disclosing it
https://www.nature.com/articles/d41586-025-01180-2

Telltale signs of chatbot use are scattered through the scholarly literature — and, in some cases, have disappeared without a trace...

- - - - - - - - - - - - - - - -

When do scholarly retractions become a form of censorship?
https://quillette.com/2025/04/21/when-do-scholarly-retractions-become-a-form-of-censorship/

Focusing on the handful of papers that are retracted for political reasons can obscure the more important problems afflicting the field of academic publishing. [...] Obviously, the threat of ideologically motivated retraction campaigns is real. And sometimes, you even see authors take the initiative by asking to have their articles retracted after they observe that they are being cited in support of unfashionable conclusions...

- - - - - - - - - - - - - - - -

Russian academic fakes his way to Nobel-level citation index, creates global plagiarism market for scientific papers
https://theins.ru/en/news/280722

EXCERPTS: Academics who publish such papers are often awarded grants from their home institutions, but Sechenov University likely did not object to the investment, as Bokov’s “work” effectively opened up a channel into top-tier journals. [...] Initially, investigators suspected Bokov was “publishing” this work solely in order to receive university grants. But the operation turned out to be far more expansive: after infiltrating the Iran-Iraq plagiarism network, Bokov appears to be building his own global “plagiarism exchange.”

- - - - - - - - - - - - - - - -

How an accomplished professor went from a chronicler of conspiracy theories to a character in one
https://statenews.com/article/2025/...rompts-lawsuit-with-cicada-3301-puzzle-leader

EXCERPT: Whether those theories swiftly unravel or firmly take root, the case certainly underlines a number of tensions for the growing league of academics who study online misinformation. At play are questions about how universities and academic journals can best handle contentious research about the often volatile world of the internet, and about what happens when scholars who intend only to study it find themselves entangled in its unearthly orbit. Dilley’s case represents a rather extreme example of that phenomenon. She claims her involvement began as a purely academic pursuit, but today, the professor is perhaps more of a character in the hyper-specific internet subculture than a chronicler of it...
_
 
Preprints serve the anti-science agenda – This is why we need peer review
https://scholarlykitchen.sspnet.org...ience-agenda-this-is-why-we-need-peer-review/

EXCERPTS: Proponents of this free-for-all style of scientific publishing argue that this is just how science should adapt to the ways knowledge moves through the internet – that the quick dissemination of new research is a modernization that is always inherently beneficial.

But science shouldn’t work like the rest of the internet. [...] This is not to say that peer review, in its current state, is what we need. As many people have pointed out to me, our traditional two-person peer review can also put a stamp of credibility on bad science.

But peer review has its flaws because it has been taken over by big publishing businesses that prioritize profits over accuracy, not because the idea of peer review is inherently flawed. The answer is to fix peer review so it meets the needs of modern science, not to scrap it entirely or pretend it can work ad hoc in the comments section of a preprint...

- - - - - - - - - - - - - - - -

Stronger ethical standards can turn the tide on retractions
https://www.universityworldnews.com/post.php?story=20250424070107596

INTRO (excerpts): The rising incidence of scientific paper retractions has become a serious concern in the global academic community. [...] Beyond identifying geographic regions and countries with a high concentration of retracted articles, research and available data on the subject have started revealing several notable findings and patterns.

A recent survey indicated that articles [...] focused on health and life sciences – as opposed to the social and physical sciences – are more likely to be retracted. Another striking pattern is the prevalence of this challenge in regions like Africa, where it had traditionally been less common.

Outside of South Africa, Egypt, and Nigeria, which are known for their relatively high publication output, article retractions in Africa have been limited or rare. However, this trend is gradually changing as research output from the continent continues to increase...

- - - - - - - - - - - - - - - -

The carbon footprint of science when it fails to self-correct
https://www.biorxiv.org/content/10.1101/2025.04.18.649468v1

ABSTRACT: Science is – in principle – self-correcting, but there is growing evidence that such self-correction can be slow, and that spurious findings continue to drive research activity that is no longer justified. Here we highlight the environmental impact of this failure to self-correct sufficiently rapidly.

We identified a non-fraudulent occurrence of irreproducible findings: the literature on the association between genetic variation in the serotonin transporter gene (5-HTTLPR) with anxiety and depression. An initial report in 1996 found evidence for an association, but a study as early as 2005 that was three orders of magnitude larger found no evidence for an association.

However, studies investigating this association continue to be published. We isolated 1,183 studies published between 1996 and 2024 that investigated the association and calculated an estimated carbon footprint of these studies. We estimate that the failure to self-correct had a footprint of approximately 30,068 tons of CO2 equivalent.

Our aim is to present a case study of the potential carbon footprint of research activity that is no longer justified, when a theory is disproven. We highlight the importance of integrating self-correction mechanisms within research, and embracing the need to discontinue unfruitful lines of enquiry.
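
A quick back-of-the-envelope calculation from the abstract's own numbers (not the authors' estimation method) gives the implied average footprint per study:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the abstract;
# this is not the authors' estimation method, just the implied per-study average.
n_studies = 1183         # 5-HTTLPR association studies published 1996-2024
total_tco2e = 30_068     # estimated total footprint reported in the abstract

per_study = total_tco2e / n_studies
print(f"~{per_study:.1f} tCO2e per study on average")   # ~25.4 tCO2e
```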

- - - - - - - - - - - - - - - -

Introducing the Journal of Robustness Reports
https://scipost.org/10.21468/JRobustRep.0-Editorial

ABSTRACT: The vast majority of empirical research articles report a single primary analysis outcome that is the result of a single analysis plan, executed by a single analysis team (usually the team that also designed the experiment and collected the data). However, recent many-analyst projects have demonstrated that different analysis teams generally adopt a unique approach and that there exists considerable variability in the associated conclusions.

There appears to be no single optimal statistical analysis plan, and different plausible plans need not lead to the same conclusion. A high variability in outcomes signals that the conclusions are relatively fragile and dependent on the specifics of the analysis plan.

Crucially, without multiple teams analyzing the data, it is difficult to gauge the extent to which the conclusions are robust. We have recently proposed that empirical articles of particular scientific interest or societal importance are accompanied by two or three short reports that summarize the results of alternative analyses conducted by independent experts [F. Bartoš et al., Nat. Hum. Behav. (2025)].

In order to showcase the practical feasibility and epistemic benefits of this approach we have founded the Journal of Robustness Reports, which is dedicated to publishing short reanalyses of empirical findings. This editorial describes the scope and the workflow of the Journal of Robustness Reports including the type and format of the published articles. We hope that the Journal of Robustness Reports will help make reanalyses of published findings the norm across the empirical sciences.
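
A toy illustration of the many-analyst variability the editorial describes, using invented data and three defensible analysis plans applied to the same dataset; the spread across plans is exactly the fragility a Robustness Report is meant to expose.

```python
# Toy illustration of many-analyst variability (invented data, not from the editorial):
# several defensible analysis plans applied to the same dataset give noticeably
# different estimates of the "treatment" effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                     # treatment indicator
age = rng.normal(40, 10, n) + 3 * group           # confounder correlated with treatment
outcome = 0.2 * group + 0.05 * age + rng.normal(0, 1, n)

def plan_raw(y, g, x):                            # Plan A: raw mean difference
    return y[g == 1].mean() - y[g == 0].mean()

def plan_adjusted(y, g, x):                       # Plan B: OLS adjusting for the covariate
    X = np.column_stack([np.ones_like(y), g, x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def plan_trimmed(y, g, x):                        # Plan C: drop outcomes beyond 2 SD first
    keep = np.abs(y - y.mean()) < 2 * y.std()
    return plan_raw(y[keep], g[keep], x[keep])

for name, plan in [("raw", plan_raw), ("adjusted", plan_adjusted), ("trimmed", plan_trimmed)]:
    print(f"{name:9s} effect estimate: {plan(outcome, group, age):+.3f}")
```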
_
 
A Ph.D. in paper mills?
https://retractionwatch.com/2025/05/01/phd-paper-mills-wiley-leiden-springer-nature/

A university and a publisher are teaming up to combat paper mills in a unique way: By enlisting a Ph.D. candidate...

- - - - - - - - - - - - - - - - - -

Why has it taken more than a year to correct a COVID-19 paper?
https://retractionwatch.com/2025/04...ter-covid-19-metformin-expression-of-concern/

A correction to a clinical trial on a potential treatment for COVID-19 has taken more than a year — and counting — to get published. In the meantime, the article remains marked with an expression of concern that appeared in February 2024...

- - - - - - - - - - - - - - - - - -

AI-Reddit study leader gets warning as ethics committee moves to ‘stricter review process’
https://retractionwatch.com/2025/04...ai-llm-reddit-changemyview-university-zurich/

The university ethics committee that reviewed a controversial study that deployed AI-generated posts on a Reddit forum made recommendations the researchers did not heed...

- - - - - - - - - - - - - - - - - -

Experiment using AI-generated posts on Reddit draws fire for ethics concerns
https://retractionwatch.com/2025/04...sts-on-reddit-draws-fire-for-ethics-concerns/

An experiment deploying AI-generated messages on a Reddit subforum has drawn criticism for, among other critiques, a lack of informed consent from unknowing participants in the community...

- - - - - - - - - - - - - - - - - -

University of Toronto should take action on flawed breast screening study
https://retractionwatch.com/2025/04...nto-canadian-national-breast-screening-study/

The Canadian National Breast Screening Study conducted in the 1980s and led by researchers at the University of Toronto evaluated the efficacy of breast cancer screening in reducing mortality from breast cancer. Because the research was supposedly a “gold standard” randomized controlled trial, its results, published in academic journals and reported in the media, have influenced public perceptions and informed policy on mammography screening in several countries. However, over the past decades, flaws in this study have come to light...
_
 
Research integrity is a clown car: The anatomy of an utter failure of academic governance
https://jamesclaims.substack.com/p/research-integrity-is-a-clown-car

EXCERPTS: I am very, very tired of critical scientists being polite about the yards of pigshit they have to wade through to get anything done. [...] I see no point in trying to access formal research integrity systems any more. They don’t work. I am making other plans.

[...] I have seen my fair share of academic malpractice [...] But what I’m about to outline below is new, at least, it is new to me. I did not think a real journal, run by actual adults, could fail this hard. [...] Let me work through the complete anatomy of what happened here, so you too can appreciate the grandeur, the single biggest parade of incompetence that I have ever seen in one place...

- - - - - - - - - - - - - -

We need to focus on doing better science, not filling quotas
https://heterodoxacademy.substack.com/p/as-the-doj-questions-journals-how

EXCERPT: But Martin’s probing on whether “competing viewpoints” are being adequately published in CHEST Journal or other science journals—while inappropriate as questions from the government—does raise a question worth asking about academic publishing: How do we ensure that journals are truly open to a wide variety of expert inquiry on the topics on which they publish?

Or, as Jonathan Haidt asked when he founded HxA, “Can research that emerges from an ideologically uniform and orthodox academy be as good, useful, and reliable as research that emerges from a more heterodox academy?”

- - - - - - - - - - - - - -

Scientific integrity under threat: The role of the IDSA, PIDS, and SHEA journals in an evolving political landscape
https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciaf136/8120687?login=false

INTRO: The landscape of scientific publishing, education, and healthcare research is facing unprecedented challenges as political decisions increasingly encroach on academic freedom, data accessibility, and global health priorities.

Recent policy changes in the United States, including the removal of key educational materials from government websites, funding freezes on global health initiatives such as PEPFAR, and restrictions on language use in scientific communications, present a significant threat to the integrity of scientific discourse.

The implications of these policies extend beyond research institutions—they have real and lasting consequences for healthcare equity, evidence-based policy making, and the ability to address infectious diseases worldwide...

- - - - - - - - - - - - - -

The politicization of retraction
https://www.tandfonline.com/doi/full/10.1080/08989621.2025.2498428

ABSTRACT: The retraction of flawed scientific journal articles is one of the most important means by which science “self-corrects.” The prevailing consensus is that retraction is appropriate only when the reported findings are unreliable due to research misconduct or honest errors, ethical violations have occurred, or there are legal concerns about the article.

Recently, however, retractions seem to be occurring for political reasons. This trend is exemplified by recent editorial guidance from Nature Human Behaviour, which advises the retraction of works that risk significant harm to members of certain social groups.

This commentary argues that while “political” retractions may be appropriate in rare cases, retraction is typically not the best means to address potentially harmful research.

The politicization of retraction risks harm to science in general as it may further undermine diminishing public trust in science and may encourage scientists to self-censor their work, leading to the under-exploration of some important scientific issues...

Unfortunately, no free access to this paper.
_
 
A scary plastic study should probably be recycled
https://www.acsh.org/news/2025/05/04/scary-plastic-study-should-probably-be-recycled-49453

Lately, the press has feasted on a new Lancet article that concludes that about 350,000 of you are going to die yearly from heart disease brought about by long-term ingestion of di-2-ethylhexylphthalate (DEHP), a chemical used to soften plastics. The good news is that the study's data are hardly convincing. Why? We need to look at the good and the bad – the numbers behind the study and how they were used...

- - - - - - - - - - - - - - -

The race question
https://theness.com/neurologicablog/the-race-question/

As a scientific concept – does race exist? Is it a useful construct, or is it more misleading than useful? I wrote about this question in 2016, and my thinking has evolved a bit since then. My bottom line conclusion has not changed – the answer is, it depends. There is no fully objective answer because this is ultimately a matter of categorization which involves arbitrary choices, such as how to weight different features, how much difference is meaningful, and where to draw lines...

- - - - - - - - - - - - - - -

‘It’s been a tough period’: NIH’s new director speaks with Science
https://www.science.org/content/article/it-s-been-tough-period-nih-s-new-director-speaks-science

INTRO (excerpts): When Jayanta “Jay” Bhattacharya took the helm of the National Institutes of Health (NIH) on 1 April, the agency was in turmoil. [...] Last week saw no letup for the former Stanford University health economist: NIH and HHS were finalizing a new policy on foreign research funding, preparing for a big announcement on plans to develop universal flu vaccines, and prepping for the release of the president’s 2026 budget proposal, which seeks to slash NIH by about 40%.

But the day before that budget was released, Bhattacharya sat down for an interview with this Science reporter. He was joined by NIH Chief of Staff Seana Cranston, a former Congressional staffer who replaced John Burklow, a 4-decade NIH communications veteran. The encounter was brief, sometimes confrontational, and even personal.

[...] Below are some excerpts from before our conversation was cut short, edited for brevity and clarity... (MORE - details)
_
 
Can Germany rein in its academic bullying problem?
https://www.nature.com/articles/d41586-025-01207-8

INTRO (excerpts): At one of Germany’s top-funded universities, a high-profile biology researcher has bullied his large group of junior staff, targeting women and international students, for decades.

[...] The university is aware of his behaviour. Yet, no investigation has been conducted nor any sanctions imposed because the complainants requested anonymity, fearing retribution — anonymity that cannot be upheld in an investigation owing to German labour law. This, along with defamation and other laws in Germany, can make it difficult to report on these incidents or to impose sanctions against professors, most of whom are tenured civil servants.

Germany is not the only country to have a problem with bullying in academia, but a combination of structural and legal powers that, in effect, protect tenured professors over early-career researchers, enables and emboldens abusers. “It’s considered culturally normal — this is just what science is like,” says Daniel Leising... (MORE - details)

- - - - - - - - - - - - - - -

6 genetic myths still taught in schools (that science says are wrong)
https://www.zmescience.com/science/...aught-in-schools-that-science-says-are-wrong/

EXCERPT: There’s no shame in teaching simplified models. But we need to mark them clearly as models, not truths. Instead of asking whether a trait is genetic, we should ask how it’s genetic. What genes are involved? How strong is the effect? What else plays a role? This shift—from black-and-white to shades of gray—isn’t just more accurate. It’s more interesting. It’s closer to how biology actually works... (MORE - details)

COVERED: Tongue rolling? Not genetic ..... Attached earlobes ..... Eye color ..... Widow’s Peak ..... Hand-Clasping Preference ..... The “Language Gene”

- - - - - - - - - - - - - - -

Reporting and representation of race and ethnicity in clinical trials of pharmacotherapy for mental disorders
https://jamanetwork.com/journals/ja...ign=ftm_links&utm_content=tfl&utm_term=050725

EXCERPTS: This meta-analysis based on data from 1683 RCTs ... found significant underreporting of race and ethnicity and underrepresentation of specific racial and ethnic groups, particularly in geographic locations other than the US and in small studies in some continents. The findings of the study suggest that significant gaps in reporting race and ethnicity in RCTs of pharmacotherapies for mental disorders call for collaborative efforts among stakeholders and policymakers to develop international guidelines that promote equitable recruitment in clinical trials... (PRESS RELEASE)
_
 
Web of Science delists bioengineering journal in wake of paper mill cleanup
https://retractionwatch.com/2025/05...vate-delist-bioengineered-paper-mill-cleanup/

Bioengineered has lost its spot in Clarivate’s Web of Science index, as of its April update. The journal has been working to overcome a flood of paper mill activity, but sleuths have questioned why hundreds of papers with potentially manipulated images have still not been retracted...

- - - - - - - - - - - - - - -

Journal collected $400,000 from papers it later retracted
https://retractionwatch.com/2025/05...y-systems-jifs-sage-retracted-papers-revenue/

A Sage journal that holds the distinction of highest number of retracted articles in the Retraction Watch Database likely made in excess of $400,000 in revenue from those papers, by our calculations...

- - - - - - - - - - - - - - -

‘Now is not the time to fade’: Retraction Watch awarded Council of Science Editors’ highest honor
https://retractionwatch.com/2025/05...ded-council-of-science-editors-highest-honor/

Retraction Watch has been honored with the Council of Science Editors’ highest honor: The 2025 Award for Meritorious Achievement....

- - - - - - - - - - - - - - -

A ‘stupid mistake’: EPA researcher added their underage child as an author on manuscript
https://retractionwatch.com/2025/05/06/epa-researcher-child-author-inspector-general-report/

A researcher at the Environmental Protection Agency added their underage child as a coauthor on a paper after the manuscript cleared the agency’s internal review, an investigation found...
_
 
Chinese research isn’t taken as seriously as papers from elsewhere – my new study
https://theconversation.com/chinese...-as-papers-from-elsewhere-my-new-study-255794

My new research suggests there is a stubborn pattern in academic publishing. My co-author and I examined some 8,000 articles published in the world’s most reputable economics journals to study citations, that is, instances in which academics cite previously published research in their papers. We found papers whose lead author had a Chinese surname received on average 14% fewer citations than comparable papers written by those with a non-Chinese name...
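
For context on where a figure like "14% fewer citations" typically comes from, here is a hedged sketch of a log-link citation count regression with an indicator for the lead author's surname group; the variables, controls, and data below are invented, and the actual study's specification may well differ.

```python
# Hedged sketch of a citation-gap regression: a Poisson (log-link) count model
# with an indicator for the lead author's surname group plus a control.
# All variables and data are invented; exp(coefficient) - 1 gives the percentage gap.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "chinese_surname": rng.integers(0, 2, n),
    "journal_rank": rng.normal(0, 1, n),
})
# Simulate citation counts with a built-in 14% penalty: exp(-0.151) ~ 0.86.
mu = np.exp(2.0 + 0.4 * df.journal_rank - 0.151 * df.chinese_surname)
df["citations"] = rng.poisson(mu)

X = sm.add_constant(df[["chinese_surname", "journal_rank"]])
model = sm.GLM(df["citations"], X, family=sm.families.Poisson()).fit()
coef = model.params["chinese_surname"]
print(f"estimated citation gap: {(1 - np.exp(coef)):.0%} fewer citations")
```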
_
 
‘Publish or perish’ culture fuelling research misconduct in India
https://nenews.in/articles/publish-or-perish-culture-fuelling-research-misconduct-in-india/24760/

Surge in research misconduct, ranging from plagiarism to data falsification, is posing a serious challenge to India’s academic credibility...

- - - - - - - - - - - - - - - -

Second chance: convicted US chemist Charles Lieber moves to Chinese university
https://www.nature.com/articles/d41586-025-01410-7

Former Harvard scientist convicted of making false statements says he wants to do research that benefits humanity — and cannot do that in the United States...

- - - - - - - - - - - - - - - -

P hacking — Five ways it could happen to you
https://www.nature.com/articles/d41586-025-01246-1

Most researchers don’t set out to cheat, but they could unknowingly make choices that push them towards a significant result. Here are five ways P hacking can slip into your research...
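
A minimal simulation of one of the routes the article warns about (measuring several outcomes and reporting whichever crosses p < 0.05) shows how easily the nominal 5% false-positive rate inflates. The numbers below are simulated, not taken from the article.

```python
# Simulated illustration of one P-hacking route: under a true null, testing many
# outcomes and keeping the best p-value yields "significance" far more than 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_outcomes, n_sims = 30, 10, 2000

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=(n_per_group, n_outcomes))   # control group, no real effect
    b = rng.normal(size=(n_per_group, n_outcomes))   # "treatment" group, no real effect
    pvals = stats.ttest_ind(a, b, axis=0).pvalue
    false_positives += (pvals.min() < 0.05)

print(f"At least one 'significant' outcome in {false_positives / n_sims:.0%} of null studies")
# Typically ~40% with 10 outcomes, versus the nominal 5% for one pre-specified test.
```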

- - - - - - - - - - - - - - - -

Two gynecologists punished for research misconduct
https://www.chinadaily.com.cn/a/202505/07/WS681b51eaa310a04af22bdf67.html

Two gynecologists from Fujian Provincial People's Hospital have been punished for research misconduct related to a controversial paper on endometriosis [...] Endometriosis usually affects females, especially those of reproductive age, yet in the paper, 64 of the 100 collected patient samples were male, sparking heated discussion. An investigation by the hospital confirmed the research misconduct, leading to the disciplinary actions against the authors...
_
 
The do’s and don’ts of scientific image editing (only partially free access)
https://www.nature.com/articles/d41586-025-01299-2

Acceptable image-editing practices are partly a matter of common sense. But researchers say journals and funders could help scientists by standardizing policies...

- - - - - - - - - - - - - - - - - -

Google Scholar is (still) doing nothing about citation manipulation
https://reeserichardson.blog/2025/0...ll-doing-nothing-about-citation-manipulation/

INTRO: Almost one year on from Larry Richardson becoming Google Scholar’s highest cited cat, a year and change after Ibrahim et al. released a pre-print (now published) showing that Google Scholar is eminently manipulable and a full 15 years after fictitious personality Ike Antkare became one of the most highly-cited scientists of all time, Google Scholar still appears to have done absolutely nothing to curb citation manipulation on its platform. This is especially concerning given that Scholar is probably the single most commonly-used source for citation metrics used in hiring decisions (according to Ibrahim et al.). Here, I’ll detail a recent citation-selling scheme that exploits Scholar’s longstanding vulnerabilities... (MORE - details)

- - - - - - - - - - - - - - - - - -

Extraordinarily corrupt or statistically commonplace? Reproducibility crises may stem from a lack of understanding of outcome probabilities
https://peerj.com/articles/18972/

ABSTRACT: Reports of crises of reproducibility have abounded in the scientific and popular press, and are often attributed to questionable research practices, lack of rigor in protocols, or fraud. On the other hand, it is a known fact that—just like observations in a single biological experiment—outcomes of biological replicates will vary; nevertheless, that variability is rarely assessed formally.

Here I argue that some instances of failure to replicate experiments are in fact failures to properly describe the structure of variance. I formalize a hierarchy of distributions that represent the system-level and experiment-level effects, and correctly account for the between- and within-experiment variances, respectively. I also show that this formulation is straightforward to implement and generalize through Bayesian hierarchical models, although it doesn’t preclude the use of Frequentist models.

One of the main results of this approach is that a set of repetitions of an experiment, instead of being described by an irreconcilable string of significant/nonsignificant results, is described and consolidated as a system-level distribution. As a corollary, stronger statements about a system can only be made by analyzing a number of replicates, so I argue that scientists should refrain from making them based on individual experiments. (MORE - details)
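
A toy version of the two-level structure the abstract describes (my own sketch, not the paper's code): experiment-level means are drawn around a system-level mean, so individual experiments flip between "significant" and "not significant" while the collection of replicate means estimates the system-level distribution.

```python
# Sketch of a two-level (system/experiment) structure with invented parameters:
# each replicate experiment has its own mean drawn around a system-level mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
system_mean, between_sd, within_sd = 0.3, 0.4, 1.0
n_experiments, n_obs = 8, 25

exp_means = rng.normal(system_mean, between_sd, n_experiments)      # experiment-level effects
replicates = [rng.normal(m, within_sd, n_obs) for m in exp_means]   # within-experiment data

for i, data in enumerate(replicates):
    p = stats.ttest_1samp(data, 0).pvalue                           # single-experiment verdict
    print(f"experiment {i}: mean={data.mean():+.2f}, p={p:.3f}")

# System-level summary: treat the replicate means themselves as the data.
means = np.array([d.mean() for d in replicates])
print(f"system-level estimate: {means.mean():+.2f} "
      f"(between-experiment SD ~ {means.std(ddof=1):.2f})")
```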

- - - - - - - - - - - - - - - - - -

From 2015 to 2023, eight years of empirical research on research integrity: a scoping review
https://link.springer.com/article/10.1186/s41073-025-00163-1

- ABSTRACT -

Background. Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.

Method. We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed the following seven broad questions about the field:
  • What topics does the empirical literature on RI explore?
  • What are the primary objectives of the empirical literature on RI?
  • What methodologies are prevalent in the empirical literature on RI?
  • What populations or organizations are studied in the empirical literature on RI?
  • Where are the empirical studies on RI conducted?
  • Where is the empirical literature on RI published?
  • To what degree is the general literature on RI grounded in empirical research?
Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.

Results. Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51 to 30%, whereas the application of meta-scientific methods increased from 17 to 31%. In terms of theoretical orientation, the previously dominant “Bad Apple” hypothesis declined from 54 to 30%, while the “Wicked System” hypothesis increased from 46 to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31 to 56% at the expense of merely describing the problem, which fell from 69 to 44%.

Conclusion. Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power, policymakers, accounting for 3%), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic. (MORE - details)
_
 
'This should not be published': Scientists cast doubt on study claiming trees 'talk' before solar eclipses
https://www.livescience.com/planet-...udy-claiming-trees-talk-before-solar-eclipses

EXCERPTS: So, what exactly is going on in this work published April 30 in Royal Society Open Science, and how seriously should we take it?

[...] "It's disappointing that this paper is getting so much press because it's just an idea and there's not much here other than assertion," said Cahill. "This could have been replicated, it should be replicated. There's no understanding of why they are focusing on electrical signals instead of the photosynthetic rate. They also didn't compare this to just night and day, which is the obvious thing to do and that's very worrisome to me." (MORE - details)

- - - - - - - - - - - - - - - -

Study debunks 5G health conspiracy theory (again)
https://www.popsci.com/health/5g-conspiracy-theory-debunk/

EXCERPT: Studies have repeatedly debunked these claims in the past, but researchers at Germany’s Constructor University recently decided to provide more evidence that our smartphones aren’t secretly poisoning us. What’s more, their study published in the May issue of the journal PNAS Nexus showcases just how innocuous 5G’s electromagnetic fields really are... (MORE - details)
_
 
Paper with duplicated images retracted four months after concerns were raised
https://retractionwatch.com/2025/05/15/kaohsiung-journal-medical-sciences-wiley-retraction/

We write plenty of stories about lengthy investigations and long wait times for retractions. So we are always glad when we can highlight when journals act in a relatively timely fashion...

- - - - - - - - - - - - - -

Clarivate to stop counting citations to retracted articles in journals’ impact factors
https://retractionwatch.com/2025/05...act-factor-retracted-articles-web-of-science/

The change comes after some have wondered over the years whether citations to retracted papers should count toward a journal’s impact factor, a controversial yet closely watched metric that measures how often others cite papers from that journal. For many institutions, impact factors have become a proxy for the importance of their faculty’s research...
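
Mechanically, the two-year impact factor is just a ratio, so the change amounts to dropping citations whose cited article is flagged as retracted from the numerator. Here is a sketch, with the assumption (my reading of the announcement, not a confirmed detail) that retracted items still count among the citable items in the denominator.

```python
# Sketch of a two-year impact factor with citations to retracted articles dropped
# from the numerator; keeping retracted items in the denominator is an assumption.
def impact_factor(citations, citable_items, retracted_ids):
    """
    citations: list of (citing_year, cited_article_id) pairs counted for year Y
    citable_items: dict of article_id -> publication_year for items from Y-1 and Y-2
    retracted_ids: set of article ids flagged as retracted
    """
    numerator = sum(1 for _, cited in citations
                    if cited in citable_items and cited not in retracted_ids)
    denominator = len(citable_items)
    return numerator / denominator if denominator else 0.0

# Toy journal: 4 citable items, 6 citations, 2 of which point to a retracted paper.
items = {"a1": 2023, "a2": 2023, "a3": 2024, "a4": 2024}
cites = [(2025, "a1"), (2025, "a1"), (2025, "a2"), (2025, "a3"), (2025, "a4"), (2025, "a4")]
print(impact_factor(cites, items, retracted_ids=set()))    # 1.5 under the old counting
print(impact_factor(cites, items, retracted_ids={"a4"}))   # 1.0 once a4's citations are excluded
```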

- - - - - - - - - - - - - -

Dozens of Elsevier papers retracted over fake companies and suspicious authorship changes
https://retractionwatch.com/2025/05...-companies-and-suspicious-authorship-changes/

Since March of last year, Elsevier has pulled around 60 papers connected to companies in the Caucasus region that don’t seem to exist. The retraction notices attribute the decision to suspicious changes in authorship and the authors being unable to verify the existence of their employers. Online sleuths have also flagged potentially manipulated citations among the articles...

- - - - - - - - - - - - - -

How do retractions impact researchers’ career paths and collaborations?
https://retractionwatch.com/2025/05...-researchers-career-paths-and-collaborations/

Three researchers from New York University’s campus in Abu Dhabi wanted to better understand how a retraction affects a scientist’s career trajectory and future collaborations....
_
 
From ‘publish or perish’ to ‘be visible or vanish’: What’s next?
https://www.malaymail.com/news/what...ish-whats-next-mohammad-tariqur-rahman/175983

EXCERPT: Arguably, the race to increase the number of papers has resulted in a range of scientific misconduct, including, but not limited to, unethical practices in authorship assignment (e.g., guest and honorary authorship), the emergence of paper mills, and the publishing of unauthenticated or manipulated results.

The trend of scientific misconduct has been condemned, yet no practical measures have been taken either to control or to decrease it. Rather, the increasing number of retracted papers every year attests to the ongoing “pandemic” of scientific misconduct. Will the new dictum “be visible or vanish” then add to the pandemic? (MORE - details)

- - - - - - - - - - - - - - -

Is it OK for AI to write science papers? Nature survey shows researchers are split
https://www.nature.com/articles/d41586-025-01463-8

EXCERPT: The survey results suggest that researchers are sharply divided on what they feel are appropriate practices. Whereas academics generally feel it’s acceptable to use AI chatbots to help to prepare manuscripts, relatively few report actually using AI for this purpose — and those who did often say they didn’t disclose it.

Past surveys reveal that researchers also use generative AI tools to help them with coding, to brainstorm research ideas and for a host of other tasks. In some cases, most in the academic community already agree that such applications are either appropriate or, as in the case of generating AI images, unacceptable. Nature’s latest poll focused on writing and reviewing manuscripts — areas in which the ethics aren’t as clear-cut... (MORE - details)

- - - - - - - - - - - - - - -

Is the list of Highly Cited Researchers losing credibility?
https://blogs.lse.ac.uk/impactofsoc...-highly-cited-researchers-losing-credibility/

INTRO: For over two decades, the Highly Cited Researchers list has spotlighted global scientific influence. But behind its annual release lies a shifting story, which includes evolving methods, changing ownership and growing misuse. Lauranne Chaignon traces the list’s transformation from a research tool to a high-stakes benchmark, raising questions about its continued role in academic evaluation... (MORE - details)

- - - - - - - - - - - - - - -

More science friction for less science fiction
https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3003167

INTRO: AI-ready health datasets can be exploited to generate many research articles with potentially limited scientific value. A study in PLOS Biology highlights this problem, by describing a recent, sudden explosion in papers analyzing the NHANES health dataset... (MORE - details)
_
 
Hmm, weird that most of them are choosing to not disclose the use of AI when assisting with preparing manuscripts, etc. Are they afraid of being considered a fraud? There should be legal and financial consequences for that though, imo. (Not the use of AI tools in general, but not disclosing it.)
 

Perhaps they're primarily not disclosing it because they're worried about how the future might retroactively judge what they're publishing now. AI use is newborn and unstable territory, where the rules of what is acceptable in science publishing haven't been fully worked out or universally agreed upon. It's unpredictable: maybe tomorrow will be lenient toward current work with AI involvement, or maybe it will be more wrathful.

Since either Generation Alpha or the one after it will have their writing skills heavily compromised by growing up with deep reliance on AI, there's no dodging that younger research teams will be increasingly dependent on AI to describe what they did in their studies (along with its graphics). So at some point the establishment will probably have to bite the bullet and lower its standards for approving AI-assisted or AI-generated science literature.
_
 
Silencing the CDC
https://pauloffit.substack.com/p/silencing-the-cdc

EXCERPTS: Recently, researchers at the Centers for Disease Control and Prevention (CDC) published a study examining the impact of these two strategies to prevent RSV. [...] With wider use, hospitalizations will continue to decrease, and the infant mortality rate will likely continue to drop. So why haven’t we heard about this? Why didn’t this story dominate the news when the results appeared in a medical journal? [...] RFK Jr. ushered in his administration with the phrase “radical transparency.” [...] This transparency doesn’t apparently extend to information that counters his fixed, immutable, science-resistant belief that “no vaccines are safe and effective.” (MORE - details)

- - - - - - - - - - - - - -

Claims that “the Universe will end sooner than expected” are false
https://bigthink.com/starts-with-a-bang/universe-end-sooner-expected-false/

KEY POINTS: Back in 2023, a team of researchers put forth the bold suggestion that Hawking radiation, which causes black hole evaporation, might emerge from all massive objects, not just black holes. If this were true, then all massive objects would slowly lose energy (and, therefore, mass), and as a result everything would eventually decay. Ultimately, the Universe itself would even end. Rather than talk about what could be true if this scenario holds, we can look at the fundamental issues underlying the idea, not just an approximation. It turns out not to be physically possible... (MORE - details)
_
 
Researchers to pull duplicate submission after reviewer concerns and Retraction Watch inquiry
https://retractionwatch.com/2025/05...ssion-peer-reviewer-retraction-watch-inquiry/

Based on the published paper and documents shared with us, it appears the authors submitted the same manuscript to the journals Applied Sciences and Virtual Reality within 11 days of each other, and withdrew one version when the other was published....

- - - - - - - - - - - - - - -

Correction finally issued seven years after authors promise fix ‘as soon as possible’
https://retractionwatch.com/2025/05/20/neuron-correction-seven-years-cell-press/

A journal has finally issued a correction following a seven-year-old exchange on PubPeer in which the authors promised to fix issues “as soon as possible.” But after following up with the authors and the journal, it’s still not clear where the delay occurred...

- - - - - - - - - - - - - - -

Can a better ID system for authors, reviewers and editors reduce fraud? STM thinks so
https://retractionwatch.com/2025/05...ication-framework-stm-author-editor-reviewer/

Unverifiable researchers are a harbinger of paper mill activity. [...] The International Association of Scientific, Technical, & Medical Publishers (STM) has taken a stab at developing a framework for journals and institutions to validate researcher identity, with its Research Identity Verification Framework, released in March. [...] But how this will be implemented and standardized remains to be seen. We spoke with Hylke Koers, the chief information officer for STM and one of the architects of the proposal. The questions and answers have been edited for brevity and clarity...

- - - - - - - - - - - - - - -

Is Donald Trump to blame for a COVID lab leak?
https://reason.com/2025/05/22/is-donald-trump-to-blame-for-a-covid-lab-leak/

Both writers are trivially correct that the Obama administration implemented a pause on gain-of-function research and the first Trump administration lifted it. Yet both writers' implied point—that Trump's newfound hawkishness on gain-of-function research is belated and hypocritical—misses a few key facts...

- - - - - - - - - - - - - - -

How Elon Musk unleashed chaos inside the NIH
https://www.thefp.com/p/how-elon-musk-unleashed-chaos-inside-nih

While Musk blew into Washington with big aspirations for cost-cutting, his unconventional tactics have increasingly clashed with reality and faced pushback. As Musk winds down his time at DOGE, he has vowed to spend less time in Washington. But none of the staffers he installed at DOGE, including many young software engineers with no prior government experience, seem to be following Musk out the door just yet...
_
 
Can we trust social science yet?
https://asteriskmag.com/issues/10/can-we-trust-social-science-yet

EXCERPTS: Everyone likes the idea of evidence-based policy, but [...] Given the current state of evidence production in the social sciences, I believe that many — perhaps most — attempts to use social scientific evidence to inform policy will not lead to better outcomes. This is not because of politics or the challenges of scaling small programs. The problem is more immediate. Much of social science research is of poor quality, and sorting the trustworthy work from bad work is difficult, costly, and time-consuming.

But it is necessary. If you were to randomly select an empirical paper published in the past decade — including any studies from the top journals in political science or economics — there is a high chance that its findings may be inaccurate. And not just off by a little: possibly two times as large, or even incorrectly signed. As an academic, this bothers me. I think it should bother you, too. So let me explain why this happens... (MORE - details)
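
One way to see how published estimates end up "two times as large, or even incorrectly signed": simulate underpowered studies of a small true effect and look only at the ones that reach p < 0.05. The numbers below are invented for illustration and are not drawn from the essay.

```python
# Simulation of significance-filter inflation: with a small true effect and low power,
# the subset of studies reaching p < 0.05 overstates the effect and sometimes flips its sign
# ("Type M" and "Type S" errors). Parameters are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, n, n_studies = 0.15, 50, 5000

estimates, significant = [], []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    estimates.append(treat.mean() - control.mean())
    significant.append(stats.ttest_ind(treat, control).pvalue < 0.05)

estimates, significant = np.array(estimates), np.array(significant)
pub = estimates[significant]                       # what a significance filter would publish
print(f"power: {significant.mean():.0%}")
print(f"mean published estimate: {pub.mean():.2f} vs true effect {true_effect}")
print(f"published estimates with the wrong sign: {(pub < 0).mean():.1%}")
```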
_
 