Experts fail to reliably detect AI-generated histological data
https://www.nature.com/articles/s41598-024-73913-8
ABSTRACT: AI-based methods to generate images have seen unprecedented advances in recent years challenging both image forensic and human perceptual capabilities. Accordingly, these methods are expected to play an increasingly important role in the fraudulent fabrication of data. This includes images with complicated intrinsic structures such as histological tissue samples, which are harder to forge manually. Here, we use stable diffusion, one of the most recent generative algorithms, to create such a set of artificial histological samples. In a large study with over 800 participants, we study the ability of human subjects to discriminate between these artificial and genuine histological images. Although they perform better than naive participants, we find that even experts fail to reliably identify fabricated data. While participant performance depends on the amount of training data used, even low quantities are sufficient to create convincing images, necessitating methods and policies to detect fabricated data in scientific publications.
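[For context on how little tooling such fabrication requires: the abstract does not spell out the authors' exact pipeline, but generating a synthetic image with an off-the-shelf Stable Diffusion checkpoint is a few lines of Python with the Hugging Face diffusers library. A minimal sketch follows; the checkpoint name and prompt are illustrative assumptions, not the paper's actual setup, and the authors additionally fine-tuned on genuine histological images before sampling.]

# Minimal sketch (assumptions noted above): sampling one synthetic
# histology-style image from a pretrained Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; the paper fine-tuned on real tissue images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Hypothetical prompt describing the target tissue appearance.
prompt = "H&E stained histological tissue section, light microscopy"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("synthetic_histology.png")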
- - - - - - - - - - - - - - - -
The paper mills helping China commit scientific fraud
https://www.spectator.co.uk/article/the-paper-mills-helping-china-commit-scientific-fraud/
EXCERPTS: This week, the individual prize went to Elisabeth Bik, not a conventional boffin, but a sleuth – a dogged Dutch researcher who abandoned a career at a biomedical start-up for one exposing scientific fraud. [...] She investigates China-based ‘paper mills’ – outfits that churn out fake academic papers to order, manipulating and reusing sections of the same images and passing them off as original research...
[...] On the surface at least, China is well on the way to achieving President Xi Jinping’s goal of becoming a scientific superpower. Since 2017, it has published more scientific papers per year than any other country, while also leading the world in the number of citations, usually regarded as a measure of a paper’s impact. The problem is that much of this research is decidedly dodgy. In its rush for global dominance, the Chinese Communist party (CCP) has sacrificed quality for quantity, enabling large-scale fraud which threatens to undermine trust in the entire process of scientific publication... (MORE - details)
- - - - - - - - - - - - - - - -
Stanford professor, paid $600/hr for his expertise, accused of using ChatGPT
https://www.sfgate.com/tech/article/stanford-professor-lying-and-technology-19937258.php
EXCERPT: Hancock cited 15 references in his declaration, mostly research papers related to political deepfakes and their impacts. Two of the 15 sources do not appear to exist. The journals he cites are real, as are some of the two citations’ authors, but journal archives show no sign of either paper.
[...] “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” Bednarz wrote. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever.” (MORE - details)
_