
AI-detection startup GPTZero scanned all 4,841 papers accepted by the prestigious Conference on Neural Information Processing Systems (NeurIPS), which took place last month in San Diego. The company found 100 citations across 51 papers that it determined to be hallucinated, GPTZero told TechCrunch.
Having a paper accepted at NeurIPS is a major résumé line in the world of AI. And given that these authors are leaders in AI research, it stands to reason they would turn to LLMs for tedious tasks like writing up citations.
One caveat up front: 100 hallucinated citations across 51 papers is not a large number. Each paper contains many citations, so out of the tens of thousands of references checked, this is, statistically speaking, close to zero.
It is also important to note that hallucinated citations do not, by themselves, invalidate a paper's research. As NeurIPS told Fortune, which was the first to report on GPTZero's findings, "Even if 1.1% of the papers have one or more incorrect references due to the use of LLMs, the content of those papers (is) unlikely to be worthless."
But having said all that, a fake citation is still a fake citation. NeurIPS prides itself on being a premier venue for machine learning and artificial intelligence research, and each accepted paper is vetted by multiple peer reviewers who are supposed to catch such problems.
Citations are a form of currency for researchers. They are used as a performance metric to show how a researcher's work is influencing their peers. When AI fabricates them, it debases that currency.
No one can fault peer reviewers for missing a handful of AI-generated citations given the enormous volume involved. GPTZero is quick to acknowledge as much. Its goal was to shed light on how AI feeds the "submission tsunami" that "has strained the review pipelines of these conferences to the breaking point," the startup says in its report. GPTZero also points to a May 2025 paper titled "The Challenge of AI Conference Peer Review," which discusses the problem at the premier conferences, including NeurIPS.
Still, why didn't the researchers themselves catch the LLM's fabrications? Surely they should know the exact list of papers they drew on in their work.
Which leaves one big, unsettling question: if the world's leading AI experts, with their reputations on the line, can't verify that their LLM output is accurate in its details, what does that mean for the rest of us?