Psychology is in crisis. This scientist’s striking confession explains how we got here.

“We shook the data a bit more until something slightly more newsworthy fell out of it.”

The field of psychology is in the middle of a painful, deeply humbling period of introspection. Long-held psychological theories are failing replication tests, forcing researchers to question the strength of their methods and the institutions that underlie them.

Take Joseph Hilgard, a psychologist at the University of Pennsylvania.

In a recent blog post titled “I was wrong,” he fesses up to adding a shoddy conclusion to the psychological literature (with the help of colleagues) while he was a graduate student at the University of Missouri. “[W]e ran a study, and the study told us nothing was going on,” he writes. “We shook the data a bit more until something slightly more newsworthy fell out of it.”

This is a bold and honest move, the type that gives me reason to be optimistic about the future of the science. He's confessing to a practice called p-hacking: cherry-picking data after an experiment is run in order to find a significant, publishable result. While p-hacking has been commonplace in psychology, researchers are now reckoning with the fact that it greatly increases the chances that their journals are filled with false positives. It's p-hacks like the one Hilgard and his colleagues used that gave weight to a theory called ego depletion, the very foundation of which is now being called into question.
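To see why p-hacking is so corrosive, it helps to watch it work on pure noise. The simulation below is a minimal sketch, not a reconstruction of Hilgard's actual analysis: there is no true effect at all, yet re-testing arbitrary subgroups of the same data and keeping the best-looking p-value inflates the false-positive rate well past the nominal 5 percent.

```python
import random
from statistics import NormalDist, mean, stdev

def p_value(a, b):
    # Two-sample z-test on the difference of means (normal approximation).
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def one_study(rng, n=40, peeks=10):
    # Null experiment: both groups are drawn from the same distribution,
    # so any "significant" result is a false positive by construction.
    treat = [rng.gauss(0, 1) for _ in range(n)]
    ctrl = [rng.gauss(0, 1) for _ in range(n)]
    honest = p_value(treat, ctrl)  # the single pre-planned test

    # The "hack": slice the sample by arbitrary after-the-fact moderators
    # (modeled here as random subgroups) and keep the smallest p-value.
    hacked = honest
    for _ in range(peeks):
        hacked = min(hacked, p_value(rng.sample(treat, n // 2),
                                     rng.sample(ctrl, n // 2)))
    return honest, hacked

rng = random.Random(42)
results = [one_study(rng) for _ in range(2000)]
honest_fp = sum(h < 0.05 for h, _ in results) / len(results)
hacked_fp = sum(h < 0.05 for _, h in results) / len(results)

# The honest rate stays near the nominal 5%; the hacked rate balloons.
print(f"false-positive rate, one pre-planned test:   {honest_fp:.1%}")
print(f"false-positive rate, after subgroup fishing: {hacked_fp:.1%}")
```

The subgroup splits here are random for simplicity; in a real p-hack the splits are plausible-sounding covariates, like a player's experience level, which is exactly what makes the resulting "finding" so easy to rationalize in print.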

Ego depletion is the theory that when a task demands a lot of mental energy, like resisting temptations or regulating emotions, it draws down a finite store of willpower and dulls our mental edge. A forthcoming paper in Perspectives on Psychological Science finds no evidence of ego depletion in a trial of more than 2,000 participants across a couple dozen labs.

The study from Hilgard and his colleagues was a spin on a classic ego-depletion experiment. In their test, participants were assigned to play video games of varying levels of violence and difficulty, and then later took a brain teaser to test how much of their willpower had been sapped. The researchers wanted to find out if it was the game’s violent content that led to a decrease in willpower or if it was the game’s difficulty.

But there was a problem: The experiment found no effect of game violence and no effect of game difficulty.

“So what did we do?” Hilgard writes. “We needed some kind of effect to publish, so we reported an exploratory analysis, finding a moderated-mediation model that sounded plausible enough.”

They found that if they reran the numbers accounting for each player's experience level with video games, they could eke out a statistically significant result.