The open-science movement has promoted the sharing of scientific protocols, statistical-analysis programs and data files. These efforts have undoubtedly made it easier to see how researchers come to their conclusions. But methods and findings are just the tip of the iceberg. Research often involves a long journey, during which scientists reassess their expectations, pivot and adapt to unexpected challenges and constraints. All too often, this is hidden from view.
I vividly remember the first experiment I conducted for my PhD in economics, investigating the conditions under which trust forms between strangers. I had built a solid theoretical framework, designed the experiment — in which students played a trust game — carefully, and optimistically named my database ‘AwesomeData’. But when I ran my first sessions, the results made no sense. My participants weren’t behaving as theory — or even common sense — would suggest, because my set-up made the task too confusing.
The fall was hard. Despite all my preparation, no one had told me what the research process really looks like. The project went through many iterations, a completely redesigned experiment and six rejections from journals before a paper was published. These setbacks remain invisible to anyone reading the final manuscript.
For science to be truly open, we need to pay more attention to these twists and turns, which happen in all research. Not disclosing this information causes three key problems.
First, it prevents readers of research from properly understanding a project’s robustness and generalizability. In some fields, small pilot tests are used to fine-tune an experiment’s design — such as showing that a programme works only if participants receive incentives, or if a message is phrased in a particular way. Details of these pilots (and so the lessons learnt) rarely appear in print. Policymakers might then assume that the intervention can be scaled up regardless of incentives or wording.
Second, a lack of disclosure stops other researchers from learning how to work more efficiently and invest their resources wisely. This can exacerbate inequities. Researchers with sparse social networks can struggle to access the informal knowledge held by veterans in their field, whereas well-connected researchers — who often come from wealthier regions or groups — can tap into it easily.
Finally, sharing the trial-and-error aspects of research matters for mental health. Humans have a hard time learning from missing evidence — the ‘what you see is all there is’ phenomenon described by psychologist Daniel Kahneman. Hiding failures can lead researchers to overestimate how much others succeed, fuelling impostor syndrome.