Published: 01-04-2026 17:00 | Updated: 01-04-2026 17:00

Half of social science results cannot be replicated


A major international collaboration on scientific reliability has been completed and is now presented in three articles in Nature by researchers from institutions including Karolinska Institutet. Around half of previously published research results in the social and behavioural sciences could not be replicated in new experiments.

The research programme, known as SCORE, involved 865 researchers who analysed nearly 3,900 scientific articles published between 2009 and 2018 in 62 journals in the fields of criminology, economics, educational science, health sciences, leadership, marketing, organisational behaviour, psychology, political science, public administration and sociology.

In three studies published in Nature, different methods were used to investigate whether the research results are reliable. The questions addressed were whether the results can be reproduced, whether they are robust, and whether they can be replicated.

Replication involves testing the same research question, but with new data. In the replication study, the researchers analysed 164 previously published results in the social and behavioural sciences. Of these, just under half, 49 per cent, could be replicated with a similar result to that of the original study.

Open data is key

In the reproducibility study, the same analysis was carried out on the same data. Reproducibility was hampered by the fact that data was often unavailable; just under a quarter of the articles studied had shared their data openly. Of the 143 articles analysed, 74 per cent could be reproduced to an approximate degree, and 54 per cent to an exact degree. When the original data and code were shared, these figures rose to 91 and 77 per cent, respectively.


“This shows that transparency is key to achieving credible research results,” says Gustav Nilsonne, associate professor of neuroscience at Karolinska Institutet, who co-led the robustness study and is a co-author of all three papers. “Sharing research data enables outsiders to assess which results are reliable.”

In the robustness study, alternative analyses were tested on the same data in 100 different articles. For each article, at least five researchers analysed the same hypothesis using the same data, but with the analysis method they deemed to be the best.

Same conclusion in most cases

Only a third of the new analyses yielded results very close to those reported in the original study, although three out of four reached the same overall conclusion as the original article. In about a quarter of the cases, no clear effects were found at all, and in a small number of cases (around two per cent) the results pointed in the opposite direction.

“This is the world’s largest research project to date investigating the reliability of reported scientific results, and an example of how large-scale collaborations can address questions that no single research group could answer alone,” concludes Gustav Nilsonne. “I hope we will see systematic replication attempts in more fields of research in the future.”

The collaboration was led by the Center for Open Science and researchers at Pennsylvania State University, TwoSix Technologies, the University of Southern California and Eötvös Loránd University. The programme was funded by the US Defense Advanced Research Projects Agency (DARPA).

Publications

“Investigating the reproducibility of the social and behavioural sciences”, Olivia Miske et al., Nature, online 1 April 2026, doi: 10.1038/s41586-026-10203-5.

“Investigating the analytical robustness of the social and behavioural sciences”, Balazs Aczel et al., Nature, online 1 April 2026, doi: 10.1038/s41586-025-09844-9.

“Investigating the replicability of the social and behavioural sciences”, Andrew Tyner et al., Nature, online 1 April 2026, doi: 10.1038/s41586-025-10078-y.