In psychology, recent concerns about the reliability of published findings have led to the realization that research practices can be improved. At a time when science funding is under pressure, and more reliable data often means more data, an important question is how reliable knowledge can be generated as efficiently as possible, while taking both statistical and non-statistical aspects of the empirical cycle into account, such as the resources researchers have available and the goals they pursue.
In this project, the goal is to examine which design decisions will generate reliable empirical knowledge most efficiently. This concerns questions such as how reliable the evidence in single studies should be (e.g., how to justify alpha levels), how researchers can determine which effect sizes are interesting, or which effects are important enough to be independently replicated. Additional projects concern how to falsify predictions, and how to collect and organize meta-data from published articles to gain easier insight into the research that is performed. This project aims to provide insights, and practical recommendations, on how to improve the reliability of psychological science.
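To make the idea of justifying alpha levels concrete, one approach discussed in the methodological literature is to choose the alpha that balances Type I and Type II error rates for a planned design, rather than defaulting to 0.05. The sketch below illustrates this for an assumed one-sided two-sample z-test; the function names, the grid search, and the specific test are illustrative assumptions, not the project's prescribed method.

```python
from statistics import NormalDist

def combined_error(alpha, d, n):
    """Type I + Type II error rate for a one-sided two-sample z-test.

    d: standardized effect size; n: observations per group.
    Both the test and the equal weighting of errors are simplifying
    assumptions for illustration.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha)
    ncp = d * (n / 2) ** 0.5  # noncentrality for a two-sample comparison
    beta = NormalDist().cdf(z_crit - ncp)  # Type II error at the critical value
    return alpha + beta

def balanced_alpha(d, n):
    """Grid-search the alpha that minimizes the combined error rate."""
    grid = [a / 1000 for a in range(1, 500)]  # alpha from 0.001 to 0.499
    return min(grid, key=lambda a: combined_error(a, d, n))
```

For example, with a medium effect (d = 0.5) and 64 participants per group, the balanced alpha falls near 0.08, above the conventional 0.05, whereas larger samples push it lower. The point of such a calculation is that the "right" alpha depends on the design and the relative costs of the two error types.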