Author: Alger, Bradley E
Publisher: Society for Neuroscience
Abstract: Science needs to understand the strength of its findings. This essay considers the evaluation of studies that test scientific (not statistical) hypotheses. A scientific hypothesis is a putative explanation for an observation or phenomenon; it makes (or "entails") testable predictions that must be true if the hypothesis is true and that lead to its rejection if they are false. The question is, "How should we judge the strength of a hypothesis that passes a series of experimental tests?" This question is especially relevant in view of the "reproducibility crisis" that is the cause of great unease. Reproducibility is said to be a dire problem because major neuroscience conclusions supposedly rest entirely on the outcomes of single, p-valued statistical tests. To investigate this concern, I propose to (1) ask whether neuroscience typically does base major conclusions on single tests; (2) discuss the advantages of testing multiple predictions to evaluate a hypothesis; and (3) review ways in which multiple outcomes can be combined to assess the overall strength of a project that tests multiple predictions of one hypothesis. I argue that scientific hypothesis testing in general, and combining the results of several experiments in particular, may justify placing greater confidence in multiple-testing procedures than in other ways of conducting science.
Rights/Terms: Copyright © 2020 Alger et al.
Identifier to cite or link to this item: http://hdl.handle.net/10713/13481