Dr Peter Martin is a Lecturer in Applied Statistics in the Department of Applied Health Research at University College London. One of the joys of being a statistician is that he gets to collaborate with fellow researchers from many disciplines, including psychology, sociology, medicine, health care, and epidemiology. He has a particular interest in mental health and psychotherapy research.
Abstract: Many published research findings may be false. Attempts to replicate results from high-profile scientific studies too often contradict the original findings. One source of the problem is the way that statistical hypothesis tests are commonly used in contemporary research. In particular, many scientists misunderstand p-values. The “p < 0.05” threshold was originally intended to protect researchers from over-interpreting random variation. But “statistical significance” is now often wrongly perceived as indicating the scientific robustness of a finding. When a journal editor rejects a manuscript because the p-value is larger than 0.05, publication bias is the logical consequence.
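The link between the "p < 0.05" threshold and publication bias can be illustrated with a small simulation. The sketch below (a minimal example, assuming Python with numpy and scipy; the study numbers and sample sizes are arbitrary choices, not taken from the article) simulates many two-group studies in which there is no true effect. Roughly 5% of them still come out "statistically significant" by chance, and if only those were published, the average published effect would look much larger than the true effect of zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 10_000   # hypothetical number of independent studies (illustrative)
n_per_group = 30     # sample size per group in each study (illustrative)
true_effect = 0.0    # no real difference between treatment and control

p_values = np.empty(n_studies)
effect_estimates = np.empty(n_studies)

for i in range(n_studies):
    # Both groups are drawn from the same distribution: the null hypothesis is true.
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    p_values[i] = p
    effect_estimates[i] = treatment.mean() - control.mean()

significant = p_values < 0.05
print(f"Proportion 'significant' despite no true effect: {significant.mean():.3f}")
print(f"Mean estimated effect, all studies:              {effect_estimates.mean():+.3f}")
print(f"Mean |effect| among 'significant' studies only:  "
      f"{np.abs(effect_estimates[significant]).mean():.3f}")
```

The first figure printed hovers around 0.05, which is exactly what the threshold was designed to control: the rate of false alarms under pure random variation. The last figure shows why selective publication is harmful: conditioning on "significance" filters in the studies whose random noise happened to be largest, so the published record overstates effects even when none exist.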