Interpreting null findings / effects

In the last year or so I’ve been teaching advanced social psychology and giving workshops on open science, discussing some of the recent developments in psychological science and my understanding of what some refer to as the “replication/reproducibility crisis”. One of the topics we discussed was the importance of sharing and publishing everything, regardless of the outcome and whether findings were “significant” or not (in the Null Hypothesis Significance Testing sense).

Following that, some people asked for my advice about what to do when findings are not significant: they were unsure how to interpret null findings and how to communicate them to the academic community in a journal submission. I can’t say that I’m an expert, but I’ve been following the academic discussion on this issue for a while now, so I’ll summarize my limited understanding below.

I would generally say that we need to be cautious when interpreting p-values, especially when there are issues of statistical power (a smaller sample than required to confidently detect an effect). Psychological science has generally been moving toward reporting effect sizes with confidence intervals, which provide more information than a dichotomous significant/non-significant verdict and “dancing” p-values that are very sensitive to sample size.
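As a rough illustration of the effect-size-plus-interval idea (my own sketch, not from any particular package — the helper name `cohens_d_with_ci` and the simulated groups are made up for the example), here is Cohen’s d for two independent samples with an approximate 95% confidence interval based on the usual normal-approximation standard error:

```python
import math
import random

def cohens_d_with_ci(x, y, z=1.96):
    """Cohen's d for two independent samples, with an approximate
    95% confidence interval (normal approximation to the SE of d)."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    # pooled standard deviation
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # approximate standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Simulated data: a small true difference between groups
random.seed(1)
group_a = [random.gauss(0.3, 1.0) for _ in range(40)]
group_b = [random.gauss(0.0, 1.0) for _ in range(40)]
d, (lo, hi) = cohens_d_with_ci(group_a, group_b)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The width of the interval communicates how precisely the effect was estimated, which a lone p-value hides: a “null” result with a narrow CI around zero tells a very different story than one with a wide CI spanning large effects in both directions.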

Readings about interpreting null effects:

If you’re interested in quantifying the evidence in support of the null, then Bayesian statistics (e.g., Bayes factors) seem like the way to go:
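To make the Bayes-factor idea concrete, here is a minimal sketch (my own, not from the readings above) using the BIC-based approximation to the Bayes factor for a one-sample t-test (Wagenmakers, 2007); the function name `bf01_from_t` is hypothetical, and the cutoff of 3 is just a common rule of thumb:

```python
import math

def bf01_from_t(t, n):
    """BIC-based approximation of the Bayes factor BF01 (evidence for
    the null over the alternative) for a one-sample t-test with n
    observations: BF01 = sqrt(n) * (1 + t^2/(n-1))^(-n/2)."""
    df = n - 1
    return math.sqrt(n) * (1 + t ** 2 / df) ** (-n / 2)

# A "non-significant" t = 0.8 with n = 50:
bf = bf01_from_t(0.8, 50)
print(f"BF01 = {bf:.2f}")  # BF01 > 3 is often read as moderate support for the null
```

Unlike a non-significant p-value, which only fails to reject the null, a BF01 well above 1 positively quantifies how much more consistent the data are with the null than with the alternative; a BF01 near 1 means the data are simply uninformative.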
