I’ve had a run of these manuscripts to review recently. A crafty sleight of hand.
You run an experiment with a couple of manipulations. You analyse your results and find that the interaction isn’t significant, but at least one of the main effects is significant.
All that time spent on designing, running, analysing and writing up wasted.
Firstly, you might point out that null findings are important too. I agree. Unfortunately, unless you have an airtight methodology, or you are working in a very controversial area, no one is going to publish it. Even with one or other of those two things, it’s still a hard sell. I might not like it, but that is the current state of psychology, although there has been a bit of movement recently.
However, all is not lost! With a bit of crafty editing, you can rejig your hypotheses to make out like the interaction wasn’t what you were interested in in the first place! Don’t report it! Concentrate on that one lonely main effect!
Unfortunately, there are people like me about.
If an interaction wasn’t what you were interested in then you need to justify why you looked at the same two variables in the same experiment. Not reporting it and ignoring it just isn’t good enough.
Basically, if your interaction isn’t significant, you need to back away from the word processor, and give some thought as to what is actually going on. Why is there no interaction? Is it simply a power issue? Perhaps it is an important finding that there is no interaction? What the heck is going on?
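The power question is worth taking seriously: an interaction contrast can be far harder to detect than a main effect in the same data. As a rough illustration (this is my sketch, not anything from the manuscripts in question, and the cell means are made up for the example), here is a small Monte Carlo simulation of a 2×2 between-subjects design with a large main effect and a small interaction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def contrast_power(contrast, cell_means, n=20, sims=2000, alpha=0.05, sd=1.0):
    """Monte Carlo power for a single-df contrast in a 2x2 between-subjects design."""
    contrast = np.asarray(contrast, dtype=float)
    hits = 0
    for _ in range(sims):
        # simulate n observations per cell, normal errors
        cells = [rng.normal(m, sd, n) for m in cell_means]
        means = np.array([c.mean() for c in cells])
        sp2 = np.mean([c.var(ddof=1) for c in cells])        # pooled variance
        se = np.sqrt(sp2 * np.sum(contrast ** 2) / n)        # SE of the contrast
        t = contrast @ means / se
        df = 4 * (n - 1)
        hits += 2 * stats.t.sf(abs(t), df) < alpha           # two-sided test
    return hits / sims

# Cell means for A1B1, A1B2, A2B1, A2B2: a big main effect of A
# plus a small interaction (hypothetical numbers)
mu = [0.0, 0.0, 0.4, 0.6]
p_main = contrast_power([-0.5, -0.5, 0.5, 0.5], mu)  # main effect of A
p_int  = contrast_power([1, -1, -1, 1], mu)          # interaction contrast
```

With 20 participants per cell, the main effect is detected most of the time while the interaction almost never reaches significance, even though both are really there. A non-significant interaction in a study sized for main effects tells you very little on its own.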
So, if your interaction isn’t significant, don’t rush to publish. You have half a story. More work is needed. There is a chance that you *might* be able to slip it past the reviewers and editor, but if they catch you, there is a big bitch slap down coming your way.
And it is never nice to be told you’re shit at stats.
We’re psychologists… we know this already.