You know something is bad when even your taxi driver knows about it…
Apparently, psychology has a problem with scientific fraud at the moment, with Jens Forster at the University of Amsterdam the latest to have the finger pointed at him.
This doesn’t look good for social psychology, or for Holland.
But this isn’t just a problem for social psychology; it is a potential problem for psychology as a whole, and for science in general.
Forster claims that he is the victim of a witch hunt. I don’t know whether that is true, but it can sometimes be the case: I’ve known two people accused of fraud who were later cleared. One of them has literally been stalked by his accuser for the best part of two decades, with trivial complaints regularly registered and investigated, none of which has been upheld. This is a particular problem if you publish in a fraught field with factions: if rivals can’t poke holes in your theory, they can come gunning for you personally and undermine your reputation instead.
The Forster case also makes me wonder whether some of these accusations stem from the “file drawer problem”. You run ten experiments and only two produce significant differences: which ones do you think will end up published? You run thirty, ten work out, but a smaller number produce really clear-cut, neat data: which ones do you choose to publish?
While this is questionable practice, and researchers deserve a slap on the wrist for it, it is rampant, and it is encouraged by journals refusing to send out for review articles that do not report significant findings. It is this latter issue that encourages only the “best-looking” data to get published, and rather parochial, incomplete views of a field to take hold.
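The selective-publication effect described above is easy to simulate. Here is a minimal sketch (my own illustration, not from the original post, with made-up numbers: a true effect size of 0.3 and twenty participants per group, using a rough normal approximation to the t-test). Only the studies that happen to clear p < .05 get “published”, and their average effect size comes out well above the truth.

```python
import math
import random

random.seed(42)

def run_experiment(n=20, true_effect=0.3):
    """One two-group study: returns (observed effect size d, two-sided p).

    Uses a normal approximation to the t-test; good enough for a sketch.
    """
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    mean_c = sum(control) / n
    mean_t = sum(treatment) / n
    var_c = sum((x - mean_c) ** 2 for x in control) / (n - 1)
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n - 1)
    pooled_sd = math.sqrt((var_c + var_t) / 2)
    d = (mean_t - mean_c) / pooled_sd        # observed standardized effect
    se = math.sqrt(2 / n)                     # approximate standard error of d
    z = d / se
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p, normal approximation
    return d, p

results = [run_experiment() for _ in range(2000)]
published = [d for d, p in results if p < 0.05]   # the "file drawer" filter

all_mean = sum(d for d, _ in results) / len(results)
pub_mean = sum(published) / len(published)
print(f"mean effect, all 2000 studies:  {all_mean:.2f}")   # near the true 0.30
print(f"mean effect, 'published' only:  {pub_mean:.2f}")   # well above the truth
```

The point of the sketch: nobody fabricated anything, yet a literature built only from the significant studies systematically overstates the effect.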
So, I’m placing the blame at the door of journals. Although they obviously don’t have an explicit policy of publishing only good-looking data, the preference is pretty plain. I have myself had reviewers comment that they didn’t want my article published because the data is “messy”. Well, yes, data regularly is messy, but that is because I haven’t massaged or manipulated it. Data is what it is, and when it isn’t, it’s been fudged in some way. Likewise, reviewers have declined to recommend publication because the results were non-significant, or because the data didn’t fully support a theory’s assumptions. If journals weren’t so parochial in their approach, many of these issues would not be occurring now. We are reaping the whirlwind of decades of poor journal policy.
Perhaps I am naive or overly optimistic, but I’m not inclined to think fraud is rampant in science. Poor research practices certainly are, but out-and-out fraud? I certainly hope not (although reportedly only 14% of medical research findings can be replicated, so if you want to point the finger anywhere, look there first!).
Some universities are trying to provide space on campus to store data for the foreseeable future, which seems like a good idea. Given the pressure on campus space, and the oodles of data that tabletop psychology experiments with lots of participants produce, there needs to be some way of storing it all. Most academic offices can only hold so much before it either goes in the trash or makes its way to the academic’s home for storage in the attic, which shouldn’t have to happen.
Meanwhile, I will be renaming all my Excel files with something more meaningful than Book1…