most of my online presence is on twitter these days, since that format seems to suit my current wavelength of internetting behavior better than the blog setting.
writing this paper while dealing with the random stressful events that life tends to throw at a person from time to time had me blowing off a bit of steam. and i got to thinking about this: often we scientists write papers as small stories of data- “we did A, and that was interesting, so then we did B, which led us to C, and ta-da now we have new knowledge!” but really, it’s bullshit. chances are we did B first and asked wtf that result meant, then we did A, and C was something that the lab had been sitting on for a year since it wasn’t related to anything that was actually going out for publication at the time.
regardless, we’re compelled by convention to make it out like every step of everything was thoroughly justified and went just as planned.
#overlyhonestmethods started as being about revealing the “between the lines” in our methods sections. nobody states that procedures are scheduled around things like lunch breaks or other human elements of our day. but we all know when we read “behavioral assessments were collected at 0900 and 1300” that the authors wanted to sit the fuck down and eat something in the middle of their day. we may report that “we” did some tedious video scoring, but you know it was the editorial “we” – aka, some undergrad did it. sometimes we set out to run ten replicates, and we fuck one up. or something else crazy happens in the middle, like an earthquake or a fire drill. well, now we’re reporting nine. but nowhere in the methods will you read “so we kinda blew that first attempt and had to try again” because that’s the lab lore, not the scientific literature. because in the scientific literature we’re professional, we Totally Meant To Do It That Way.
it’s been a ton of fun to see the outpouring of lab lore behind what’s in our methods sections. i’ve seen some snark and sarcasm and written my own, especially regarding statistics (if you’ve reviewed more than a few papers, you’ve run across at least ONE where the authors were clearly fishing for p<0.05, come on). someone somewhere is going to take that literally, and won’t that be fun. meanwhile i don’t think you’ll see a published mention of how the reported incubation time was defined by how long the author spent drinking a cup of afternoon tea, but i think #overlyhonestmethods gets it out there that there are real people doing the science. for the most part we’re not as stiff and sterile as our publications (the measure of our careers) make us out to be.