I recently caught up with two colleagues and friends of mine from McGill University, who are about to revisit and repeat some analysis we did together on course evaluations at the university level.
At the time we did the original analysis in 2009-10, I was the data provider. I helped orient them to their data set, as we defined it together, and I gave them some help getting their analysis off the ground. Then they took that and ran with some really interesting research questions.
They turned it into a published paper, and it (gleefully) upended some old assumptions about which students respond to end-of-semester course evaluation surveys, and what you can learn from that. It even caused some waves on campus when the findings were shared locally. Great for them, all around. 🙂
I’m not an expert in the meta-issues of doing research on an educational institution (as opposed to doing research at an educational institution). I would have assumed, and maybe they did too, back in 2010, that this kind of analysis could easily jump the barrier between behind-the-scenes Higher Ed Administration and Educational Research.
But it looks like that transition is not as easy to pull off as it once was.
As LW and LdG put it, the era of ad hoc research is over. Nowadays, with Big Data, the assumption in peer review is not just that your results will be reproducible elsewhere, but that other researchers will be able to review and re-analyze your very own data. That certainly changes how you go into a project: you have to begin with the end in mind, planning for anonymization and data sharing from the start.