Over the course of this semester, I’ve been involved in some important institutional research/assessment in which I co-facilitated some focus groups and co-designed some surveys.
Now, I know there are more rigorous ways of designing and validating surveys and such, but that's for the postpositivists, and I'm not one of them. Any survey is heavily influenced by the values of those who pose the questions, and any survey question based on perception rather than fact is loaded with assumptions from both the writer and the reader. So I almost never feel like validation is needed or valuable for the kinds of questions I care to ask, which are usually questions interested in exploring the respondent's socially constructed understanding of the thing, rather than in prediction or control.
But beyond all of those things, I feel like even with my interpretivist/constructivist mindset, there's another really important aspect: answering a survey one day is pretty much different from answering it another day. Answering it in the morning may differ from the evening. Answering it after a particular conversation on that topic is different from answering it out of the blue. Answering my OWN survey is hilarious 🙂
And so what bothers me about surveys is that even though they offer us (hopefully) useful data about a large population, in “digestible” ways, they’re always always always not a representation of objective reality (and I don’t believe a static objective reality even exists for the kinds of research questions I have), but the problem is that we report on them as if they are, you know? And then those charts and numbers become our guide for important decision-making.
I hate anonymous surveys because I can’t ask someone to elaborate or explain something.
Three ways surveys are disempowering:
- The respondent doesn’t normally have an opportunity to frame a question the way they like, or to ask the question they actually want to answer about the topic. I usually add an open-ended comment box at the end for further comments, but by that time people are often exhausted.
- The respondent has no say in how we end up interpreting the data they gave us for free, or whom we group their responses with.
- The respondent is categorized in particular ways through the gaze of the researcher, when they may wish to highlight different dimensions of themselves. For example, if we’re surveying faculty at our institution, do we need to also check whether they’re alumni or parents at the same time? How might that affect their answers? How might that frustrate them in their responses?
None of this is a surprise or rocket science. I’m unsure whether scalability and speed are an excuse for doing this kind of thing. I guess I’m frustrating myself by writing this up, but I needed to get it off my chest. I’d much rather be given a semester to do ethnographic research, with a big team of people writing weekly research journals, than run a survey or two or five.
But that is just me.