Over the course of this semester, I've been involved in some important institutional research and assessment, co-facilitating focus groups and co-designing surveys.
Now, I know there are more rigorous ways of designing and validating surveys, but that's for the postpositivists, and I'm not one of them. Any survey is heavily influenced by the values of those who write the questions, and any survey question based on perception rather than fact is loaded with assumptions from both writer and reader. So I almost never feel that validation is needed or valuable for the kinds of questions I care to ask, which are usually questions exploring the respondent's socially constructed understanding of a thing, rather than aiming at prediction or control.
But beyond all of that, I feel like even with my interpretivist/constructivist mindset, there's the really important point that answering a survey one day is quite different from answering it another day. Answering it in the morning may differ from answering it in the evening. Answering it after a particular conversation on the topic is different from answering it out of the blue. Answering my OWN survey is hilarious.
And so what bothers me about surveys is that even though they offer us (hopefully) useful data about a large population in “digestible” ways, they are always, always, always not a representation of objective reality (which I know; plus, I don't believe a static objective reality exists for the kinds of research questions I have). The problem is that we report on them as if they are, you know? And then those charts and numbers become our guide for important decision-making.
I hate anonymous surveys because I can’t ask someone to elaborate or explain something.
A few ways surveys are disempowering:
- The respondent doesn't normally have an opportunity to frame the questions the way they would like, or to ask the question they actually want to answer about the topic. I usually add an open-ended comment box at the end for further thoughts, but by that time people are often exhausted.
- The respondent has no say in how we end up interpreting the data they gave us for free, or whose responses we group theirs with.
- The respondent is categorized in particular ways through the gaze of the researcher, when they may wish to highlight different dimensions of themselves. For example, if we're surveying faculty at our institution, do we also need to check whether they're alumni or parents at the same time? How might that affect their answers? How might that frustrate them in their responses?
None of this is a surprise or rocket science. I'm unsure whether scalability and speed are an excuse for doing this kind of thing. I guess I'm frustrating myself by writing this up, but I needed to get it off my chest. I'd much rather be given a semester to do ethnographic research, or get a big team of people to write weekly research journals, than run a survey or two or five.
But that is just me.
Ha. Can’t wait to read this.
Glad that Maha links to my work at the moment. Looking forward to reading you in more detail. Am on a plane off to Corsica; wanted to get this done before airplane-mode lock-down.
http://tachesdesens.blogspot.fr/2017/08/survey-and-surveillance.html
Tribute to your post, Simon: https://lumen5.com/v/txtf/ (my first ever)
Thanks for reflecting on this here. I thought your article in The Chronicle of Higher Education with Autumm Caines was really important for the way it exposed bias in a survey about ed tech https://www.chronicle.com/blogs/profhacker/seeking-integrity-in-edtech-research-again/64497.
In this post I think you’ve highlighted the way survey respondents might respond differently based on the time of day, or their moods. Humans and opinions are slippery! Another key point you make is the way respondents don’t have an opportunity to interpret the data they share. This is particularly problematic because of the power dynamics that exist between respondents and researchers. BTW I put a note in the margins here with an example of an interesting survey approach championed by Dave Snowden that might interest you.
Thank you Joe! Will check out the margins. I do know the tool Snowden’s organizations use, which allows the participant to interpret their data, right? I forget its name but I know Kristen Eshleman talks about it, and I’ve used it as a participant before…thanks for the reminder! I might be at a stage in my institution to suggest we experiment with it!
The tool is Sensemaker. Snowden makes a compelling argument about how it allows participants to self-index their stories, which is important when, for example, the Australian government wants to study Aboriginal groups using ethnographic methods. White researchers invariably attach different meaning to the stories they collect than the Aborigines themselves would. Here’s a demo site for Sensemaker: http://kafka.sensemaker-suite.com/Collector/collector.gsp?projectID=cultureSCAN&language=en#Collector
Yesss Sensemaker. I kept trying to remember the name. Thanks for posting as a comment so more ppl can see. I do like that tool. Kristen at Davidson uses it for students to index their own stories
Am still mulling it over…
Maha, I agree with everything you’ve written, but what’s the alternative if we want feedback? I hope it’s to do the best we can to create better surveys, while remaining cognizant of the very real concerns you express here.
Organize time to speak with people?