Estimated reading time: 8 minutes, 26 seconds

Continuing the @vconnecting convo w @gsiemens on edu research approaches + use of data

So this is a continuation of a conversation we started on Vconnecting with George Siemens yesterday from #awear16. I am focusing on the importance of quantitative and “rigorous” educational research. I had audio problems, and having just listened back, I realize he wasn’t at all trashing qualitative/narrative/interpretive research, but rather he was saying:

  1. He believes some educators shy away from quantitative research because they don’t understand stats. I say “where is the evidence, where are the STATS for that?” but I also affirm that I was always a whiz at maths, took stats courses as a comp sci undergrad, and I still don’t believe in the extensive use of stats to explain, describe or predict human behavior, for epistemological, ontological, ethical, practical and pedagogical reasons (unsure if I can unpack all those today!)
  2. He says that we who promote the use of tech in learning need to be aware that by doing so we encourage more data collection about students. Good point there. And maybe the point he means is that we then need to be the ones (educators) who decide how that data can/should be used in useful and ethical ways rather than allow others who DON’T understand education to bend the data to their purposes. That’s a really good point 
  3. At another point in the conversation, we mentioned how a lot of educational research is irrelevant. Between George, Rebecca and me, we mentioned policy makers and others who do edu research but know nothing about edu, and edu researchers who aren’t teachers and whose work isn’t relevant to teachers’ classroom experience.
  4. At another point we talked about how we won the battle for open content but lost the war for open, because of the blackboxing of the algorithms used for analyzing things in LMSs and more.

Whew. Ok. I need some time to unpack that. But you should watch this edutaining “episode” of Vconnecting that is, imho, the John Oliver HBO version of a Vconnecting session, with the F word, adult diapers, laughter and interruption/debate. Really a fun session to watch!

I want to just record these thoughts:

  • Every minute we decide to spend teaching quantitative reasoning is a minute we could have been teaching moral reasoning. Both are important. But I just saw that my institution’s Scientific Thinking course removed almost all its bioethics content (arguably some of the most important ethical questions of our time) in order to focus on quantitative reasoning (for some reason). I would argue there is a way to teach both in tandem. I would argue that ethical manipulation of data is necessary, and that questions about which data we collect, how we collect it, and how we interrogate and represent it are among the essential ethical questions of our time. But we also want to ask ethical questions about the thing ITSELF and not just the data we have on it and how we use it. Otherwise we risk skipping over the immeasurable things. The unquantifiable but important things.
  • Yes we live in a neoliberal age. Yes policy is driven by measurement and prediction and a positivist worldview, a la Habermas’s technical knowledge-constitutive interest. Yes, policymakers are looking at data. I am unsure why we continue, as educators, to roll over and give them data when we know it’s pointless and useless to the endeavor of education to do so. Why the heck does anyone care if x% of blended courses are perceived to be better quality than fully f2f or fully online ones? The number of variables and assumptions behind those kinds of statements is horrendous and makes these stats entirely unhelpful. Each blended course defines blending differently and implements blending differently, in a different context with different students, and there is so little in common among all such courses studied that the stats are meaningless if not outright misleading. I could list many more such things.
  • I know there is a post by Dave Cormier and Lawrie Phipps that I should read soon, about the importance of narrative in our age. I don’t need to read it in the sense that I am already on that side of things. I already embrace that worldview, and as I finished my PhD I dug deep inside myself to unpack my personal epistemological and ontological and ethical assumptions, to understand why I believe what I believe and do research and pedagogy the way I do. But maybe I will find something interesting in it that I wasn’t expecting.

Then this morning I wake up to this tweet. At first, trigger-happy, I retweet it (Mike Caulfield will kill me), then I stop and unretweet it coz I realize I don’t agree with it.

[Embedded tweet: a graphic claiming kids ask 40,000 questions between ages 2 and 5, while teens ask almost none]

Let’s unpack this, shall we? First of all, how was this data collected? And does anyone realize kids often ask the SAME questions every day? 🙂 But the “almost none” is really questionable.
Let me unpack the 40,000 questions from age 2-5. That’s 4 years, so 10,000 questions a year. A year is 365 days, so divide 10,000 by 365 and you get about 27 questions a day. Of course kids sleep about 10 hours, so we are talking about something like 27 questions over 14 waking hours. Just 2 questions an hour. Sounds reasonable. And it would probably be entirely unreasonable to expect teens to ask that many questions of ADULTS, at that rate of 2 questions an hour. But I am guessing they ask them of each other. They ask Google. They ask them in their heads. They also ask more trivial questions like “how much does this cost?” and “are you feeling OK today?” and “do I look good in this dress?” and “can I take you to dinner tomorrow?”. Those are questions, right? You have to consider them legitimate. Because among children’s questions are really insightful ones like “why does this bird have a colorful neck?” and “why won’t that magnet stick to the wooden table?” and “why do I have to go to bed now?” and “why does my sunny-side-up egg have jagged edges today when it didn’t yesterday?”. You know?
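Here is that back-of-envelope arithmetic as a quick script, just to show my work (the 40,000 figure is from the tweet; reading “age 2-5” as 4 full years and assuming ~10 hours of sleep are my own assumptions):

```python
# Back-of-envelope check of the "40,000 questions between ages 2 and 5" claim.
# Assumptions (mine, not the tweet's): ages 2-5 read as 4 full years,
# and roughly 10 hours of sleep, i.e. 14 waking hours a day.
total_questions = 40_000
years = 4
days = years * 365
waking_hours_per_day = 14

per_day = total_questions / days                  # ~27 questions a day
per_waking_hour = per_day / waking_hours_per_day  # ~2 questions an hour

print(f"{per_day:.0f} questions/day, {per_waking_hour:.1f} questions/waking hour")
```

Run it and you get roughly 27 a day and 2 an hour, which is the rate I am comparing teens against.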

So really, I think narrative would have achieved the purpose just fine. Anyone who has observed a child under age 5 knows they ask lots of questions… If someone listens and responds in encouraging ways they ask more; if not, they may or may not lose hope (research is probably happening there, and I suspect income level and number of children per household matter, but the unquantifiable presence of a loving parent or guardian is likely an important factor. Just off the top of my head).

But let me go back to a point to illustrate why I have problems with privileging quantitative data in a positivist manner in education. Notice I said privileging and positivist. Having quantitative data is useful. Having it drive the questions and priorities is not. Imho.

While researching factors that influence critical thinking development for AUC students, I found lots of articles making these correlations (and I know stats can be much stronger than correlations, but that’s what had been done):

  1. Performance on a particular critical thinking standardized test correlated with having intensive writing courses at a college/univ
  2. Performance on a critical thinking test tied to participation in extracurricular activities
  3. Performance on a critical thinking test tied to having taken a course that teaches critical thinking 

The last one is easy to attack and has been. Teaching to the test.

Done 

Moving on. Standardized testing of critical thinking? First of all, there are so many of those tests. Second, real life doesn’t ask you to think critically in discrete, fragmented, decontextualized ways that come in multiple choice. Not usually. Just the act of trying to measure critical thinking has changed our understanding of what it is or what it can be, when critical thinking is so much more than what most of these tests measure. There are a few narrative/writing-based ones. But remember that even a non-standardized one, like Perry’s model of intellectual development, was based on interviewing a tiny number of male Harvard students – and then the rest of us just followed the model and built upon it (kudos to Baxter Magolda, who hacked it by merging it with Women’s Ways of Knowing). But yeah. Not only are standardized tests problematic here. Even non-standardized measures are! Value-laden.

Remember that quantum mechanics tells us you can’t precisely measure both a particle’s position and its velocity (momentum, strictly speaking) at the same time, and that the act of measuring a thing alters it. You know what I am getting at, right? That to make something measurable you alter it to the point where it may no longer be recognizable.
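(For the record, the relation I am gesturing at is Heisenberg’s uncertainty principle, which puts a hard lower bound on the product of the uncertainties in position and momentum:)

```latex
% Heisenberg's uncertainty relation: the uncertainties in position (x)
% and momentum (p) cannot both be made arbitrarily small.
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```

The analogy to education is loose, of course; the point is just that measurement is never a neutral act.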

And then there’s generalizing about extracurricular activities and writing courses. Even if you divided extracurricular activities into categories like sports, simulations, community service, debate club… even then, each community service activity is different. In my interviews with students (and from my own experience as an active person in college) I learned how one person’s role and commitment in an activity influences how much they learn. I also learned that students don’t have equal access to these experiences, and that some didn’t get much support reflecting on their learning in those activities. And of course some activities promoted certain aspects of critical thinking but not others.
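To make that worry concrete, here is a toy simulation (entirely made up by me, not from any of the studies I read): a hidden “support” factor (think family resources, encouragement, access) drives both extracurricular participation and test performance, and the two end up correlated even though participation does nothing on its own in this model.

```python
import random

random.seed(0)
n = 1000

# Hidden confound: each student's level of support/access (hypothetical).
support = [random.gauss(0, 1) for _ in range(n)]

# Both observed variables are driven by support plus noise;
# participation has NO direct effect on scores in this model.
participation = [s + random.gauss(0, 1) for s in support]
test_score = [s + random.gauss(0, 1) for s in support]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"participation vs test score: r = {corr(participation, test_score):.2f}")
# Prints r of roughly 0.5: a "finding" produced entirely by the confound.
```

A study that only reports the correlation would happily conclude that extracurricular activities build critical thinking.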

When we define something in order to measure it we vastly reduce our potential understanding of its complexity. And we make numbers that summarize stories that are so nuanced they cannot be aggregated ethically.

Gosh I have so much more to say, but I need to stop coz I just arrived at work.

Comments and pushback welcome. These aren’t my complete thoughts and I will write more soon.

Notes 

Check out the #SoNar hashtag and their presentation at #OpenEd16

Check out Paul Prinsloo, who has written a couple of times on the use of data in ethical ways. Here is the latest one. Also see the one before, which is beautiful.

Lawrie and Dave on narrative – I will read this eventually.

3 thoughts on “Continuing the @vconnecting convo w @gsiemens on edu research approaches + use of data”

  1. I think there were a lot of terms thrown around in the conversation that really need to be unpacked as far as what was meant. For me, I’m not sure if I can really agree that we can label an entire body of research “mostly irrelevant” or “mostly relevant” for that matter. I think about it this way: 99.99% of all medical research is completely irrelevant to me… until… I get sick with something new, or injure something… and then a slice of that research that was previously irrelevant to me is suddenly very, very relevant. Or if I need a surgery, suddenly a lot of research in 10 different fields of study within medical research suddenly becomes very, very relevant to me.

    Research is a very contextual activity, and even then each individual study is only going to be relevant to very specific contexts. Once you get to larger ideas like “self-regulation,” you might say “well, the research on that is very relevant,” but we are saying that more out of a popularity contest based on the amount of research that has gone into that topic versus other areas that are also very important but just not as researched. So how close does relevancy get to popularity before we say “well, it should be about what really matters and not what gets the most attention”?

    But then the related problem is that during the session we discuss that most educational research is crap, and then five minutes later touch on the idea of self-regulated learning as if it were a settled deal that it is important to education. But how do we know that it is? Well, a huge chunk of that “crap” research says it is. Most of our assumptions and side comments, and even our main points, are steeped in the constructs that research has already validated. Saying that most research is crap usually becomes contradictory within a few minutes – until we really unpack what is meant by that. Which is obviously hard to do in a quick VC session.

    So, for me, I think we really need to prove that we can accurately use such broad-stroke classifications of educational research in the first place – relevant, irrelevant, crap, good, etc. – for more than individual studies as applied in individual contexts. Can we really say “most educational research is irrelevant” or “most educational research is relevant” or “most educational research is crap” or “most educational research is good”? Are those assessments fair, or too broad because of context?

    1. Great point! Which is why my post referred to particular research questions about particular things and the studies done around them. Does that help contextualize it? About when particular research done for a particular purpose can be more or less useful?

  2. “At first, trigger happy, I retweet it (Mike Caulfield will kill me) then I stop and unretweet it coz I realize I don’t agree with it” — I like how I’m the Ghost of Bad Retweets Past. 😉

    I think (as is clear to you as well) that a lot of this is not just quantitative positivism, but *bad* quantitative positivism. It fails even on its own terms. The thing is, a good, contextually aware positivist approach is really useful for some things. But man, there are so many crap ones. There are some decent methods to tease out confounds in quantitative research and get at something that resembles truth. But for whatever reason, when we bring social science into the public arena we just always seem to drop those methods.

    The thing is, I think the problem is demand-driven. There’s a decent number of people who can do and produce good quantitative work. But the problem is that in education, very often, better work is not rewarded over inferior work, so we end up with this whole array (as Matt notes) of crap studies that maybe say something that might be useful if you squint enough and overlook a dozen issues. We get crappy quantitative studies because we are crappy consumers of studies.
