So this is a continuation of a conversation we started on Vconnecting w George Siemens yday from #awear16. I am focusing on the importance of quantitative and “rigorous” educational research. I had audio problems and just listened back and realized he wasn’t at all trashing qualitative/narrative/interpretive research, but rather he was saying
- He believes some educators shy away from quantitative research because they don’t understand stats. I say “where is the evidence, where are the STATS for that?” but I also affirm that I was always a whiz at maths, took stats courses as a comp sci undergrad, and I don’t believe in using stats extensively to explain, describe or predict human behavior. For epistemological, ontological, ethical, practical and pedagogical reasons (unsure if I can unpack all those today!)
- He says that we who promote the use of tech in learning need to be aware that by doing so we encourage more data collection about students. Good point there. And maybe the point he means is that we then need to be the ones (educators) who decide how that data can/should be used in useful and ethical ways rather than allow others who DON’T understand education to bend the data to their purposes. That’s a really good point
- At another point in the conversation, we mentioned how a lot of educational research is irrelevant. Between George, Rebecca and me, we mentioned policymakers and others who do edu research but know nothing about edu, and edu researchers who aren’t teachers, whose work isn’t relevant to the classroom experience of teachers
- At another point we talked about how we won the battle for open content but lost the war of open, because of the black-boxing of the algorithms used for analyzing things in the LMS and more
Whew. Ok. I need some time to unpack that. But you should watch this edutaining “episode” of Vconnecting that is, imho, the John Oliver HBO version of a Vconnecting session with F word, adult diapers, laughter and interruption/debate. Really a fun session to watch!
I want to just record these thoughts:
- Every minute we decide to spend teaching quantitative reasoning is a minute we could have been teaching moral reasoning. Both are important. But I just saw that my institution’s Scientific Thinking course removed almost all their bioethics content (arguably some of the most important ethical questions of our time) in order to focus on quantitative reasoning (for some reason). I would argue there is a way to teach both in tandem. I would argue that handling data ethically is necessary, and that asking ethical questions about which data we collect, how we collect it, and how we interrogate and represent it are all essential questions of our time. But we also want to ask ethical questions about the thing ITSELF and not just the data we have on it and how we use it. Otherwise we risk skipping over the immeasurable things. The unquantifiable but important things.
- Yes we live in a neoliberal age. Yes policy is driven by measurement and prediction and a positivist worldview, à la Habermas’s technical knowledge-constitutive interest. Yes. Policymakers are looking at data. I am unsure why we continue, as educators, to roll over and give them data when we know it’s pointless and useless to the endeavor of education to do so. What the heck does anyone care if x% of blended courses are perceived to be better quality than fully f2f or fully online? The number of variables and assumptions behind those kinds of statements is horrendous and makes these stats entirely unhelpful. Each blended course defines blending differently, implements blending differently, in a different context with different students, and there is so little in common among all such courses studied that the stats are meaningless if not outright misleading. I could list many more such things
- I know there is a post by Dave Cormier and Lawrie Phipps that I should read soon. About importance of narrative in our age. I don’t need to read it in the sense that I am already on that side of things. I already embrace that worldview and have dug deep inside myself as I finished my PhD to unpack my personal epistemological and ontological and ethical assumptions to understand why I believe what I believe and do research and pedagogy the way I do. But maybe I will find something interesting in it that I wasn’t expecting.
Then this morning I wake up to this tweet. At first, trigger-happy, I retweet it (Mike Caulfield will kill me), then I stop and unretweet it coz I realize I don’t agree with it
— Education Journal ME (@EJMEcom) November 15, 2016
Let’s unpack this shall we? First of all, how was this data collected? And does anyone realize kids often ask the SAME questions every day? 🙂 But the “almost none” is really questionable.
Let me unpack the 40,000 questions from age 2-5. That’s 4 years. So 10,000 questions a year. A year is 365 days. So divide 10,000 by 365 and you get about 27 questions a day. Of course kids sleep about 10 hours, so we are talking about something like 27 questions over 14 hours. Just 2 questions an hour. Sounds reasonable. And it would probably be entirely unreasonable to expect teens to ask that many questions of ADULTS at that rate of 2 questions an hour. But I am guessing they ask them of each other. They ask Google. They ask them in their heads. They also ask more trivial questions like “how much does this cost?” and “are you feeling OK today?” and “do I look good in this dress?” and “can I take you to dinner tomorrow?”. Those are questions, right? You have to consider them legitimate. Because among children’s questions are trivial ones alongside really insightful ones like “why does this bird have a colorful neck?” and “why won’t that magnet stick to the wooden table?” and “why do I have to go to bed now?” and “why does my sunny-side up egg have jagged edges today when it didn’t yesterday?”. You know?
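The back-of-envelope arithmetic above can be checked in a few lines (a sketch: the 40,000 figure comes from the tweet, and the ~10 hours of sleep is my own assumption):

```python
# Sanity-check the "40,000 questions between ages 2 and 5" claim
total_questions = 40_000            # figure claimed in the tweet
years = 4                           # ages 2 to 5

per_year = total_questions / years  # 10,000 questions/year
per_day = per_year / 365            # ~27 questions/day
waking_hours = 24 - 10              # assuming ~10 hours of sleep
per_hour = per_day / waking_hours   # ~2 questions/hour

print(f"{per_day:.0f} questions/day, {per_hour:.1f} questions/hour")
```

Spread over waking hours, the headline number works out to roughly two questions an hour, which is far less astonishing than 40,000 sounds.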
So really. I think narrative would have achieved the purpose just fine. Anyone who has observed a child under age 5 knows they ask lots of questions… If someone listens and responds in encouraging ways, they ask more; if not, they may or may not lose hope (research is probably happening there, and I suspect income level and number of children per household matter, but the unquantifiable presence of a loving parent or guardian is likely an important factor. Just off the top of my head).
But let me go back to a point to illustrate why I have problems with privileging quantitative data in a positivist manner in education. Notice I said privileging and positivist. Having quantitative data is useful. Having it drive the questions and priorities is not. Imho
While researching factors that influence critical thinking development for AUC students, I found lots of articles making these correlations (and I know stats can be much stronger than correlations, but that’s what had been done):
- Performance on a particular critical thinking standardized test correlated with having intensive writing courses at a college/univ
- Performance on a critical thinking test tied to participation in extracurricular activities
- Performance on a critical thinking test tied to having taken a course that teaches critical thinking
The last one is easy to attack and has been. Teaching to the test.
Moving on. Standardized testing of critical thinking? First of all, there are so many of those. Second, real life doesn’t ask you to think critically in discrete, fragmented, decontextualized ways that come in multiple choices. Not usually. Just the act of trying to measure critical thinking has changed our understanding of what it is or what it can be, when critical thinking is so much more than what most of these tests measure. There are a few narrative/writing-based ones. But remember that (to take a non-standardized one) Perry based his model of intellectual development on interviewing a tiny number of male Harvard students, and then the rest of us just followed the model and built upon it (kudos to Baxter Magolda, who hacked it by merging it with Women’s Ways of Knowing). But yeah. Not only are standardized tests problematic here. Even non-standardized measures are! Value-laden.
Remember that quantum mechanics tells us you can’t precisely measure both a particle’s position and its momentum at the same time, and that the act of measuring a thing alters it. Or something like that. You know what I am getting at, right? That to make something measurable you alter it to the point where it may no longer be recognizable.
And then there’s generalizing about extracurricular activities and writing courses. Even if you divided extracurricular activities into categories like sports, simulations, community service, debate club…even then. Each community service activity is different. In my interviews with students (and my own experience as an active person in college) I learned how one person’s role and commitment in an activity influences how much they learn. I also learned that students don’t have equal access to these experiences. And that some didn’t get much support reflecting on learning in those activities. And of course some promoted certain aspects of critical thinking but not others.
When we define something in order to measure it we vastly reduce our potential understanding of its complexity. And we make numbers that summarize stories that are so nuanced they cannot be aggregated ethically.
Gosh, I have so much more to say but I need to stop coz I just arrived at work.
Comments and pushback welcome. These aren’t my complete thoughts, and I will write more soon.
Check out the #SoNar hashtag and their presentation at #OpenEd16
Check out Paul Prinsloo, who has written a couple of times on using data in ethical ways. Here is the latest one. Also see the one before, which is beautiful.
Lawrie and Dave on narrative – I will read this eventually