An interesting discussion came up today at the open access event at AUC, when I asked the speakers about quality and sustainability in crowdsourced initiatives.
One of the speakers was following the Twitter feed when I asked this, and knew I had mentioned that crowdsourcing has worked well for open source software and for things like Wikipedia… of course, this is often one of the first responses people give: it has worked for open source and Wikipedia, so there is no reason to think it won’t work for other things.
Another response (by Seif Abou Zeid of Tahrir Academy) concerned the importance of recognizing that “traditional” notions of quality would not be suitable for the alternative approaches to openness in education we are talking about here. He also felt that crowdsourcing required a leap of faith in the capacity of the community…
The other two panelists (Ahmed Hussein and Tarek Amr) disagreed with the “leap of faith” argument – Ahmed cited the open source movement, while Tarek suggested that proof needs to rest either on logic or on empirical research; crowdsourcing does not necessarily seem to work logically, but empirical evidence can be gathered to demonstrate its value.
There was a bit of a disagreement here, as Seif seemed to insist that we needed new approaches; that the leap-of-faith aspect of his argument was important; that we needed to re-think our “criteria” for “quality”; and that those in power attempt to maintain control by controlling the discourse about quality.
Speaking to Seif and Ahmed later, I said I agreed with both aspects of the approach to the issue: both that perceptions of quality need to be different for non-traditional approaches to education, and also that empirical evidence can help convince others of that quality (whatever the criteria turn out to be, fixed or flexible).
This, of course, prompted me to cite my favorite article of all time: Beyond Rigor, which suggests that:
“The voices that decry collective, playful learning, often do so from the soapbox of rigor: How can this sort of wild learning — that doesn’t aim at specific objectives, that focuses on dialogue and creativity instead of content mastery — ever pass muster as meaningful academic work?
In truth, it cannot. But not because the product of playful learning isn’t meaningful, but because our notion of academic rigor is irrelevant to that product. We must move past our traditional definition of rigorous academic work, and recognize that a learning experience or a pedagogical methodology can be both playful and also have the qualities of the best academic work, if not the reagents of traditional rigor. We hear “rigor,” and the word feels vague and unnerving; or worse, exclusionary. The work we’re describing here is expansive and not exacting — experimental and not insoluble — the moment before (and even anathema to) understanding. This is work where excellence is measured by exception.”
And even what counts as empirical evidence – those processes and standards also need to change.
But there is one important point I want to make here: the “success” of the open source movement cannot wholly be used to support arguments about the success of crowdsourcing in different contexts, because crowdsourcing is contextually dependent, isn’t it? Crowdsourcing depends on the community and the process, and on the success of that combination.
Let’s take the example of Wikipedia. There are many fans of Rafael Nadal (the tennis player) who will update his Wikipedia page with his latest news and correct wrong information about him when they see it. They visit his page often enough to do so. However, there are probably many more obscure players who may not have a strong following; their pages may not be updated as frequently. There have been instances where certain Wikipedia pages held erroneous information for long periods of time. This is not an argument against crowdsourcing, or against the “quality” of Wikipedia, but an argument for the contextualized nature of crowdsourcing: it works better in some contexts than others. Its sustainability, too, depends on context: the passion, commitment, and self-motivation of the people you draw into the crowdsourcing.
I just recently “crowdsourced” an interview with Dave Cormier and it worked… but it might not have: if I did not have enough followers on Twitter or my blog; if Dave were not such a popular character; if the people who know Dave and me didn’t have interesting questions, or were too shy to ask them publicly. More than that, it would have come to nothing if the journal hadn’t agreed to do this experiment, or if Dave hadn’t.
So this is my point. There is both a “leap of faith” here and a need to “experiment”, to see if it works, and to learn from that. Every time will be a bit of a gamble. It’s just a gamble that is really worth taking. What do others think?