There was an interesting discussion that came up today at the open access event at AUC, when I asked the speakers about quality and sustainability in crowdsourced initiatives.
One of the speakers was following the Twitter feed when I asked, and knows I mentioned that crowdsourcing has worked well for open source software and for things like Wikipedia… of course, this is often one of the first responses people give: it has worked for open source and Wikipedia, so there is no reason to think it won’t work for other things.
Another response (by Seif Abou Zeid of Tahrir Academy) related to the importance of considering that “traditional” notions of quality would not be suitable for the alternative approaches to openness in education we are talking about here. He also felt that crowdsourcing required a leap of faith in the capacity of the community…
The other two panelists (Ahmed Hussein and Tarek Amr) disagreed with the “leap of faith” argument – Ahmed cited the open source movement; Tarek suggested that proofs need to be based either on logic or on empirical research; crowdsourcing does not necessarily seem to work logically, but empirical evidence can be gathered to prove its value.
There was a bit of a disagreement here, as Seif seemed to insist that we needed new approaches, that the leap of faith aspect of his argument was important; that we needed to re-think our “criteria” for “quality” and that those in power attempt to maintain control by controlling the discourse about quality, etc.
Speaking to Seif and Ahmed later, I said I agreed with both aspects of the approach to the issue: both that perceptions of quality need to be different for non-traditional approaches to education, and that empirical evidence can help convince others of that quality (whatever the criteria turn out to be, fixed or flexible).
This, of course, prompted me to cite my favorite article of all time: Beyond Rigor, which suggests that:
“The voices that decry collective, playful learning, often do so from the soapbox of rigor: How can this sort of wild learning — that doesn’t aim at specific objectives, that focuses on dialogue and creativity instead of content mastery — ever pass muster as meaningful academic work?
In truth, it cannot. But not because the product of playful learning isn’t meaningful, but because our notion of academic rigor is irrelevant to that product. We must move past our traditional definition of rigorous academic work, and recognize that a learning experience or a pedagogical methodology can be both playful and also have the qualities of the best academic work, if not the reagents of traditional rigor. We hear “rigor,” and the word feels vague and unnerving; or worse, exclusionary. The work we’re describing here is expansive and not exacting — experimental and not insoluble — the moment before (and even anathema to) understanding. This is work where excellence is measured by exception.”
And even regarding what counts as empirical evidence, those processes and standards also need to change.
But there is one important point I wanted to make here: the “success” of the open source movement cannot wholly be used to support arguments about the success of crowdsourcing in different contexts, because crowdsourcing is contextually dependent, isn’t it? Crowdsourcing depends on the community and the process, and on the success of that combination.
Let’s take the example of Wikipedia. There are many fans of Rafael Nadal (the tennis player) who will update his Wikipedia page with his latest news and correct wrong information about him when they see it. They visit his page often enough to do so. However, there are many more players who are obscure and may not have a strong following; their pages may not be updated as frequently. There have been instances where certain Wikipedia pages held erroneous information for long periods of time. This is not an argument against crowdsourcing, or against the “quality” of Wikipedia, but an argument for the contextualized nature of crowdsourcing: it works better in some contexts than others. Its sustainability, too, depends on context – on the passion, commitment, and self-motivation of the people you draw into the crowdsourcing.
I just recently “crowdsourced” an interview with Dave Cormier and it worked… but it might not have: if I did not have enough followers on Twitter or my blog; if Dave were not such a popular character; if the people who know Dave and me didn’t have interesting questions, or if they were too shy to ask them publicly. More than that, it would have been nothing if the journal hadn’t agreed to do this experiment, and if Dave hadn’t.
So this is my point. There is both a “leap of faith” here, and a need to “experiment” and see if it works, and learn from that. But every time will be a bit of a gamble. It’s just a gamble that is really worth taking. What do others think?
4 thoughts on “Reflections on Crowdsourcing after first day of Open Access Event”
There is one confession that I have to make here. When I was responding to your question, I was partly playing the devil’s advocate. Let me explain. As I mentioned in one of the tweets, I know a group of people who are currently working on the creation of an Open Access Journal. Their initial idea was to crowdsource the peer-review part – well, maybe I’d better just call it review in that case, as there is no “peer” there. Anyhow, we had a discussion similar to the one we are having here; however, in that discussion, I had less faith in how crowdsourcing would work for the review process. Apparently, they too were a bit suspicious, so in the end they decided on a mixture of crowdsourcing and traditional peer review.

I mention this to say that we have to differentiate between the different facets of the use of crowdsourcing in science. For example, when it comes to using crowdsourcing platforms such as Tahrir Academy to create online lectures and coursework, the crowdsourcing approach may seem more logical than, say, using the wisdom of the crowd to do journal reviews. How about doing the actual research – can this be crowdsourced? I am not sure how this would work, yet I still hold to what I said during the session: let our hypothesis be that crowdsourcing is going to work in scientific research, journal reviews, or any other part of academia. Then test our hypothesis, and decide based on the empirical results. I know Wikipedia is not your favourite example, but before it started, no one had any trust that a crowdsourced encyclopedia could be of any use. Yet now, in some fields, the quality of the articles in Wikipedia is much better than in Encyclopedia Britannica.
Hi Tarek, thanks for posting your comment. I agree with the general idea of “let’s try, and then test empirically” – that is actually the gist of what I am saying.
But we should also question tradition. Traditional peer review has lots of problems. We shouldn’t assume it is better just because it has always been done that way. For example, I have tried open peer review and loved it (though Ramy finds it problematic). I would be willing to try crowdsourced peer review if it is OPEN, so you know who from the crowd said what. It has been done before, I believe!
I even saw someone crowdsource his “references” for applying for a job 🙂
There are some great crowds out there, and we’re lucky when we belong to a “generous” one 🙂
Let me add something to the Wikipedia example. It has changed our view of what a good encyclopedia can or should be, because we often say it is more up to date than Britannica. So to give the sports example again, it would almost always have updated sports stats, and if not, the reader can add or correct them. That has value. It’s a different view of what encyclopedias can be.