Erosion of thick legitimacy by Coursera

In my previous post, I raised the possibility that the thick legitimacy of academics would be eroded by mixing it with the thin legitimacy of Coursera (or other MOOC providers). This builds on a useful distinction introduced by Jay Rosen in the wake of the Facebook emotions experiment.

Is this a theoretical risk? No, unfortunately. I can offer concrete cases tied to my Coursera MOOC, Massive Teaching: New skills required. These should explain why that course went south.

What happened in my course is certainly complex, and this post is not meant to be a full account of it. It discusses my reasoning about ethics, and focuses on the complexity of the interaction between thick and thin legitimacies; it leaves aside much of what happened in my interactions with the students themselves. I finished my previous post with a video from my course outlining how different the startup and scientific processes are.

[Video: the startup process vs. the scientific process]

For my own teaching, I had adopted a model that I will call Agile Teaching. In fact, I explained what I meant by this in a separate video for the course:

[Video: Agile Teaching]

You can see several concepts there that reflect startup culture and the Agile methodology: failure and risk are acceptable, rapid iteration, responsiveness. This is similar to what Derek Bruff calls Agile Teaching (I think he originated the term). In my case, I included delivery and production as part of the concept. This actually worked very well within my course, and the technique can certainly be applied to many topics.

However, since the topic was partly the business model of MOOC providers, and I was teaching so responsively, this technique was bound to hit some snags. During the preparation of the course, many interactions with the Coursera staff left me uncomfortable about their willingness to be transparent. This is bad practice but understandable, as I knew I was pushing them too fast and they were not used to that. I found other solutions for teaching purposes, including the a-posteriori-analysis-of-already-collected-data trick described in my previous post. When the Facebook experiment made the news during the course, I went back to re-read the section on education research in the Coursera Terms of Use:

Records of your participation in Online Courses may be used for researching online education. In the interests of this research, you may be exposed to slight variations in the course materials that will not substantially alter your learning experience. All research findings will be reported at the aggregate level and will not expose your personal identity.

Online Education Research in Coursera Terms of Use

The abrupt change in the course came from my initial misunderstanding of this passage. The first time I read it, months before the course started, I understood it as referring to research performed by my peers, with thick legitimacy. But the Facebook experiment made it clear to me that I was mistaken: the passage is also, and primarily, used to justify research and product improvement with thin legitimacy, which can be very thin indeed. Coursera is also more dangerous in that sense than Facebook, since it actively attempts to cloak itself in thick legitimacy and blend the two. To give a few examples:

  • there are university administrators sitting on its University Advisory Board, but that power is, as far as I know, untested;
  • it relies on AAU membership to select US partners;
  • it relies on Shanghai rankings to select non-US partners.

Between June 29th and July 3rd, I followed the same train of thought that Jay Rosen followed when writing his July 3rd Washington Post article, and I ultimately saw three ethical dangers tied to this blending of thick and thin legitimacies:

  • abuses by academics, through the a-posteriori-analysis-of-already-collected-data trick (was what I myself did in the first week ethical? What about what I was going to do in the second week?);
  • abuses against the learners by Coursera, just as with the Facebook experiment (do A/B tests of teaching interventions, robots on Coursera forums, and a non-transparent list of other experiments require internal ethical approval? Where is Coursera going with the right to replay student comments at will?);
  • abuses against professors or their profession by Coursera (by devaluing the thicker legitimacy; the parallel with journalism and photography is strong).

I resolved that ultimately, under those conditions, I could not compromise my own thick legitimacy with the thinner legitimacy of Coursera. This was probably exacerbated by the topic of my course: by encouraging discussion about MOOCs in the course, and encouraging students to share their wishes and wants in the domain of MOOCs, I was in fact actively contributing to this blending of thick and thin legitimacies.

My response was to pull content from the course, while not abandoning my students (I still tried to explain this on the forums, within legal limits). This prompted a reaction from Coursera, which gave me a deadline to reinstate the content. Once given that deadline, I attempted to raise ethical concerns within my own university (that is, to address the issue at the level of thick legitimacy) in order to resolve the situation. Unfortunately, through different channels the thinner legitimacy prevailed: Coursera did not respect its own deadline, and I was prevented from explaining myself to my students. Ironically, at almost exactly the same time, Rosen was publishing this on the Washington Post site:

When the study’s methods became controversial in the public square, in the press, and in online conversation, that should be a moment for the university to shine. Our strengths include: Academic freedom. Knowing what you’re talking about. “Yes, we thought of that.” To the press or to anyone else who has questions about it, we should be happy to explain our research, including ethical issues as they arise. As academics, we pride ourselves on thinking these things through. And we have procedures! If you experiment on human beings you have to follow them. Academic research is not some free-for-all. It has to meet certain standards. When those standards become controversial in the public square we are happy to explain them. Because we know what we’re doing—

Except when we don’t. Reached by the Atlantic magazine, one of the academic researchers on the Facebook study chose silence rather than “let me explain our research design.”

I didn't choose silence, and this was not research with thick legitimacy: it was teaching with thick legitimacy. But Coursera itself is doing research with thin legitimacy, now or in the future, which I could not allow without more guarantees, guarantees they had already refused four or five days before the Facebook experiment made the news. Perhaps that stance is exaggerated, but I would welcome a discussion on this any time.

In any case, when the first blog post about the course came out, this forced silence (and the PR statements issued by the thin-legitimacy outfit) led to suspicion that I was myself performing experiments, violating my own thick legitimacy by not following IRB protocol. The result? The author suggested the following in his last paragraph:

From a research point of view this is FASCINATING. I would love to get a hold of the discussion forum data for both discourse and corpus linguistics analyses. On the other hand, I fear that coursera, and all involved parties, are handling this one wrong again. We are now entering the third and final week of this MOOC on MOOCs. Let's see how this pans out.

In other words, the author simultaneously complains about the situation and wishes he could get his hands on the dataset, so that he could do research with thick legitimacy. He will of course never have access to that data, since Coursera has owned it from day one. Actually, there are many things that are not very inspiring about the technical architecture of the platform itself, so it might very well be that once an instructor deletes a forum, its content is unrecoverable. That is why I deleted forums when I could: to protect students, despite not having the opportunity to explain exactly why I did it (because of legal risk). Nothing I have seen since suggests that Coursera was able to retrieve that content.

The third week of the course was supposed to contain a final, peer-feedback quiz. That quiz would have asked students generic questions about MOOCs and what they had learned in the course. The same author discusses that assignment here:

With regard to this "Assignment" I feel rather cynical on all fronts. On the one hand it feels like this is just another data-gathering stunt. So Paul ran his "experiment" and now he is collecting data to see what the learners say. The learners that didn't un-enroll from the course that is. On the other hand, even if this is an earnest attempt to have learners introspect on this whole process, the attempt falls really flat on its face because this data is tainted. The questions don't address anything that happened in the course. It feels like these were written with the original learning objectives in mind, and as such it reduces this final exercise into a farce. It is a farce that does not respect the learners, and it is a farce of the educational process.

It's all of this, actually, and more. The assignment was merely a draft, meant to validate with Coursera staff the idea of gathering data in this way. But it was also an earnest attempt to have learners introspect on this whole process, written with the original learning objectives in mind. I intended to tweak this draft later (maybe in a later iteration) into a more formal process, so as to be able to offer, at scale, personalised degrees reflecting the learning that actually happened in the course. Once everything else happened, the data was indeed tainted, and the quiz did not address anything that had happened in the course. The final exercise was reduced to a farce that did not respect the learners. By that time, though, I was no longer involved, and I had objected multiple times to running the course and this assignment without me.

After the assignment was completed by the students, that same author and some commenters on his blog managed to hack into Coursera to get access to the data of all the students in the course. They were surprised at the diversity of responses, which reflected their own biases in understanding the complexity of the situation. On top of that, their interest in this data validated my own concerns: this data, beyond being useful for teaching, was valuable for research on the learner experience, yet it had been collected without IRB approval (it was used only for teaching). My own thick legitimacy enabled its collection, yet I had no option available to protect it from the eyes of Coursera, whose legitimacy is of a different kind and who might have a genuine interest in that data.