RhizoResearch - some thoughts brought on by Sunlight and Shade.

It is a bit of an odd thing to admit, but ever since I started formal school again in order to pursue a doctorate, the amount of pleasure reading I do has gone down.  Now, this is to be expected; time resources need to be allocated differently in order to meet the rigorous demands of a doctoral program.  That said, my pleasure reading was research articles anyway, so it's kind of hard to put down your candy (research articles about MOOCs and online learning) in order to have your balanced meal of research in other fields that you aren't necessarily aware of.  This is a good thing, but the amount of research on MOOCs keeps piling up in my dissertation drawer at work.  Summer project!

Anyway, I digress! I saw that Frances Bell and Jenny Mackness had a recent article in Open Praxis about Rhizo14.  I actually did with it what I do with all MOOC articles these days: download the PDF, archive it, print it out, and add it to my "to read" pile. Normally that would have been the end of that, but two things happened. First, this article seemed to generate a lot of chatter in the Rhizo14 community on Facebook, which is still going strong despite the course being over for close to a year now.  Second, all this EDDE 802 work is making me think even more deeply about the research articles I read - as though thinking about them "deeply" was not deep enough (I guess we're going from scuba range to submarine range...). Then, of course, there was a comment from Frances on her blog that she would not engage directly with the Rhizo14 Facebook group about this work (see here), which really raised an eyebrow. So, I picked up the article during my morning commutes and, over a few days, finally read it (it isn't long; I just had other things on my plate).

From reading this article I have some reactions.  In the past I've been part of MOOC communities that have been studied.  I think that for FSLT12 I was surveyed by the people who offered the MOOC so that they could produce the final report.  The same was true for OLDSMOOC, if I remember correctly.  However, both of those MOOCs, despite the connectedness felt at the time, didn't feel as connected as Rhizo14. With the exception of Rebecca, I did not actually know the other participants all that well prior to Rhizo14. So Rhizo14 was a little different, and this probably has an effect on how I interpret the research findings, but I really tried to put my EDDE 802 cap on and look at the findings strictly from a researcher's point of view.

The article frames Rhizo14 as an experimental open course and posits that there are "light" and "dark" sides to participating in an experimental MOOC.  The article seems to be written in an "on the one hand, on the other hand" manner.  For instance, examine the following quote:
There were plenty of learning moments and evidence of joy and creativity, but we also experienced and observed some tensions, clashes and painful interactions, where participants seemed to expect different things from the course and were sometimes disappointed by the actions and behaviours of other participants  
The way this is written perhaps sets up a dichotomy, a good side and a bad side, but is this really what happened?  Can this only be interpreted as one or the other?  Were there some tensions in Rhizo14?  Well, I can think of at least one.  But painful interactions?  Painful to whom, and in what way?  Is "painful" used in the sense of awkward, and thus in a more literary style? Or is it used in a more concrete sense, as in causing harm to someone?  This being a research article, I tended to take "painful" to mean causing harm, and thus this seemed a bit like an exaggeration to me, given that I have seen most Rhizo14 discussions over the past year.

Other issues that came up, beyond the tone, are methodological.  The literature review is, for me, incomplete - or at the very least not totally accurate if you consider all of the cMOOC literature.  For instance, Rhizo14 is described as:
Rhizo14 also differed from prior cMOOCs in that it was “home-grown.” Dave Cormier ran the MOOC in his own time, often convening the weekly Hangouts in the evening from his own home. Despite this, his intention was that there would be no centre to the course; he would be one of the participants
If we look at the history of MOOCs, I would say that everything prior to Coursera was home-grown.  CCK was home-grown, despite its affiliation with the University of Manitoba; PLENK was home-grown; the various MobiMOOC incarnations were home-grown; and so on. I honestly didn't think that Rhizo14 differed a lot in its setup compared to other, previous cMOOCs that I had been part of over the years.  The execution was certainly different, but the setup didn't seem different to me. I think that a review of the MOOC literature to date could have painted a broader picture, but I am willing to accept that there were space constraints in this article, and things needed to be cut in order to make it to print (I did get a call for papers for Open Praxis, and I recall seeing a 5,000-word limit, which is crazy for a qualitative paper!).

Another issue that came up is the air-time that different views got.  For instance, in the article the authors write that:
The distributed nature of the spaces, the mix of public / private, and the number of survey respondents (47) combine to remind us that we must be missing some important perspectives. What does encourage us is that despite this partial view, our decision to allow for confidential and electively anonymous responses to our surveys, has enabled a light to be cast on what people are thinking, and not saying, in public and semi-public forums. This research will make a contribution to the hidden MOOC experience
By my count, as of today, there are 432 Rhizo14 participants on P2PU and 321 in the Facebook group.  It is hard to tell how many people were actually in this MOOC; I only know the visible participants, those who were active on P2PU or Facebook.  Assuming that the P2PU number is the canonical one, 47 respondents represent only around 11% of the people who signed up for the MOOC.  Furthermore, the researchers do not discuss whether, and how, the interview and free-form survey data were coded to determine the overall themes and the positive and negative feelings toward the course, the conveners, and fellow participants. Equal air-time is given to those who have positive things to say and those who have negative things to say; however, we do not know quantitatively how many people were in each camp.  Were there more in the negative/dissatisfied camp? More in the positive?  Or were they equally distributed across this self-selected sample?  Other things should have been explored more, such as people feeling isolated despite their "experienced MOOCer" status.  Who deems these individuals experienced?  Experienced in cMOOCs? In xMOOCs? Both?  And how is that measured?
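
For what it's worth, here is the back-of-the-envelope arithmetic behind that figure. A trivial sketch; the head counts are my own tallies of the visible sign-ups as of this writing, not numbers reported in the article:

```python
# Rough response-rate arithmetic. Assumption: the visible P2PU
# sign-up count is the canonical denominator; both counts are my
# own observations, not figures from the article.
p2pu_signups = 432
facebook_members = 321
survey_respondents = 47

print(f"vs. P2PU sign-ups:  {survey_respondents / p2pu_signups:.1%}")      # ~10.9%
print(f"vs. Facebook group: {survey_respondents / facebook_members:.1%}")  # ~14.6%
```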

I don't know if this was the intent of the authors, or if it was an unfortunate side-effect of cutting and selective editing in order to meet a word limit, but the article has the tone of a piece written with "moral panic" as the intended outcome.  The selected quotes, which included profanity, and the language of experimentation and participants-as-lab-rats have the effect of evoking negative feelings toward this MOOC, the people who convened it, and, to some extent, the participants who were active in the course.

Finally, since this post is getting long, there are two areas that I think need addressing, both of which concern validity.  In an article like this, how does one tackle the issue of validity?  One of the ways I've seen validity addressed in qualitative research, in 802, is member checking: having the people interviewed and sampled read the findings and then discuss whether or not those findings resonate with what they experienced and what they reported.  I don't know if those 47 respondents got a chance to vet this interpretation of the survey results.  I may be one of those who took the survey (I don't remember, but chances are high that I did), but I have seen no indication that I was asked to weigh in on the researchers' interpretation of the results. From the reactions I've seen from people in Rhizo14, it seems that this paper is not indicative of their experiences, or of how they observed interactions in the course, so to some extent the paper seems to lack validity. The other odd thing, which raises a bit of a red flag for me, is Frances's disengagement from the Rhizo14 community on this matter.  It seems that if you study a community, you have an ethical obligation to discuss and debate your findings with them on their turf, so to speak, and not only on your own.

The other way to address validity is to have other researchers review the anonymized survey data in order to cross-validate the findings of this research.  I know that at least a couple of people have asked to see the data (one request I thought was in jest, but at least one seemed serious), only to be turned down due to privacy concerns.  If the data are anonymous, then there ought not to be privacy concerns, and this, in turn, makes things seem a bit suspicious.  I am not sure what the ethical implications are.  If I run a survey and conduct a set of interviews for my dissertation (or any research project, for that matter), and other researchers want to see the data in order to validate my findings, should I not oblige? (Especially if those asking are on my review/exam committee!) As a researcher I should anonymize the data, and while I wouldn't provide the cypher to the data (which would render them eponymous again), I think that providing anonymized data for analysis is indeed something that falls within ethical guidelines.
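
To make concrete what I mean by sharing the data but not the cypher, here is a minimal sketch; the respondent records and file names are entirely hypothetical, just to illustrate keeping the re-identification key separate from the shareable records:

```python
# Hypothetical pseudonymization sketch: replace each respondent's
# name with a random ID, keep the name-to-ID mapping (the "cypher")
# in a separate file that is never shared, and release only the
# pseudonymized records.
import json
import secrets

responses = [
    {"name": "Respondent A", "comment": "The course felt welcoming."},
    {"name": "Respondent B", "comment": "I felt lost in week two."},
]

cypher = {}  # name -> pseudonym; stays with the researcher
shareable = []
for record in responses:
    pseudonym = cypher.setdefault(record["name"], f"P{secrets.token_hex(4)}")
    shareable.append({"id": pseudonym, "comment": record["comment"]})

with open("cypher_private.json", "w") as f:    # never released
    json.dump(cypher, f, indent=2)
with open("data_shareable.json", "w") as f:    # what other researchers see
    json.dump(shareable, f, indent=2)
```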

At the end of the day, this article has proven to be an interesting case study for my research methods course.

Thoughts? Views? Opinions?

Comments

I just linked this blog to my blogroll, and after that I noticed that you are dealing with the same theme as I did. I linked you because I consider you a research-oriented guy, and I am following MOOC research, or online learning generally.

I wonder why the number of participants is assessed so much, because the results are qualitative. Not all research is statistically based.

Another comment about Frances not being interested in discussing in the FB group. In my eyes she did it very patiently, but the level of the attacks against this research was of such inferior quality that I was very astonished. For instance, someone claimed that J and F tell lies. That is a question of law in my country.

It is not possible to discuss research with people who don't know what research is but are sure about being against it.

I hope my English can be understood, pls tell if u can't read it

Well-written, AK. I don't agree with all you write, but agree with most of it. Is it ok if I focus on what I don't agree with? Keep in mind I did not follow the Facebook discussion at the time, so I don't know what people said to Frances and what she said back.

Re: validity, there are many ways of doing it. Member checking is done, but usually only for the parts where you are citing or quoting a person. It makes absolutely no sense for someone to ask me if I agree with what YOU said. Does that make sense?

I have also asked the question of numbers, but I agree with Heli that it is not necessary for qualitative research. It would be helpful to have a rough idea, e.g., whether they are closer to 4 people or 20. I would be totally comfortable with a paper that said, "even though only 4 people had negative experiences, we would like to highlight them, because many people who have negative experiences would never even respond to a survey like this."

Re: open data, I don't think there is any obligation on their part to share the anonymized data. I think the IRB would require them not to, actually, and to destroy that data in a few years' time, even.

Thank you for posting, Heli :)


I am not sure I would classify the discussion that happened on the Rhizo group as attacks, but text-based media are not perfect for communication. A lot of nuance and meaning is lost in text, and that leaves a lot of room for interpretation, so I don't doubt that some of what was said could be interpreted as an attack.


Validity is not a clear-cut thing. From my perspective, it is possible in both qualitative and quantitative research to have a smaller sample size if the sampling design accounts for it (one of my textbook's chapters goes into eye-bleeding detail about this). I think that the sample used in this research paper was a convenience sample (or maybe it started as a convenience sample and turned into a snowball sample), which doesn't necessarily capture all views. This has the potential to paint an image that may not in fact be true. A small part of it may be, but not necessarily the whole.


As I said, being asked to write only 5,000 words for a qualitative paper that explores learner experiences has the potential to really gut the substance of the paper and leave only a small husk. It is possible that there was material to provide a thicker (as Maha wrote above) account of people's experiences, but in its current, published version it's not there, and as a reader I was left with more questions than answers.
Maha, I agree with what you write, to some extent :) I agree that it makes no sense to ask me whether what you were quoted as saying or feeling is correct. I have no way to know what's in another participant's head.


Related to this, when it comes to communication, people don't necessarily "get" other people: even though we try to disambiguate when we have discussions with other interlocutors, our messages can get garbled in the coding/decoding process.


However, I disagree that the validity of the article as a whole can't be gauged in some way. Perhaps it was the way I wrote things near the end (I felt a bit rushed to get this out before homework started again :) ), but there ought to be some way of checking the results against some rubric. With access to the data, I was thinking, other researchers could provide their own interpretations and in this way establish some level of inter-rater reliability. As for the smallish sample size, it's not necessarily a problem if proportions are provided - as you wrote, X out of Y had this sort of sentiment. That way you are not minimizing the effect the course had on someone, but at least you know the proportion. At the moment it reads (to me) as though there were a 50-50 split between positive and negative views of the MOOC, and in the MOOC.
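
To make the inter-rater idea concrete, here is a rough sketch of what I have in mind: two researchers independently code the same anonymized responses as positive or negative, and we measure how much their interpretations agree beyond chance. Cohen's kappa is just one such measure, and the codings below are entirely made up:

```python
# Hypothetical example: two raters independently code eight survey
# responses as "pos" or "neg"; Cohen's kappa measures how much
# their agreement exceeds what chance alone would produce.
from collections import Counter

rater_a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
rater_b = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, from each rater's marginal label frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[lbl] / n) * (counts_b[lbl] / n)
               for lbl in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
# observed agreement: 0.75, Cohen's kappa: 0.47
```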


As far as open data goes, I am conflicted - I truly am. Part of me wants to conduct an analysis of this data, similar, one might argue, to the data that Coursera probably collected in the #massiveteaching MOOC - I think POD had mentioned my curiosity on his blog, and how it seemed incongruous with the other things I've said on my blog about privacy, experimentation, and such (a fair critique, btw). On the other hand, yes, this type of data has a lifecycle and should be destroyed at some point, but how is that determined, and by whom? I think that in a strictly quantitative ("natural sciences") experiment it's easier to replicate an experiment until some generalization can be proven. In educational research, generalization isn't always possible (for one thing), and even if we wanted to re-tread some experiment, we can't: learner populations aren't static, and as such this was a sample at a moment in time. That is to say, how does one gauge the accuracy of representation of a research article in such a case? I don't ask as a member of Rhizo14 - regardless of this paper, we will continue to MOOC along as fellow MOOC learners; I ask as an EdD student who will have to defend his research design in a year's time, and as such I want to make my own learning impactful. This real-world case, with people I "know," makes it a good place to have this discussion on methods, reliability, validity, and ethics.




just some thoughts :)

I agree particularly with the issue of giving equal weight to both sides without quantifying them. In many ways this is how researchers show their bias. I think it is good to hear both sides; it's just that equal space doesn't mean equal effect, but in this particular article it comes across as such.
