My 9 month PhD Poster

My 9 month PhD Poster / Tim O’Riordan © 2014 / Creative Commons BY-NC-ND License.

A few months ago I reached a milestone in my PhD by passing my 9-month viva, and last week I was reminded (along with the rest of the lab) that my old poster was “looking as retro as a set of Alexis Carrington‘s shoulder pads” (to quote Prof Les Carr). So I set to work: I downloaded a trial version of Adobe Photoshop and got designing.

Essentially I’ve retained the style of my previous poster and added some new words, scatter plots and logos to reflect my progress over the past few months. My supervisors love it, and in less than a week it’s had an outing at the LACE SoLAR Flare 2015 and at JP Rangaswami’s Web Science Institute Distinguished Lecture.

What are my key findings?

Building on my earlier learning analytics work, which used a single approach to rate comments associated with learning objects on a Massive Open Online Course (MOOC) in an attempt to identify ‘attention to learning’, I undertook further content analysis. The main idea was to use three highly cited, pedagogically based methods (Bloom’s Taxonomy, the SOLO Taxonomy, and the Community of Inquiry (CoI) framework), in addition to the less well-known DiAL-e method that I had used in an earlier study, to see whether there was any correlation between them, to test intra-rater reliability, and to see how these methods squared up against typical measures of online learning engagement.
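
For anyone who wants the mechanics, checks of this kind can be run in a few lines of Python. The sketch below uses made-up ratings and two standard statistics for ordinal rating data (Cohen’s kappa for intra-rater agreement, Spearman’s rho for between-method correlation) purely as an illustration, not a record of the exact tests I ran.

```python
# Illustrative only: hypothetical ratings of the same 8 comments
# on a 1-5 'depth of learning' scale.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

bloom_pass1 = [3, 4, 2, 5, 1, 3, 4, 2]   # first rating session (Bloom)
bloom_pass2 = [3, 4, 2, 4, 1, 3, 4, 2]   # same comments re-rated later by the same rater
coi         = [3, 5, 2, 5, 1, 2, 4, 2]   # same comments rated with CoI

# Intra-rater reliability: agreement between the two Bloom passes.
kappa = cohen_kappa_score(bloom_pass1, bloom_pass2)

# Between-method correlation: do Bloom and CoI rank comments similarly?
rho, p = spearmanr(bloom_pass1, coi)

print(f"Cohen's kappa (intra-rater): {kappa:.2f}")
print(f"Spearman's rho (Bloom vs CoI): {rho:.2f} (p={p:.3f})")
```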

I discovered that my intra-rater reliability was high, as were the correlations between methods. That is, all methods of rating learners’ comments produced very similar results – with Bloom and CoI performing best of the four. Correlations with other measures (sentiment, words per sentence, and ‘likes’) confirmed my earlier work: the language used in comments appears to provide a good indication of depth of learning, and people ‘like’ online comments for many reasons, not necessarily for the depth of learning demonstrated by the comment maker.
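
As an aside, ‘words per sentence’ is about as simple as a measure gets. A naive version, again just for illustration rather than a description of my actual feature extraction, might look like this:

```python
# A rough sketch of one of the surface measures mentioned above.
import re

def words_per_sentence(comment: str) -> float:
    """Average words per sentence, using a naive sentence split."""
    sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(sentences)

comment = ("I found the example clarified the concept. "
           "It made me reconsider my earlier assumptions about feedback loops.")
print(f"{words_per_sentence(comment):.1f} words per sentence")
```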

So, I’m about halfway through my PhD and still have a lot of work to do. The next stage involves employing some willing research assistants to rate many more comments, derived from many more MOOCs, than I am able to rate on my own. The aim is to collect enough data to train machine learning algorithms to rate comments automatically.
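
To make that concrete, a baseline version of such a system might look like the sketch below. Nothing here is settled: the training comments are invented, and TF-IDF with logistic regression is simply a common text-classification starting point, not my chosen method.

```python
# A minimal sketch of the planned supervised setup: train a classifier
# on human-rated comments, then predict ratings for unseen ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: comments with human-assigned depth labels
# (e.g. 1 = surface engagement, 2 = deeper, 3 = deepest).
comments = [
    "Great video, thanks!",
    "I hadn't connected these two theories before - does one imply the other?",
    "This contradicts last week's reading; here is how I'd reconcile them.",
    "Nice!",
]
labels = [1, 2, 3, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

print(model.predict(["Interesting - how would this apply to peer assessment?"]))
```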

Why is this important?

Making education and training more widely available is vital for human development, and the Web has a significant part to play in delivering these opportunities. Running a successful online learning programme (e.g. a MOOC) should involve managing a great deal of learner interaction – answering questions, making suggestions, and generally guiding learners along their paths. But coping effectively with high levels of engagement is time intensive and involves the attention of highly qualified (and expensive) teachers and educational technologists. My hope is that my research will lead to an automated means of showing how well, and to what extent, learners are attending to learning, and that this will make a useful contribution to managing online teaching and learning.