PhD Research Update

Banna Beach at Sunset / Andrew Bennett © 2009 / CC BY 2.0

It’s been quite a while since I posted, for which I partly blame writing up the second stage of my research for publication and for my 18-month upgrade, plus taking on a part-time role as Web Science Trust project support officer.

I handed in my upgrade report a few weeks ago and had a viva to defend it last week. The 18-month viva is not as intense as the final grilling you get at the end of the PhD, but it provides, as the University of Southampton website says, “a great opportunity to talk about your work in-depth with experts in your field, who have read, and paid great attention to, your work”. This is true, but I also found it quite unnerving, as it made me realise I still had a long way to go before I could have confidence in my thesis. Despite what I thought was a fairly lacklustre performance, I somehow managed to pass, and am now in the final stretch working towards my final PhD hand-in next year. My final piece of work includes a fairly complex and challenging Machine Learning experiment and a series of interviews with MOOC instructors. More on this later.

Going back to my last experiment: this involved a large-scale content analysis of MOOC discussion forum comments, which I wrote about in a previous post. Between last November and January this year I recruited and trained a group of eight research assistants to rate comments in MOOC discussion forums according to two content analysis methods. Overall, 1,500 comments were rated, and correlations of various strengths were established between the analysis methods and with linguistic indicators of critical thinking. The outputs have provided a useful basis for the next stage – developing a method to automate comment rating that approximates human rating.
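As a toy illustration of the kind of correlation computation involved – and since Weka, which I’m using for the automation stage, is Java-based – here’s a minimal sketch in Java. The scores below are made up for illustration; this is not my actual data or analysis pipeline:

public class RatingCorrelation {

    // Pearson correlation coefficient between two equal-length score arrays.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n;
        meanY /= n;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy;
            varX += dx * dx;
            varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Illustrative only: the scores two content analysis methods
        // might assign to the same seven comments.
        double[] methodA = {1, 3, 2, 4, 5, 2, 3};
        double[] methodB = {2, 3, 2, 5, 4, 1, 3};
        System.out.printf("r = %.3f%n", pearson(methodA, methodB));
    }
}

(In practice, rank-based measures like Spearman’s rho are often preferred for ordinal rating scales, but the principle is the same.)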

A paper on the initial stages of my research, submitted to Research in Learning Technology, has been peer reviewed and accepted, and I am awaiting the outcome of deliberations on the changes I’ve made prior to publication later this year. A paper I had hoped to get into the Learning Analytics special edition of Transactions on Learning Technologies was rejected (2 to 1 against publication – can’t win ’em all!), but the reviewers suggested I re-submit following changes to the text. I’ve just re-written the abstract, which goes like this:

Typically, learners’ progression within Computer-Supported Collaborative Learning (CSCL) environments is measured via analysis and interpretation of quantitative web interaction measures (e.g. counting the number of logins, mouse clicks, and accessed resources). However, the usefulness of these ‘proxies for learning’ is questioned, as they only depict a narrow spectrum of behaviour and do not facilitate the qualitative evaluation of critical reflection and dialogue – an essential component of collaborative learning. Research indicates that pedagogical content analysis methods have value in measuring critical discourse in small-scale, formal, online learning environments, but little research has been carried out on high-volume, informal, Massive Open Online Course (MOOC) forums. The challenge in this setting is to develop valid and reliable indicators that operate successfully at scale. In this paper we test two established pedagogical content analysis methods in a large-scale review of comment data randomly selected from a number of MOOCs. Pedagogical Scores (PS) are derived from ratings applied to comments by a group of coders, and correlated with linguistic and interaction indicators. Results show that the content analysis methods are reliable and very strongly correlated with each other, suggesting that their specific format is not significant. In addition, the methods are strongly associated with some relevant linguistic indicators of higher levels of learning (e.g. word count and occurrence of first-person pronouns), and have weaker correlations with other linguistic and interaction metrics (e.g. sentiment, ‘likes’, words per sentence, long words). This suggests promise for further research into the development of content analysis methods better suited to informal MOOC forum settings, and into the practical application of linguistic proxies for learning – specifically, using Machine Learning techniques to automatically approximate human coding and provide realistic feedback to instructors, learners and learning designers.

Just need to re-do the rest now…

I’ve also completed two online introductory courses on using the Weka machine learning workbench and am currently waiting for the Advanced course to start. I’m also attending the Learning Analytics and Knowledge Conference (LAK16) in Edinburgh next week, where I’m very much looking forward to taking a workshop on data mining (using Weka), as well as attending loads of presentations and engaging in some serious networking.
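If you’re wondering what driving Weka from code (rather than the workbench GUI) looks like, here’s a minimal sketch using its Java API to cross-validate a classifier on a file of per-comment features. The file name and attributes are hypothetical – this is not my actual experimental setup:

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CommentRatingModel {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file of per-comment linguistic features
        // (word count, first-person pronouns, sentiment, ...) with the
        // human-assigned rating as a nominal class attribute.
        Instances data = new DataSource("comments.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute = rating

        // A decision tree as a stand-in; any Weka classifier slots in here.
        J48 tree = new J48();

        // 10-fold cross-validation estimates how well the model
        // approximates the human coders' ratings.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}

The hard part, of course, is not the model but the feature extraction – which is exactly what the correlation results from the last experiment are meant to inform.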

Also, I’m very much looking forward to the summer (hence the photo at the top of the page).

 
