Periscoping to support a MOOC

Working with University of Southampton Digichamp Hannah Watts and FutureLearn Digital Marketing MOOC educator Dr Lisa Harris, I helped run what was probably the first video broadcast using Periscope to support an online course in the UK. The Digital Marketing MOOC (Massive Open Online Course) asks learners to try out new social tools and think about how they may (or may not) work in a learning or business context. So, to demonstrate what this involves, Lisa decided to give this relatively new social video broadcasting app a try.

Periscope allows you to watch live videos from your mobile device and interact with the presenters in real time, either directly within the app or via Twitter. I’d had a go with the app a few times over the summer and found that it worked well for connecting with a reasonably large audience. We’d also seen Periscope used by Inger Mewburn on the edX Surviving your PhD MOOC and in the BBC Outside Source broadcasts, and decided it was time to take the plunge into a more planned approach to this new form of social broadcasting.

Using Periscope is very straightforward; you just download the app to your mobile device, sign in to Twitter, log in to Periscope and then start broadcasting. But if you have an expectant audience and a message to deliver, you can’t leave much to chance. The plan was for Lisa to discuss questions from the MOOC and from the live Twitter feed with facilitator Chris Phethelan at a prearranged time (15:00 GMT, 5 November 2015). We’d had network problems with previous attempts, so we had an additional camera on stand-by to ensure we had something for our audience to watch if the broadcast failed.

Periscope holding screen

With a crew of two (me supervising the broadcast, and Hannah noting comments as they appeared and passing on questions) we used an iPhone 5s as the broadcast camera, set up in horizontal mode (Periscope broadcasts in vertical video, but corrects this on playback). In order to let our audience know where to find the broadcast (and with the iPhone pointing at a ‘holding screen’), we hit the ‘start broadcasting’ button 15 minutes before the discussion was due to begin. This automatically created a tweet on the Digital Marketing MOOC Twitter account containing a link to the broadcast, which we copied and posted on the MOOC’s comment forum.

About 30 seconds before the start of the discussion I started recording on the standby camera, and used QuickTime to screen-record the Periscope browser window. At 3pm the holding screen was removed from in front of the camera to reveal Lisa and Chris ready to start. Within seconds the sound was turned on and the discussion could begin.

During the broadcast Lisa and Chris discussed comments from the previous week on the MOOC and were also able to answer questions posted on Twitter as they came in. Altogether we had over 90 viewers and a high number of interactions during transmission, plus some very positive feedback.

You may wonder why we went to such great lengths to record the broadcast. Firstly, Periscope broadcasts only stay online for 24 hours, so we needed a copy to put on YouTube for those who missed it. Also, while the iPhone records the video, the quality is quite poor, and it doesn’t record the questions, comments and other feedback that are visible in the Periscope broadcast. So we needed to record the browser window off screen at high resolution (on a MacBook Pro with Retina screen) to ensure we had a copy that could be used later in the course, or possibly to support later iterations. Finally, apologies for the jerkiness of the video; although we were on a very high speed network, this seems to be how Periscope currently works.

My 9 month PhD Poster

My 9 month PhD poster / Tim O’Riordan ©2014 / Creative Commons BY-NC-ND licence

A few months ago I reached a milestone in my PhD by passing my 9 month viva, and last week I was reminded (along with the rest of the lab) that my old poster was “looking as retro as a set of Alexis Carrington‘s shoulder pads” (to quote Prof Les Carr). So I set to work, downloaded a trial version of Adobe Photoshop, and got designing.

Essentially I’ve retained the style of my previous poster and added some new words, scatter plots and logos to reflect my progress over the past few months. My supervisors love it, and in less than a week it’s had an outing at the LACE SoLAR Flare 2015 and at JP Rangaswami’s Web Science Institute Distinguished Lecture.

What are my key findings?

Building on my earlier learning analytics work, which used a single approach to rate comments associated with learning objects on a Massive Open Online Course (MOOC) in an attempt to identify ‘attention to learning’, I undertook further content analysis. The main idea was to use three highly cited pedagogically-based methods (Bloom’s Taxonomy, SOLO Taxonomy, and Community of Inquiry (CoI)), in addition to the less well-known DiAL-e method (which I had used in an earlier study), to see if there was any correlation between them, to test intra-rater reliability, and to see how these methods squared up against typical measures of online learning engagement.

I discovered that my intra-rater reliability was high, as were correlations between methods. That is, all methods of rating learners’ comments produced very similar results, with Bloom and CoI producing the best results of the four methods. Correlations with other measures (sentiment, words per sentence, and ‘likes’) confirmed my earlier work: language used in comments appears to provide a good indication of depth of learning, and people ‘like’ online comments for many reasons, not necessarily for the depth of learning demonstrated by the comment maker.
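For readers curious about the mechanics, here is a minimal sketch of how checks like these are typically computed. The ratings below are invented for illustration, and this is not my actual analysis code:

```python
# A minimal sketch of the reliability and agreement checks described above.
# All ratings are invented; in the actual study, comments were coded using
# Bloom's Taxonomy, SOLO, CoI and DiAL-e.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores: the same comments coded twice by one rater (for
# intra-rater reliability) and under two different frameworks.
bloom_pass1 = [1, 3, 2, 4, 2, 1, 3, 4]
bloom_pass2 = [1, 3, 2, 3, 2, 1, 3, 4]
solo        = [1, 4, 2, 4, 3, 1, 3, 4]

# Intra-rater reliability: agreement between two coding passes by the same rater.
kappa = cohen_kappa_score(bloom_pass1, bloom_pass2)

# Inter-method agreement: rank correlation between two frameworks' ratings.
rho, p = spearmanr(bloom_pass1, solo)

print(f"Cohen's kappa (intra-rater): {kappa:.2f}")
print(f"Spearman rho (Bloom vs SOLO): {rho:.2f} (p = {p:.3f})")
```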

So, I’m about half way through my PhD and still have a lot of work to do. The next stage involves employing some willing research assistants to rate many more comments, derived from many more MOOCs, than I am able to do alone. The aim is to collect enough data to train machine learning algorithms to rate comments automatically.
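As a rough indication of what that automation might look like (a sketch only, with invented comments and labels; this is not the project’s actual model), a standard text-classification baseline pairs TF-IDF features with a linear classifier:

```python
# A minimal sketch of training a model to rate comments automatically.
# Comments and labels are invented; real training data would be the
# hand-coded MOOC comments produced by the research assistants.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "I hadn't considered that - how would this model apply to small firms?",
    "Great video, thanks!",
    "Comparing this week's framework with last week's, the key difference is...",
    "Nice post.",
]
labels = [1, 0, 1, 0]  # hypothetical pedagogical-value labels from human raters

# Pipeline: turn raw text into TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Rate a new, unseen comment.
print(model.predict(["How does this relate to the SOLO taxonomy?"]))
```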

Why is this important?

Making education and training more widely available is vital for human development, and the Web has a significant part to play in delivering these opportunities. Running a successful online learning programme (e.g. a MOOC) should involve managing a great deal of learner interaction: answering questions, making suggestions, and generally guiding learners along their paths. But coping effectively with high levels of engagement is time intensive and involves the attention of highly qualified (and expensive) teachers and educational technologists. My hope is that through my research an automated means of showing how well, and to what extent, learners are attending to learning can be developed, making a useful contribution to managing online teaching and learning.

3 stars and a wish for ALTC 2015

Talking about content analysis #altc

Yesterday I attended my first Association for Learning Technology conference (ALTC), which this year was held at the University of Manchester. As the object of my research is to develop a real and relevant approach to automatically measuring and visualising learning activity online, it is essential that, as well as being grounded in pedagogic theory, the approach makes sense at a practical level to its users. So it’s important that I get out of my research lab, share my findings and connect with users, that is: learners, teachers, administrators, and on this occasion, learning technologists. ALTC is arguably the biggest, most connected learning technology conference in Europe, if not the world, so having my proposal accepted and being invited to give an extended 30-minute presentation, with the possibility of being selected for publication in ALT’s journal (Research in Learning Technology), was a huge privilege.

One of the key features of ALT’s Extended Presentation format is that at least half of the time should be taken up with debate and interaction with the audience (no ‘death by PowerPoint’!), and as this year’s conference theme was ‘shaping the future together’ I set about producing a highly participative presentation. I shared my slides beforehand with the 20 or so delegates who had indicated they would be joining the presentation (using ALTC’s excellent website), and I set up and tested (on my long-suffering colleagues) a set of questions using the Socrative audience response system. ALTC’s ‘Guidelines for Presenters’ also called for a visual approach that eschewed bullet points, so I spent a considerable amount of time scouring Flickr Commons for usable, Creative Commons licensed images to illustrate my talk. Sadly my research fund could only stretch to attending one day of the conference, but I was determined to make the most of the few hours I had.

Three stars

As I mentioned, ALTC is a big conference, with what looked to me like over 2,000 delegates from UK educational institutions, but also with a global presence, both in person and online. This is a highly knowledgeable and engaged audience. Because they work at supporting teaching and learning through the use of technology day after day, they have an exceptional understanding of the practicalities of integrating advanced tools within the curriculum. They ask questions, challenge assumptions, and can back up their arguments with evidence. In short, it’s the best audience any learning technology researcher could stand in front of.

As well as talking the talk, ALTC walks the walk. They actually use technology to enhance the conference experience. In addition to providing each presentation with a dedicated web presence, they live stream all their keynotes and invited speakers, they add value through a number of applications (including this Flickr reader), and they encourage communication between delegates and the rest of the world with the #altc hashtag (which on day one was trending on Twitter). This Google spreadsheet set up by Martin Hawksey itemises the tens of thousands of tweets generated by the event.

Between sessions, I got to have a very pleasant one-to-one chat with keynote speaker, learning technology guru, and generally all-round nice guy, Steve Wheeler, primarily about how intense live blogging can be (I think I might give it a try one day).

A wish

I really wish I could have had more than 22 minutes to give my presentation and answer questions. I think the organisers were pushing their luck a bit by programming two 30-minute extended presentations into a 60-minute slot. Once you factor in a crashed PC and further delays, it proved to be impossible to give the presentation I had intended, which was a great shame. However, there was a keen interest from the 40-plus audience in what I had to say. The Socrative audience response system worked well, provided some interesting feedback, and highlighted a key point of my talk: that visualised feedback affects behaviour. Despite having to take ‘an early bath’, I had the opportunity to discuss my work afterwards, was asked some constructive and challenging questions, and made some good connections. My hope now is that I get the go-ahead to publish in the RLT journal, but we’ll have to see.

ICALT 2015 paper accepted

Hualien city, by Luis Jou García ©2010, CC BY-NC-SA 2.0

I’ve just had the good news that a paper based on my summer project, Can you tell if they’re learning? (co-written with my supervisors), has been accepted as a short paper at ICALT 2015. The conference takes place at the start of July in Hualien City, Taiwan, about 10,000 km from home. Here’s the abstract:

The proliferation of Web-based objects designed for learning makes finding and evaluating online resources a considerable hurdle to overcome. While established Learning Analytics methods use Web interaction data as proxies for learner engagement, there is uncertainty regarding the appropriateness of these measures. In this paper we propose a method for evaluating pedagogical activity in Web-based comments using a pedagogical framework, and present a preliminary study using this approach that assigns a Pedagogical Value (PV) to each comment. This has value as it categorises discussion in terms of focus on pedagogical activity rather than interaction on the Web. Using the DiAL-e Framework, we code a selection of comments associated with learning objects within a Massive Open Online Course and test the resulting Pedagogical Value against established Language Analysis and Learning Analytics methods. Results show that PV is distinct from typical interactional measures; there are negative or insignificant correlations with established Learning Analytics methods, but strong correlations with linguistic indicators of pedagogical activity. This suggests that adopting pedagogical frameworks may produce more accurate measures of pedagogical activity than interaction analysis, and that linguistic rather than interaction analysis has the potential to automatically identify learning behaviour.
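To make the comparison described in the abstract concrete, a sketch of the kind of test involved might look like the following (all values are invented; the real study used DiAL-e-coded comments from a MOOC):

```python
# A minimal sketch of testing Pedagogical Value (PV) against an interaction
# measure ('likes') and a simple linguistic measure (words per sentence).
# All values are invented for illustration.
from scipy.stats import spearmanr

pv                 = [0, 2, 1, 3, 0, 2, 3, 1]      # hand-coded pedagogical value
likes              = [5, 1, 4, 0, 3, 2, 1, 6]      # Web interaction measure
words_per_sentence = [4, 12, 8, 15, 5, 11, 14, 7]  # linguistic indicator

for name, measure in [("likes", likes), ("words per sentence", words_per_sentence)]:
    rho, p = spearmanr(pv, measure)
    print(f"PV vs {name}: rho = {rho:.2f}, p = {p:.3f}")
```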

Obviously I’m looking forward to interacting with key academics in the field of technology in education, but I also hear that the surfing’s quite good. What’s not to like?