Tag Archives: research

Research plan – They’re back! To what extent and in what ways do particular pedagogical dimensions, or combinations thereof, contribute to repeated learner participation in an online course over consecutive or non-consecutive runs?


Since their emergence in the mid-2000s, massive open online courses (MOOCs) have been predicated on making learning available to everyone, and at scale. Much effort has been spent analysing data generated by MOOC participants (e.g., Guo, Kim & Rubin, 2014; Savage, 2009; Wang, 2017) to determine whether video production methods, format, style, type or even duration alone have the capacity to influence student learning and engagement. While the use of instructional video by educators isn’t new to face-to-face or online learning experiences, it does serve as a critical and primary means of content delivery in a MOOC. Research across a variety of MOOCs (e.g., Engle, Mankoff & Carbrey, 2015; Hew & Cheung, 2008; Taib, Chuah & Aziz, 2017) has also been conducted into better understanding the impact of pedagogical dimensions such as cooperative learning, feedback, activities and assessments on learner participation, suggesting that there may be more to cultivating an engaging learning experience than video instruction alone. These factors may seem elementary for a single instance or one-off run of a MOOC, but what of repeat learners – those learners who repeatedly join and then continue to actively participate in the same MOOC over multiple instances? An example of this cohort would be the learners who continue to participate in Monash University’s ‘Mindfulness for Wellbeing and Peak Performance’ (MIND) and ‘Maintaining a Mindful Life’ (MINDLIFE) courses on FutureLearn.

I currently work in a central unit within Monash University responsible for the design, development and delivery of short online courses with education partner FutureLearn. MIND is one of our longest running courses and continues to enjoy a large learner cohort who actively and consistently engage with the course content, other learners and the course team. The positive learner response to MIND stimulated the release of MINDLIFE, a complementary course which has also been well received by learners. While the design, development and delivery of Monash’s other non-mindfulness courses are remarkably similar to the two mindfulness offerings, it has become evident that these courses may not attract repeat learners (over consecutive or non-consecutive runs) in a similar way. But why? The purpose of this research is to identify, explore and provide a detailed account of the extent and ways in which particular pedagogical dimensions, or combinations thereof, contribute to repeated learner participation in an online course over consecutive or non-consecutive runs.

This research has a number of potential benefits: for individual respondents, for education/teaching and learning, and for designers and developers of online courses. Respondents may find it interesting to reflect on their ongoing participation in an online course and note any elements or experiences they’ve found to be particularly important to them or that have influenced their ongoing course participation. More broadly, this research should help improve our understanding of online learners’ motivation to continue to participate in an online course over consecutive or non-consecutive runs, which could then be used to increase ongoing course participation as well as guide the design, development and delivery of alternate or expanded online course offerings.

Literature review

The following identifies key themes from literature related to pedagogical dimensions, or combinations thereof, inherent in online courses, thus providing context for my research.

The use of video

Although the use of video isn’t exclusive to MOOCs, video is their primary means of content delivery. Each MOOC is unique: educators, course content designers and producers work together to create video content that makes use of a diverse combination of video types (text, static image, image with presenter face/talking head, and fully animated) and video design elements (static slides, digital ink/scribbler/analysis, discussion/discursive) that complements the pedagogical approach. The impact of video production on learning is of great interest to educators and MOOC producers. Guo, Kim and Rubin (2014) found that videos of short duration are engaging to learners, and identified an equal level of engagement for video types that featured the ‘talking head’ of a presenter or used digital ink/scribbler/analysis to provide a more detailed explanation. Neilson (2014) and Savage (2009) support the use of short videos. To maintain learner engagement, Neilson (2014) recommended that videos combine different video types and design elements, such as a presenter face/talking head with digital ink/scribbler/analysis.

Multiple studies have investigated the role of cinematography and cinema production design/methodology in learner engagement. Wang (2017) identified camera shot style and background design, ‘down to the granularity of how the image is captured, recorded and delivered’ (Wang 2017), to be as important to learner attention and engagement as video types and video design elements. In addition, Wang, Chen and Wu (2015) observed the impact of different video types and video design elements on learning, concluding that the production cost and intensity associated with creating ‘lecture capture and picture-in-picture videos may be worthwhile for online learning from the perspectives of improved learning performance and reduced cognitive load’ (Wang, Chen & Wu 2015).
The use of video isn’t limited to pre-prepared and pre-recorded course material. Video can also be created in response to learner activity, playing a critical role in enabling educators and instructors to provide feedback to learners throughout course delivery. Henderson and Phillips (2015) reported that the affordances of video enable instructors to convey detailed and elaborate feedback, as well as encouragement and praise, more effectively than written commentary. Henderson and Phillips (2015) also reported positive learner response to video-based feedback, where learners felt they had a closer connection with the instructors or educators. Echoing Henderson and Phillips are learner reviews posted online: ‘The final feedback video each week is the jewel in their crown. It sums up the week’s thoughts and mentions some of the issues and talking points brought up in the online comments’ and ‘The weekly feedback sessions allowed the participants to feel engaged and gave a more personal feel to an online course’ (Class Central 2018). The use of video-based feedback in this capacity not only addresses any potential knowledge deficit experienced by learners and ‘makes the massive feel intimate’ (Pappano 2012), but also personalises the course in a meaningful and practical way, actively contributing to a change in learner perceptions of the course as the type where an ‘instructor is not as available because there are tens of thousands of others in the class’ (Pappano 2012).

The role of the facilitator

When thinking about why learners persist with a course or even choose to repeat the same course again, it’s critical to consider the role of instructors, course mentors or any of the multitude of names for the course team members responsible for facilitating discussions and interacting with learners. According to Hew (2014), instructor accessibility and passion are among the features identified as key to promoting learner engagement, where engagement is defined as an observable action in the course. While engagement is different from completion and retention (which, in MOOCs, are often misconstrued metrics devised by educators and platform providers to define their view of learner success, satisfaction, needs or goals), an analysis by Adamopoulos (2013) revealed that the role of the instructor had the largest effect on the likelihood of completion. Learner reviews posted online, such as ‘In addition to this there are two outstanding mentors who support the learning and help clarify the more scientific aspects of the course and make lots of useful links and resourses [sic] available to enable the independant [sic] exploration of a complex subject’ (Class Central 2018), ‘The moderators are extremely active in their support for learners’ (Class Central 2018) and ‘The mentors had valuable insights and comments and added a lot’ (Class Central 2018), echo the findings of Adamopoulos (2013) and Pilli & Admiraal (2017) in relation to learner interaction with an instructor: learner retention and satisfaction are higher when instructors are highly responsive. However, some earlier literature on MOOCs (Kop, 2011; Mackness, 2013; Milligan & Littlejohn, 2014) bemoans the absence of interaction between instructors and learners, asserting that learners may be prevented from having a quality learning experience due to their limited capacity to undertake self-directed learning.
In the past, a diminished learning experience may have been inevitable, considering the vintage of the online environments in which those learner cohorts were constrained (the anti-social edX platform and the hardcore connectivist edupunk-ness of the open web). Since the launch of FutureLearn, a platform that’s been ‘inspired by Laurillard’s Conversational Framework’ (Sánchez-Vera, Urrutia & Davis 2015), a diminished learning experience seems less likely, as the platform is predicated on a social learning experience: instructor-learner, learner-learner and learner-content interaction.

A learning continuum

According to Horrigan (2016), people undertake learning for personal and professional reasons. Personal learners often choose learning opportunities that are likely to increase knowledge and skills that benefit themselves and others. Professional learners often choose learning opportunities that are likely to maintain or improve their knowledge and skills. Irrespective of the learner’s inclination for undertaking learning, the opportunity must be flexible and permit the learner to drop in and out at their discretion, often at intervals throughout their life, much like ‘a personal learning continuum, rather than as unrelated, separate gatherings’ (Bozkurt & Keefer 2018). Communities that foster a culture of participatory learning contribute to the premise of lifelong learning, something to be part of, an idea ‘that the teaching and learning experience, the idea of imagining where you stand within in that environment is something that’s akin to the DIY punk experience’ (Groom 2016). While the course creates an environment for this type of experience, it’s an environment suited to a learner who is self-determined, attentive, can think critically, reason, and is ready to collaborate (Pegrum, 2009). The presence of instructors is once again critical: they support learners and the development of a community of learning, all the while gently guiding learners through the course. This raises the question of why learners continue to participate and persist with a course and continue to engage in its community of learning. As MOOCs are voluntary, is it possible that learners simply enjoy being part of a community, or that their goals to deepen their knowledge and skills are being met? Or is it the convenience, ease of use and flexibility afforded by the platform, or something else entirely, related to their individual needs?
Learners (Class Central, 2018) who participated in either or both of Monash’s mindfulness-related offerings reported: ‘The two courses that I have been blessed with the opportunity to do on a number of ocassions [sic] have really enhanced the quality of not just my life but those who share their lives with me in any way. I am simply a better person for the learnings that I have had through undertaking the courses’, ‘I have found the leaders, Craig and Richard, really engaging and great role models for their subject, being calm, measured and reassuring, but also inspirational and encouraging. The strength of the course is their rapport in the short videos and audios which acknowledge human frailty and make the continuing practice of mindfulness seem vital and attainable in daily life.’ and ‘I plan to take the course again when it is offered in November, so I can delve more deeply into this more authentic way of viewing the world and living my life’, which provides some insight into their ongoing participation. Furthermore, the learner sentiment expressed towards the mindfulness-related offerings is somewhat akin to the model proposed by Peltier, Drago and Schibrowsky (2003), which identified student-to-student interactions, student-instructor interactions, instructor support and mentoring, course content, course structure and information delivery technology as key components of learner-perceived quality of the online learning experience.

Research methodology

This research is situated in the postpositivist paradigm, which is predicated on the need to identify and assess the causes of outcomes (Creswell 2018). This seems to be the most appropriate for my research because, as noted by Phillips and Burbles (2000), data, evidence and rational considerations shape knowledge. For my research, knowledge will be accrued from data gathered from measurement tools (surveys) completed by respondents or from direct observation (semi-structured interviews). Creswell (2018) astutely synthesised elements from the work of Phillips and Burbles (2000), arguing that research seeks to develop relevant, true statements that can serve to explain the situation of concern or describe the causal relationships of interest. Explaining causal relationships is most critical for my research, particularly as I’ll be required to propose a possible relationship between pedagogical dimensions, or combinations thereof, that contribute to repeated learner participation. This research will make use of an explanatory sequential mixed methods design, which leverages both quantitative and qualitative research data. Creswell (2018) notes the overall intent of this design is to have the qualitative data help explain the initial quantitative results in more detail. Fittingly, the affordances of the FutureLearn platform mean that quantitative data about course participant activity is readily available, which will streamline the first phase of the two-phase design. A two-phase design has advantages that, according to Morse (1991), include straightforwardness and opportunities to explore the quantitative results in more detail.

*Based upon the ‘Visual model for mixed-methods sequential explanatory design procedures’ outlined by Ivankova, Creswell & Stick (2006).

Data collection procedures

Initially, quantitative data will be collected from datasets automatically generated from learner activity in the platform. Additional quantitative and principal qualitative data will then be collected from learners who have previously taken part in either Monash mindfulness course. They will be invited to participate in an anonymous 10-15 minute online survey via custom course notices (scheduled emails sent to learners), together with learners currently completing the course. Learners will also be invited to participate in the survey via strategically placed calls to action within course content, combined with carefully prepared ‘nudges’ included in scheduled course notices throughout course delivery. At the end of the survey, learners will be invited to take part in a semi-structured interview, scheduled around their availability. Data will be gathered from Monash-based Qualtrics surveys and from datasets (comments, weekly sentiment surveys, step activity, post-course survey data, etc.) readily available from the stats dashboard of each run of each mindfulness course on FutureLearn. Qualtrics was chosen as the online survey tool due to its organisational endorsement and support, as well as its accessibility features and responsive design, which ease survey completion for respondents who may answer using a range of devices (mobile phones, tablets and computers). This project is considered to be low risk and respondents are unlikely to experience any serious harm from participating.

The participants

Since its first run in 2015, over 254,000 learners have participated in MIND, with over 27,000 learners taking part in MINDLIFE since its inception in 2017. These learners, and any new learners who join any new run of either mindfulness course, will be invited to participate in the online survey.

The setting

FutureLearn is the platform which hosts the two mindfulness-based courses offered by Monash (MIND and MINDLIFE) and will serve as an appropriate and relevant context to engage with learners.

Data analysis

Online survey responses, FutureLearn datasets, interview transcripts and audio recordings will be imported into NVivo for thematic analysis based on identified pedagogical themes, with the provision for the creation of new themes or modification of existing themes as they arise.

Limitations of the study

One key limitation of this research is the discipline-specific nature of the course offerings. The mindfulness courses are made up of a number of pedagogical dimensions that create opportunities for learners to investigate the science and theoretical framework of mindfulness, but also to cultivate and apply the skills as a deeply personal ongoing practice. While a lifelong commitment to learning and practising a particular skill is in no way exclusive to mindfulness or these course offerings, some disciplines may not lend themselves to a similar level of commitment. These differences warrant strong caution against research findings that oversimplify recommendations or suggest a simple transposition of the course design.


Adamopoulos, P. (2013). What makes a great MOOC? An interdisciplinary analysis of student retention in online courses. In Thirty fourth international conference on information systems, Milan, 2013.

Bozkurt, A., & Keefer, J. (2018). Participatory learning culture and community formation in connectivist MOOCs. Interactive Learning Environments, 26(6), 776-788.

Creswell, J.D., & Creswell, J.W. (2018). Research design: qualitative, quantitative & mixed methods approaches, 5th edn, Sage, Melbourne.

Class Central 2018, Maintaining a Mindful Life, retrieved 20 July 2018, <https://www.class-central.com/course/futurelearn-maintaining-a-mindful-life-9078#reviews>

Class Central 2018, Mindfulness for Wellbeing and Peak Performance, retrieved 20 July 2018, <https://www.class-central.com/course/futurelearn-mindfulness-for-wellbeing-and-peak-performance-3714#reviews>

Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the first ACM conference on Learning@ scale (pp. 41-50).

Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66.

Hew, K. F. (2014). Promoting engagement in online courses: what strategies can we learn from three highly rated MOOCS. British Journal of Educational Technology (Online First). http://doi.org/10.1111/bjet.12235.

IMDB 2018, The Young Ones – Nasty, retrieved 21 July 2018, <https://www.imdb.com/title/tt0752259/>.

Kop, R. (2011). The challenges to connectivist learning on open online networks: learning experiences during a massive open online course. The International Review of Research in Open and Distance Learning, 12(3), 19-38.

Mackness, J. (2013). cMOOCs and xMOOCs-key differences. Available from: https://jennymackness.wordpress.com/2013/10/22/cmoocs-and-xmoocs-key-differences/.

Milligan, C., & Littlejohn, A. (2014). Supporting professional learning in a massive open online course. The International Review of Research in Open and Distance Learning, 15(5), 197-213.

Neilson, B. (2014). Video Production and Learner Engagement in MOOCs, retrieved 22 July 2018, <http://www.yourtrainingedge.com/video-production-and-learner-engagement-in-moocs/>

Pegrum, M. (2009). From blogs to bombs: The future of digital technologies in education. Perth,
Australia: University of Western Australia Publishing.

Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25(3), 260-276.

Horrigan, J. B. (2016). Lifelong learning and technology. Pew Research Center, retrieved 20 July 2018, <http://www.pewinternet.org/2016/03/22/lifelong-learning-and-technology/>

Phillips, D. C., & Burbles, N. C. (2000) Postpositivism and educational research. Lanham, MD: Rowman & Littlefield.

Pilli, O., & Admiraal, W. (2017). Students’ learning outcomes in massive open online courses (MOOCs): Some suggestions for course design. Journal of Higher Education, 7(1), 46–71.

Sánchez-Vera, M. del M., Leon Urrutia, M., & Davis, H. C. (2015). Challenges in the creation, development and implementation of MOOCs: Web Science course at the University of Southampton. Comunicar, 22(44), 37-43.

Pappano, L. (2012). The Year of the MOOC. The New York Times, retrieved 21 July 2018, <https://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html>

This Course Could Be Your Life, Keynote – Jim Groom 2016, YouTube, CIRT Lab, 3 March, retrieved 20 July 2018, <https://www.youtube.com/watch?v=Xk97NwetXtE>.

Wang, W., Chen, C., & Wu, C. (2015). Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. In 2015 IIAI 4th International Congress on Advanced Applied Informatics, Okayama (pp. 385-390).

Wang, P.-Y. (2017). The impact of camera shot and background design for MOOC videos on student recall and flow experience. Journal of Educational Media and Library Science, 54(3), 237-268.

Hi Rowan

You did a very good job of using the literature to support the purpose of a research project. However, I found it difficult to locate a clearly written research question within the text of the submitted paper; the research question is somewhat implied. You used research theory well to justify a mixed methods procedure, but there were elements within the design that required more information, for instance:

  • whether the online survey is going to be designed to answer open or closed questions (or a combination of both)
  • how many participants will be interviewed
  • what process will be used to select participants for interviews
  • whether descriptive and/or inferential statistics will be used to analyse quantitative data, and
  • a review of the ethical considerations.

Targeting 250,000 participants to respond to an online survey is massive and not suited to a Master’s level research project, given that it will take considerable time to gather and analyse the data. Random sampling should have been considered and discussed. The research project is of value within the field of education; however, the process needs to be narrowed down and clearly articulated.
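The random-sampling approach the assessor suggests could be as simple as drawing a fixed-size random subset of the learner pool to invite. A minimal sketch of that idea follows; the learner IDs, sample size and seed are all hypothetical, not part of the actual study design:

```python
import random

def sample_invitees(learner_ids, n, seed=42):
    """Draw a simple random sample of n learners to invite to the survey."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible/auditable
    return rng.sample(learner_ids, n)

# Hypothetical pool standing in for the ~250,000 MIND/MINDLIFE learners
pool = [f"learner-{i}" for i in range(250_000)]
invitees = sample_invitees(pool, 500)
print(len(invitees), len(set(invitees)))  # 500 unique learners
```

Seeding the generator means the sampling frame can be re-derived later, which helps when documenting the selection process for ethics review.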


I’m interested in the teaching and learning, as well as the art, applications of this technology. What’s interesting about this exercise is the wasted opportunity to celebrate and showcase respondents’ input into the data gathering and research process. Metro and Monash could have actively showcased the exercise by representing the number of passengers/users on the platform in a creative way, both personally and publicly: on passengers’ devices, on large screens and through speakers. Doing this may mitigate some of the possible resistance from passengers who only find out about their contribution to the research after the fact. As it stands, they see a small sign that offers them the option of opting out by turning off their Wi-Fi (thereby disabling their own connectivity for work or study), which isn’t really fair.


Who is conducting this trial?

This trial is a joint initiative between Metro, Public Transport Victoria (PTV) and Monash University.

What is this trial about?

Metro, PTV and Monash University are conducting research to gain real time data of passenger numbers on platforms at Richmond station and on board trains travelling from Richmond across the network.

This is about using technology to provide better information to improve the services we provide to customers.

What will the trial do?

Information will be collected on how people are using Richmond Station by counting the number of Wi-Fi enabled devices on the platforms and trains.

As a result of this collection of data, Metro will be able to further analyse how it can improve the customer experience by:

collecting data to better inform future network service planning
improving information available at stations and the allocation of customer service staff
identifying crowd movements on and around platforms
providing customers with a better overall service

When will the trial take place?

The trial will commence on the 17th of February 2017 and run until the 30th of June 2017.

Where will the trial be conducted?

The trial will focus only on passenger flow on platforms 7, 8, 9 and 10, plus the concourse at Richmond Station, and on board four trains.

All areas where this technology is active will be clearly marked to advise customers.

How does it work?

Wi-Fi routers will be installed on platforms 7, 8, 9 and 10, plus the concourse at Richmond Station, and on four trains. These devices will be able to count the number of active Wi-Fi devices in the vicinity of these platforms.

How can I opt out?

All you need to do is to turn off the Wi-Fi on your personal device and you will not be included in this trial.

Can you access my personal information from my electronic devices?

No. Personal information is never traced or tracked.

Your device’s unique identification number (MAC address) is put through two levels of encoding. This ensures your personal information cannot be traced or tracked.
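The advisory doesn’t say what those ‘two levels of encoding’ actually are. One common way such schemes are implemented is two rounds of salted one-way hashing, so the same device always maps to the same token (allowing counting) while the original MAC address can’t practically be recovered. A minimal sketch of that approach, purely illustrative; the salts, hash choice and function name are assumptions, not Metro’s actual method:

```python
import hashlib

def anonymise_mac(mac, salt1=b"stage-one-salt", salt2=b"stage-two-salt"):
    """Two-stage one-way encoding of a MAC address (illustrative only).

    Each stage is a salted SHA-256 hash. The result is stable per device,
    so unique devices can be counted, but the hash cannot be reversed to
    reveal the MAC address itself.
    """
    first = hashlib.sha256(salt1 + mac.encode()).hexdigest()
    return hashlib.sha256(salt2 + first.encode()).hexdigest()

# Three sightings of two devices collapse to two anonymous tokens
sightings = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:01"]
tokens = {anonymise_mac(m) for m in sightings}
print(len(tokens))  # 2 distinct devices despite three sightings
```

Discarding the salts at the end of the trial would make even a brute-force re-identification of MAC addresses impractical.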

How will I know what trains I’m being tracked on?

All trains where this technology is active will be clearly marked with posters to advise customers.

Will I start being registered as soon as I step on one of the four trains, or only when we approach Richmond Station?

The technology counts the number of active Wi-Fi devices on board the trains regardless of their location on the network. However, during analysis, only trains that travel through Richmond Station will be considered.

Who has access to this information?

The number of devices will be the only information collected. Your personal information is never traced or tracked and will remain completely anonymous.

The raw data is held only by Monash University and will be deleted 90 days after finalisation of the trial. The statistical information will be used to gauge the accuracy of this technology and will be shared with Metro and PTV.

How can I get more information?

For more information you can contact the PTV call centre: 1800 800 007

Passenger advisory in context.


Detail of the exceptionally small notice advising passengers that they’re part of a trial.