Category Archives: elearning

Seminar – Micro-credentials within the AQF: Who’s the winner here?

On Tuesday March 12, I attended the inaugural Victorian Edtech Seminar – ‘Micro-credentials within the AQF: Who’s the winner here?’ – hosted by Study Melbourne and EduGrowth.

The seminar provided an opportunity for interested people to hear from an erudite panel of representatives from across the education and edtech sectors on a topic of growing importance in education and technology: micro-credentials.

The focus of the inaugural seminar was micro-credentials within the Australian Qualifications Framework (AQF), the national policy for regulated qualifications in Australian education and training.

Why do we need to talk about this?

The Australian Government is reviewing the AQF, and is currently exploring whether the AQF should include shorter-form credentials (i.e. micro-credentials).

Shorter-form credentials could enable easy recognition of credentials across sectors and providers, and could be included in or linked to full qualifications.

  • However, what impact would this have on the edtech and education sector?
  • Would it stifle innovation or bring a badge of quality?
  • Would it provide an additional review stream for education providers or lessen the value of education products?

Who was there

The speakers were Prof. Liz Johnson (DVC Education, Deakin University), Amanda Pickrell (Director International Education, Victorian Government) and David Linke (Managing Director, EduGrowth), while the panel was made up of Dr Asheley Jones (Head, Professional Practice and Industry Partnerships, DeakinCo), Anthony Morris (CEO, Cahoot Learning), Rohan Chandler (VP Partnerships, Go1, formerly SEEK) and Andy Giddy (Executive Director Business Innovation, La Trobe University).

Key takeaways

My key takeaways from the seminar were:

  • The big question for the seminar ‘Is there a winner?’ wasn’t answered.
  • The market and providers don’t necessarily want AQF governance, but it’s coming (so be a part of it).
  • Governance can be considered a good thing, particularly for demonstrating academic integrity/rigour and as a competitive point of difference from other ‘options’.
  • Branding and customer perception are important, unsurprisingly.
  • Universities should reconsider their preoccupation with degree-only micro-credential offerings and be receptive to also offering micro-credentials for CPD (that demonstrate currency/capacity) in relevant industries.
  • If they’re not already, research-focused universities should consider how micro-credentials (short online courses) can support ‘research translation’ and getting their research out into the public domain.
  • Industry/the market will determine and inform micro-credentials, and collaborating with them is critical (Microsoft Certified Professional and Cisco certifications are examples of industry doing it themselves).

Literature review: They always come: An assessment of the pedagogical dimensions, or combinations thereof, that could contribute to repeated participation by learners in an online course over multiple instances


Since their emergence in the mid-2000s, massive open online courses (MOOCs) have been predicated on making learning available to everyone, and at scale. While the use of instructional video by educators isn’t new to face-to-face or online learning experiences, it does serve as a critical and primary means of content delivery in a MOOC.
Much effort has been spent analysing data generated by MOOC participants (e.g., Guo, Kim & Rubin, 2014; Savage, 2009; Wang, 2017) to determine if video production methods, format, style, type or even duration has the capacity to solely influence student learning and engagement. Research across a variety of MOOCs (e.g., Engle, Mankoff & Carbrey, 2015; Hew & Cheung, 2008; Taib, Chuah & Aziz, 2017) has also been conducted into better understanding the impact of pedagogical dimensions such as cooperative learning, feedback, activities and assessments on learner participation, suggesting that there may be more to cultivating an engaging learning experience than video instruction alone.
These factors may seem elementary for a single instance or one-off run of a MOOC, but what of repeat learners – those learners who repeatedly join and then continue to actively participate in the same MOOC over multiple instances? An example of this cohort would be the learners who continue to participate in Monash University’s ‘Mindfulness for Wellbeing and Peak Performance’ and ‘Maintaining a Mindful Life’ courses on FutureLearn.
This review aims to assess which pedagogical dimensions, or combinations thereof, could contribute to repeated participation by learners in an online course over multiple instances, in order to support continued course participation and guide course design for alternate and expanded online offerings.

Review of literature

‘Have we got a video? Yes, we’ve got a video!’ (Young Ones – Nasty 1986)

Although the use of video isn’t exclusive to MOOCs, video is the primary means of content delivery. Each MOOC is unique, where educators, course content designers and producers work together to create video content that makes use of a diverse combination of video types (text, static image, image and presenter face/talking head, and fully animated) and video design elements (static slides, digital ink/scribbler/analysis, discussion/discursive) that complements the pedagogical approach. The impact of video production on learning is of great interest to educators and MOOC producers. Guo, Kim and Rubin (2014) found that videos of short duration are engaging to learners. An equal level of engagement was identified for video types that featured the ‘talking head’ of a presenter or used digital ink/scribbler/analysis to provide a more detailed explanation. Neilson (2014) and Savage (2009) support the use of short videos. To maintain learner engagement, Neilson (2014) recommended that videos should combine different video types and design elements, such as a presenter face/talking head with digital ink/scribbler/analysis.

Multiple studies have investigated the role of cinematography and cinema production design/methodology in learner engagement. In his research, Wang (2017) identified camera shot style and background design ‘down to the granularity of how the image is captured, recorded and delivered’ (Wang 2017) to be equally as important as video types and video design elements for learner attention and engagement. In addition, Wang, Chen and Wu (2015) observed the impact of different video types and video design elements on learning. They concluded that the production cost and intensity associated with creating ‘lecture capture and picture-in-picture videos may be worthwhile for online learning from the perspectives of improved learning performance and reduced cognitive load’ (Wang, Chen and Wu 2015).
The use of video isn’t reduced only to pre-prepared and pre-recorded content that contributes to the course material. Video can also be created in response to learner activity and play a critical role in enabling educators and course instructors to provide feedback to learners throughout the delivery of the course. Henderson and Phillips (2015) reported that the affordances of video enable instructors to convey detailed and elaborate feedback, as well as encouragement and praise, to learners more so than written commentary. Henderson and Phillips (2015) also reported a positive learner response to video-based feedback, where learners felt they had a closer connection with the instructors or educators. Echoing the reports from Henderson and Phillips are learner reviews posted online: ‘The final feedback video each week is the jewel in their crown. It sums up the week’s thoughts and mentions some of the issues and talking points brought up in the online comments’, ‘Weekly feedback on YouTube responds to learners questions and comments as they have arisen that week’ and ‘The weekly feedback sessions allowed the participants to feel engaged and gave a more personal feel to an online course’ (Class Central 2018). The use of video-based feedback in this capacity not only addresses any potential knowledge deficit experienced by learners and ‘makes the massive feel intimate’ (Pappano 2018), but also personalises the course in a meaningful and practical way, while actively contributing to a change in learner perceptions of the course as the type where an ‘instructor is not as available because there are tens of thousands of others in the class’ (Pappano 2012).

‘Everybody online. Looking good.’ (Aliens 1986)

When thinking about why learners persist with a course, or even choose to repeat the same course again, it’s critical to consider the role of instructors, course mentors or any of the multitude of names for the course team members who are responsible for facilitating discussions and interacting with learners on the platform. According to Hew (2014), instructor accessibility and passion are some of the features identified as key for promoting learner engagement, where engagement is defined as an observable action in the course. While engagement is different from completion and retention (which, in MOOCs, are often misconstrued metrics devised by educators and platform providers for defining their view of learner success, satisfaction, needs or goals), an analysis by Adamopoulos (2013) revealed that the role of the instructor had the largest possible effect on the likelihood of completion. Learner reviews posted online – ‘In addition to this there are two outstanding mentors who support the learning and help clarify the more scientific aspects of the course and make lots of useful links and resourses [sic] available to enable the independant [sic] exploration of a complex subject’ (Class Central 2018), ‘The moderators are extremely active in their support for learners’ (Class Central 2018) and ‘The mentors had valuable insights and comments and added a lot’ (Class Central 2018) – echo the findings of Adamopoulos (2013) and Pilli & Admiraal (2017) in relation to learner interaction with an instructor, where learner retention and satisfaction are higher when instructors are highly responsive. However, some earlier literature on MOOCs (Kop, 2011; Mackness, 2013; Milligan & Littlejohn, 2014) bemoans the absence of interaction between instructors and learners, asserting that learners may be prevented from having a quality learning experience due to their limited capacity to undertake self-directed learning.
In the past, a diminished learning experience may have been inevitable, considering the vintage of the online environments in which learner cohorts were constrained (the anti-social edX platform and the hardcore connectivist edupunk-ness of the open web). Since the launch of FutureLearn, a platform that’s been ‘inspired by Laurillard’s Conversational Framework’ (Sánchez-Vera, Leon Urrutia & Davis 2015), a diminished learning experience seems less likely, as the platform is predicated on a social learning experience: instructor-learner, learner-learner and learner-content.

‘This course could be your life’ (Groom 2016)

According to Horrigan (2016), people undertake learning for personal and professional reasons. Personal learners often choose learning opportunities that are likely to increase knowledge and skills that benefit themselves and others. Professional learners often choose learning opportunities that are likely to maintain or improve their knowledge and skills. Irrespective of the learner’s inclination for undertaking learning, the opportunity must be flexible and permit the learner to drop in and out at their discretion, often at intervals throughout their life, much like ‘a personal learning continuum, rather than as unrelated, separate gatherings’ (Bozkurt & Keefer 2018). Communities that foster a culture of participatory learning contribute to the premise of lifelong learning, something to be part of, an idea ‘that the teaching and learning experience, the idea of imagining where you stand within that environment is something that’s akin to the DIY punk experience‘ (Groom 2016). While the course creates an environment for this type of experience, it’s an environment suited to a learner who is self-determined, attentive, can think critically, reason, and is ready to collaborate (Pegrum, 2009). The presence of instructors is once again critical: they support learners and the development of a community of learning, all the while gently guiding learners through the course. This raises the question as to why learners continue to participate and persist with a course and continue to engage in the community of learning. As MOOCs are voluntary, is it possible that learners simply enjoy being part of a community, or that their goals to deepen their knowledge and skills are being met? Or is it the convenience, ease of use and flexibility afforded by the platform, or possibly something else entirely that’s related to their individual needs?
Learners (Class Central, 2018) who participated in either or both of the mindfulness-related offerings by Monash reported: ‘The two courses that I have been blessed with the opportunity to do on a number of ocassions [sic] have really enhanced the quality of not just my life but those who share their lives with me in any way. I am simply a better person for the learnings that I have had through undertaking the courses’, ‘I have found the leaders, Craig and Richard, really engaging and great role models for their subject, being calm, measured and reassuring, but also inspirational and encouraging. The strength of the course is their rapport in the short videos and audios which acknowledge human frailty and make the continuing practice of mindfulness seem vital and attainable in daily life.’ and ‘I plan to take the course again when it is offered in November, so I can delve more deeply into this more authentic way of viewing the world and living my life’, which goes some way to providing insight into their ongoing participation. Furthermore, the learner sentiment expressed towards the mindfulness-related offerings is somewhat akin to the model proposed by Peltier, Drago and Schibrowsky (2003), which identified student-to-student interactions, student-instructor interactions, instructor support and mentoring, as well as course content, course structure and information delivery technology, as key components of learners’ perceived quality of the online learning experience.


This review aimed to assess which pedagogical dimensions, or combinations thereof, could contribute to repeated participation by learners in an online course over multiple instances, in order to support continued course participation and guide course design for alternate and expanded online offerings. While findings are inconclusive in regard to the particular group of learners who continue to participate in Monash’s ‘Mindfulness for Wellbeing and Peak Performance’ and ‘Maintaining a Mindful Life’ courses, video production that carefully considers video type and video design is critical for learning materials that contribute to learner engagement. The role of instructors, course mentors or course team members responsible for interacting with learners was also found to be critical to learner engagement, completion and a quality learning experience. The affordances of a digital online space that’s flexible enough to permit learners to drop in and out at their choosing, at intervals throughout their life, are a key factor in creating a community that fosters a culture of participatory learning, and one that learners may be encouraged to continue to be a part of. While anecdotes posted online do celebrate rich and impactful experiences that resonate with repeat learners, literature that articulates, or attempts to articulate, an explanatory model for that particular cohort is scarce. It is proposed that future studies examine the repeat learner cohort in detail, allowing for more specific identification and understanding of the pedagogical dimensions, or combinations thereof, that inform their ongoing commitment to a course, which could then be further analysed and potentially applied to courses with different subject matter contexts.


Adamopoulos, P. (2013). What makes a great MOOC? An interdisciplinary analysis of student retention in online courses. In Thirty-Fourth International Conference on Information Systems, Milan, 2013.
Aliens 1986, film, Brandywine Productions, USA
Bozkurt, A., & Keefer, J. (2018). Participatory learning culture and community formation in connectivist MOOCs. Interactive Learning Environments, 26(6), 776-788.
Wang, W., Chen, C., & Wu, C. (2015). Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. 2015 IIAI 4th International Congress on Advanced Applied Informatics, Okayama, pp. 385-390.
Class Central 2018, Maintaining a Mindful Life, retrieved 20 July 2018, <>
Class Central 2018, Mindfulness for Wellbeing and Peak Performance, retrieved 20 July 2018, <>
Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the first ACM conference on Learning@ scale (pp. 41-50).
Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66.
Hew, K. F. (2014). Promoting engagement in online courses: what strategies can we learn from three highly rated MOOCs. British Journal of Educational Technology (Online First).
IMDB 2018, The Young Ones – Nasty, retrieved 21 July 2018, <>.
Kop, R. (2011). The challenges to connectivist learning on open online networks: learning experiences during a massive open online course. The International Review of Research in Open and Distance Learning, 12(3), 19-38.
Mackness, J. (2013). cMOOCs and xMOOCs-key differences. Available from:
Milligan, C., & Littlejohn, A. (2014). Supporting professional learning in a massive open online course. The International Review of Research in Open and Distance Learning, 15(5), 197-213.
Neilson, B. (2014). Video Production and Learner Engagement in MOOCs, retrieved 22 July 2018, <>
Pegrum, M. (2009). From blogs to bombs: The future of digital technologies in education. Perth, Australia: University of Western Australia Publishing.
Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25(3), 260-276.
Pew Research Center 2016, Lifelong Learning and Technology, retrieved 20 July 2018, <>
Pilli, O., & Admiraal, W. (2017). Students’ learning outcomes in massive open online courses (MOOCs): Some suggestions for course design. Journal of Higher Education, 7(1), 46–71.
Sánchez-Vera, M. d. M., Leon Urrutia, M., & Davis, H. C. (2015). Challenges in the creation, development and implementation of MOOCs: Web Science course at the University of Southampton. Comunicar, 22(44), 37-43.
The New York Times 2012, The Year of the MOOC, retrieved 21 July 2018, <>
This Course Could Be Your Life, Keynote – Jim Groom 2016, YouTube, CIRT Lab, 3 March, retrieved 20 July 2018, <>.
Wang, P.-Y. (2017). The impact of camera shot and background design for MOOC videos on student recall and flow experience. Journal of Educational Media and Library Science, 54(3), 237-268.


I’m interested in the teaching and learning as well as the art applications of this technology. What’s interesting about this as an exercise is the wasted opportunity to celebrate and showcase respondents’ input into the data gathering and research process. Metro and Monash could have actively showcased the exercise by representing the number of passengers/users on the platform in a creative way, personally and publicly – on their devices, on large screens and through speakers. Doing this may mitigate some of the possible resistance from passengers who only find out about their contribution to research after the fact – they see a small sign that gives them the option of opting out by turning off their Wi-Fi (and therefore disabling their own Wi-Fi connectivity for work/study), which isn’t really fair.


Who is conducting this trial?

This trial is a joint initiative between Metro, Public Transport Victoria (PTV) and Monash University.

What is this trial about?

Metro, PTV and Monash University are conducting research to gain real-time data on passenger numbers on platforms at Richmond Station and on board trains travelling from Richmond across the network.

This is about using technology to provide better information to improve the services we provide to customers.

What will the trial do?

Information will be collected on how people are using Richmond Station by counting the number of Wi-Fi enabled devices on the platforms and trains.

As a result of this collection of data, Metro will be able to further analyse how it can improve the customer experience by:

  • collecting data to better inform future network service planning
  • improving information available at stations and the allocation of customer service staff
  • identifying crowd movements on and around platforms
  • providing customers with a better overall service

When will the trial take place?

The trial will commence on the 17th of February 2017 and run until the 30th of June 2017.

Where will the trial be conducted?

The trial will focus only on passenger flow on platforms 7, 8, 9 and 10, plus the concourse, at Richmond Station and on board four trains.

All areas where this technology is active will be clearly marked to advise customers.

How does it work?

Wi-Fi routers will be installed on platforms 7, 8, 9 and 10, plus the concourse, at Richmond Station and on four trains. These devices will be able to count the number of active Wi-Fi devices in the vicinity of these platforms.

How can I opt out?

All you need to do is to turn off the Wi-Fi on your personal device and you will not be included in this trial.

Can you access my personal information from my electronic devices?

No. Personal information is never traced or tracked.

Your device’s unique identification number (MAC address) is put through two levels of encoding. This ensures your personal information cannot be traced or tracked.
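The advisory doesn’t say what the ‘two levels of encoding’ actually are, but a common approach for this kind of anonymised device counting is to hash the MAC address and then hash the result again with a secret salt, so distinct devices can still be counted without the original address being recoverable. A minimal sketch under that assumption (the function name and salt are hypothetical, not from the trial):

```python
import hashlib

def anonymise_mac(mac: str, salt: str = "trial-secret") -> str:
    """Two levels of encoding: hash the MAC, then hash the hash with a salt.

    The resulting token can be used to count distinct devices without
    retaining the original MAC address. Illustrative only; the actual
    scheme used in the trial is not documented.
    """
    first = hashlib.sha256(mac.encode()).hexdigest()
    second = hashlib.sha256((salt + first).encode()).hexdigest()
    return second

# Counting distinct devices from a stream of observed MAC addresses:
observed = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:01"]
unique_devices = {anonymise_mac(m) for m in observed}
print(len(unique_devices))  # 2
```

The same device always maps to the same token (so repeat sightings aren’t double-counted), while reversing a token back to a MAC would require knowing the salt and brute-forcing the address space.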

How will I know what trains I’m being tracked on?

All trains where this technology is active will be clearly marked with posters to advise customers.

Will I start being registered as soon as I step on one of the 4 trains or only when we approach Richmond Station?

The technology counts the number of active Wi-Fi devices on board the trains regardless of their location on the network. However, during analysis, only trains that travel through Richmond Station will be considered.

Who has access to this information?

The number of devices will be the only information collected. Your personal information is never traced or tracked and will remain completely anonymous.

The raw data is held only by Monash University and will be deleted 90 days after the finalisation of the trial. The statistical information will be used to gauge the accuracy of this technology and will be shared with Metro and PTV.

How can I get more information?

For more information you can contact the PTV call centre: 1800 800 007

Passenger advisory in context.


Detail of the exceptionally small notice advising passengers that they’re part of a trial.

EduGrowth pre-accelerator pitch night

And the winner is…

Seminar – Rethinking Online Learning: Melding the Best of Teaching, Television and Testing

Today I attended the Rethinking Online Learning: Melding the Best of Teaching, Television and Testing seminar presented by Professor Gosling (Department of Psychology, University of Texas at Austin, USA) as part of the Innovations in Teaching and Learning series of seminars presented by the University of Melbourne.

Professor Gosling’s seminar was based on the work he’s doing with a colleague from University of Texas in the way of rethinking online learning, particularly a synchronous broadcast delivered to a large number of students. In the description for the seminar, Professor Gosling described his work in the following way:

We teach a Synchronous Massive Online Course (or SMOC), broadcast live to about 2000 students. With daily quizzes and a television show format, we find that absentee rates are low, test performance high, study habits greatly improved, with large drops in achievement gaps between rich and poor students. The synchronous broadcast model offers a number of benefits including facilitating interactive elements and addressing concerns about cheating. Many challenges remain but our experiences (and data) suggest that large online classes taught using this format have great potential.

The seminar

In his seminar, Professor Gosling spoke about the design, development and delivery of a Synchronous Massive Online Course (or SMOC) for the Introduction to Psychology course at the University of Texas. The SMOC was a response to what he called the Big Old Class (BOC), where there was high student attrition and low achievement. Built on Canvas (the learning management system (LMS) by Instructure), Gosling and his colleague were able to broadcast their lecture (in a chat show format with segments such as daily news items, lab experiments and interviews with experts) from a studio at the university to a live student audience, while other students tuned in online. Within the Canvas LMS, students were also able to form mentor-based study groups (known as pods), complete surveys, access online textbooks and resources, and complete daily tests (known as benchmarking). Benchmarking featured questions individualised to the student and contained feedback with support that enabled the student to undertake self-regulated learning. Professor Gosling advocated daily benchmarking as a method of providing students with feedback and measurement of their performance, in contrast to more traditional mid-term examinations, where performance was often measured too late (which often made it more difficult for the student to do something about it).
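The seminar described benchmarking questions as individualised to each student, but not how they were selected. One plausible way to individualise a daily test is to weight the question bank towards topics the student has scored poorly on; a minimal sketch of that idea (all names, data shapes and the weighting rule are hypothetical, not from Gosling’s system):

```python
import random

def pick_benchmark_questions(bank, past_scores, n=3, rng=None):
    """Pick n questions for a student's daily benchmark test.

    Topics the student has scored poorly on are weighted more heavily,
    so each day's test is individualised. The 0.1 floor keeps mastered
    topics in occasional rotation. Illustrative only; the seminar did
    not detail the actual selection algorithm.
    """
    rng = rng or random.Random()
    # Pair each question with a weight: 1 - past score on its topic.
    pool = [(q, max(0.1, 1.0 - past_scores.get(q["topic"], 0.0))) for q in bank]
    chosen = []
    for _ in range(min(n, len(pool))):
        weights = [w for _, w in pool]
        i = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(i)[0])  # sample without replacement
    return chosen

# A student who has done well on 'memory' sees proportionally more
# questions from weaker or unseen topics.
bank = [
    {"id": 1, "topic": "memory"},
    {"id": 2, "topic": "perception"},
    {"id": 3, "topic": "learning"},
    {"id": 4, "topic": "memory"},
]
today = pick_benchmark_questions(bank, {"memory": 0.9}, n=2, rng=random.Random(42))
print([q["id"] for q in today])
```

Feedback attached to each question (the part Gosling emphasised for self-regulated learning) would sit alongside this selection step rather than inside it.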

The format and production of the SMOC was similar to live television.

Problems and issues

The only problem or issue with the television show format that Professor Gosling mentioned was the cost of production, particularly the team (analogous to a live television production crew) required to coordinate and sequence the broadcast of the show.

Although Professor Gosling didn’t mention cheating and collusion as a problem or issue for the course, it isn’t something that’s specific to this course format; it just becomes a little more complex when student behaviour is somewhat obfuscated by online delivery (Professor Gosling did go on to talk about his approach to managing cheating and collusion between his students).


Professor Gosling went on to tell us that the course was a success, with increased student retention rates and grades. He attributed the success to the television show format, the intensive benchmarking with feedback (which encouraged self-regulated learning), and the student mentoring and facilitated discussions (via the study-group pods). The course was also a success as far as gathering data about student behaviour (online) that could be used for further research and continued course enhancement. Although not mentioned by Professor Gosling, this data could also serve as a potential revenue stream. Based on the success of the course, Professor Gosling told us this model was being strongly considered for adoption by other faculties at his university.

Feedback based on results from benchmarking played an important role in encouraging students to undertake their own self-regulated study.

Managing cheating and collusion

Professor Gosling and his team managed cheating and collusion between students throughout the daily benchmarking by tasking someone with writing software that monitored and compared, in real time, the order in which each student completed the questions and the amount of time it took them to complete each question. The software then identified patterns of completion and was able to determine the likelihood of collusion between students during benchmarking. Professor Gosling and his team then decided whether to send the suspicious students an email warning them that their behaviour was being closely monitored.
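The core of that approach – comparing question order and timing across students and flagging pairs whose patterns look too alike – can be sketched in a few lines. This is a guess at the mechanism, not Gosling’s actual software; the similarity measure, the 5-second tolerance and the 0.8 threshold are all illustrative:

```python
from itertools import combinations

def similarity(a, b):
    """Compare two students' benchmarking logs.

    Each log is a list of (question_id, seconds_taken) in completion
    order. Returns the fraction of positions where both the question
    order and (approximately) the time taken match.
    """
    matches = sum(
        1 for (qa, ta), (qb, tb) in zip(a, b)
        if qa == qb and abs(ta - tb) <= 5  # same question, within 5 seconds
    )
    return matches / max(len(a), len(b))

def flag_collusion(logs, threshold=0.8):
    """Return pairs of student ids whose completion patterns look too alike."""
    return [
        (s1, s2)
        for (s1, a), (s2, b) in combinations(logs.items(), 2)
        if similarity(a, b) >= threshold
    ]

logs = {
    "student_1": [("q3", 40), ("q1", 22), ("q2", 35)],
    "student_2": [("q3", 42), ("q1", 24), ("q2", 33)],  # suspiciously similar
    "student_3": [("q1", 10), ("q2", 90), ("q3", 15)],
}
print(flag_collusion(logs))  # [('student_1', 'student_2')]
```

Flagged pairs would then go to a human for the judgment call Gosling described: whether to send the warning email.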


There’s no denying that the flexible and fun nature of the online television show style broadcast would have been a contributing factor to the increase in student performance, but I can’t help but think the mentored study groups and the rather rigorous and regimented daily benchmarking would also have been major contributing factors, particularly when the benchmarking provided feedback that helped students undertake their own self-regulated learning. Besides, a fair, reasonable and diverse assessment strategy would probably measure student performance, provide them with feedback and support their self-regulated learning anyway.

From what Professor Gosling told us, the SMOC has been a success, but I also can’t help but think the broadcast model is somewhat traditional and doesn’t consider constructivist and connectivist approaches to course design that incorporate the network (as a learning environment with peers) and the large number of tools available to enable students to become authors and contribute to course content.

There’s certainly a place for student-generated, curated, moderated and broadcast content (with the teacher and other students responding to that content), particularly with the premise of a television show format. Unfortunately, the broadcast method (without feedback or input from students) of guiding, monitoring and directing students could be considered a fairly regular and popular instructional strategy for those yearning to repetitively deliver learning at scale.

User flow for the completion of a safe work method statement (SWMS)

This sketch demonstrates the preliminary user flow for a web application/mobile experience that permits the completion and submission of a safe work method statement (SWMS) as part of a vocational training and assessment experience.

A SWMS is a site-specific form that must be completed before any high-risk construction work is commenced. Generally, the completion and submission of a SWMS is a paper-based process.

This web application/mobile experience seeks to take advantage of the affordances of mobile technology and allow users (students in a vocational training and assessment context) to complete the form prior to the commencement of work.

In a training and assessment context, the completion and submission of the SWMS is predicated on learning management system (LMS) connectivity and established user permissions.


Based on the preparatory user flow sketch, I then worked with developers and designers to extend the Mobas web application with the SWMS template.

Sequence of screens that make up the SWMS completion experience for the Mobas web application.

User flow for a practical task capture: Technical report

This sketch demonstrates the preliminary user flow for a web application/mobile experience for a practical task capture, more specifically the completion of a technical report or similar documentation as part of a vocational training and assessment experience.


Like the safe work method statement (SWMS) template, this web application/mobile experience seeks to take advantage of the affordances of mobile technology and allow users (students in a vocational training and assessment context) to complete the form in the workplace or training environment.

In a training and assessment context, the completion and submission of a practical task capture is predicated on learning management system (LMS) connectivity and established user permissions.

Based on the preparatory user flow sketch, I then worked with developers and designers to extend the Mobas web application with the practical task capture template.

Sequence of screens that make up the practical task capture experience for the Mobas web application.

Modelling a simple door/entry area to Dr Softain’s lab with Blender for an augment with Aurasma Studio

A work-in-progress render of the scene and 50-frame animated geometry for an augment of the door/entry area to Dr Softain’s laboratory. Thematically, this scene takes place around the same time as Dr Softain’s emergency broadcast.

The modelling is based on measurements and reference photos taken at the scene. The animated door opening was achieved by creating a simple bone system, skinning the mesh and then animating the bones. I did this because I thought animated rigid geometry wasn’t supported by Aurasma Studio, but it turns out it is. I may still continue to use bones to animate the opening of the door and other geometry that makes up the scene.

If all goes to plan, the final .dae export and augment with Aurasma Studio of the alternate animated door/entry area should replace the real door/entry area entirely.

Work to be completed

The completed scene will be made up of a partially visible collapsed Dr Softain, hanging lights and elements such as the strange equipment and tools you might expect to find in a laboratory. I’m also considering replacing the fridge and bin seen in the reference photos with animated versions. Each element will need to be low-poly and combined with other geometry into a single mesh to meet the 3D guidelines for Aurasma Studio. Further visual effects such as dirt, spilt chemicals and blood will be painted onto a 512 x 512 texture that is then applied to the mesh. The animation looks a bit stiff, so I’ll give that a bit of a tweak too!

Thinking out loud

Sketching out the door/entry scene and thinking about the limitations of designing and developing augments. There’s something about them that makes them merely passive observational pieces. They seem read-only. Web 1.0. Augments and the fictional layer should be read/write by those who interact with the space. That’s more web 2.0 – beyond. I guess that’s the challenge. Integrate them into/with something else where action is required and/or make the diorama read/write.

Dr Softain's surgery
An example of the type of elements that could be used in the scene.
New equipment for Dr Softain's laboratory.
An example of the type of elements that could be used in the scene.