In 2013 I was fortunate enough to be granted a VET Development Centre Specialist Scholarship. Specialist Scholarships are available to non-teaching staff who wish to develop their skills, capability and professional standing within the vocational education and training (VET) system in Australia. The Specialist Scholarship Program focuses on the professional development of non-teaching staff in the context of the high-level administrative and specialist tasks required of them by internal and external stakeholders.
As part of the deliverables for the Specialist Scholarship, I have prepared a report that provides a succinct summary of my experiences and output from the professional development activity. Download my Project Report – Specialist Scholars 2013 (PDF, 8MB) and my report from the Specialist Scholars Knowledge Sharing event (PDF, 6.7MB).
What does a Specialist Scholar need to do?
As a specialist scholar I needed to complete a formal component as well as a professional development program of my choosing. The formal component was made up of three compulsory events (Event 1 – Induction, Event 2 – Professional development and Event 3 – Knowledge sharing) that all recipients were required to attend. These events gave me the opportunity to gain valuable skills and knowledge I used to complete the scholarship as well as share what I’ve learned with others.
What did I do for my professional development program?
I decided to separate my professional development program into two complementary components: an education component, where I would learn about augmented reality (AR) as a technology, and an application component, where I would apply what I had learned and continue experimenting with the technology to create a simple AR artefact for use in teaching and learning contexts.
The concept of augmented contextual instruction
Augmented contextual instruction provides learners with context-specific procedural instruction based on recognised images and objects. More specifically, augmented contextual instruction could be used to guide pre-apprentices through the disassembly and assembly of the components that make up an electrical or mechanical element, such as a rear brake caliper from a motor vehicle.
Learners can use the Junaio app to search specific channels on the Metaio platform for available instruction and then activate the relevant instruction based on their image or object of interest. Once the image or object is recognised, the contextual instruction guides the learner through a procedure or process, such as the assembly or disassembly of a component. Access to underpinning knowledge, resources relevant to each stage of the procedure or process, and functionality to repeat or reverse the instruction is also provided.
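The step-through behaviour described above can be sketched in code. The following is a minimal, hypothetical model (not Metaio's actual API) of a procedure made of ordered steps, each with its own supporting resources, that a learner can advance through, repeat or reverse:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One stage of a procedure, with its instruction and supporting resources."""
    instruction: str
    resources: list = field(default_factory=list)  # links to underpinning knowledge

class ContextualInstruction:
    """Steps a learner through a recognised procedure, with repeat/reverse support."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # current stage of the procedure

    def current(self):
        # Repeating a step is simply re-displaying the current stage.
        return self.steps[self.index]

    def advance(self):
        # Move to the next stage, stopping at the final step.
        self.index = min(self.index + 1, len(self.steps) - 1)
        return self.current()

    def reverse(self):
        # Step back, e.g. to turn a disassembly back into an assembly.
        self.index = max(self.index - 1, 0)
        return self.current()

# Hypothetical steps for the rear brake caliper disassembly
procedure = ContextualInstruction([
    Step("Remove the caliper mounting bolts", ["video: mounting-bolts.mp4"]),
    Step("Lift the caliper off the rotor"),
    Step("Remove the brake pads from the bracket"),
])
print(procedure.advance().instruction)  # Lift the caliper off the rotor
```

In the actual AR experience, recognising the object (or poster image) would select the procedure, and each step would be overlaid on the learner's view rather than printed.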
Download the Augmented contextual instruction Design document in PDF (476 KB) or Microsoft Word (704 KB) format to learn more about functionality, delivery approaches, development requirements, limitations and constraints and future features.
Part 1: Education
Attending ARcamp 2.0 at the Inspire Centre at the University of Canberra from Monday 20 May to Tuesday 21 May was one activity I completed as part of my education component for the professional development program. The camp featured two days of AR-related presentations, activities and workshops. While the camp wasn't as developer-focused as I had hoped, the presentations and workshops over the two days provided me with an overview of AR and how it's being used in some educational and commercial contexts. ARcamp 2.0 also gave me the opportunity to hear Trak Lord (US Marketing and Communications) from Metaio mention the object tracking features of Metaio/Junaio – the very same object tracking features that would underpin the development of my augmented contextual instruction.
Self-directed investigation and experimentation with AR technology was an ongoing and essential activity throughout both the education and application components of the professional development program. Some of my investigation and experimentation with aspects of AR had begun prior to the Specialist Scholar program, and that earlier work helped to inform my ongoing investigation, experimentation and development activities throughout the program. It also informed my final selection of Metaio's Junaio as the most suitable (considering the limited scope of the project) proprietary AR platform and browser for my augmented contextual instruction.
Part 2: Application
The actual application, or creation of a functional AR experience, was an extremely important part of the Specialist Scholarship program for me. My goal was to create a tangible, working example of an AR experience that could be used by students and teachers, and that could be refined both technically and pedagogically in the future.
With the assistance of a colleague, I was able to work with a teacher to apply the concept of augmented contextual instruction to a simple procedure that is carried out in the automotive industry – the disassembly of a rear brake caliper.
Considering the limitations and constraints of the project, I chose the proprietary AR platform Metaio, using its authoring tool Creator to create my augments and its AR browser Junaio to deliver my augmented contextual instruction over a number of channels. Due to limited access to a range of mobile devices for testing, I also decided to develop the augmented contextual instruction for the iPad only. This decision wasn't without its own set of problems around authoring, publishing and browsing my content.
Another part of augmented contextual instruction is the supporting instructional material for the learner. This material includes a link to a website listing learning resources for rear brake calipers (to be populated by the teacher or trainer) and short instructional videos that demonstrate how the learner is to complete each step of the disassembly process. The instructional videos were recorded with help from teaching staff.
Accessing the augmented contextual instruction
The augmented contextual instruction can be accessed by browsing channels in the Junaio AR browser or from an instructional poster. The instructional poster provides the user with an entry point to the object-based or image-based 'How to disassemble a rear brake caliper' augmented contextual instruction. The learner just needs to answer Yes or No to the question 'Do you have a rear brake caliper?'. In trainer-led and self-directed learning contexts the instructional poster is probably the most expedient entry point, as it can be provided to learners as part of their existing course materials.
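The poster's Yes/No question is effectively a routing decision between the two variants of the experience. A minimal sketch of that decision, with hypothetical channel names standing in for the real Junaio channels:

```python
# Hypothetical channel labels; the actual Junaio channels were found by
# scanning the poster or searching within the Junaio browser.
CHANNELS = {
    "object": "How to disassemble a rear brake caliper (object tracking)",
    "image": "How to disassemble a rear brake caliper (image tracking)",
}

def select_channel(has_caliper: bool) -> str:
    """Route the learner from the poster's Yes/No question to a channel.

    Yes -> object tracking against the physical caliper in front of them;
    No  -> image tracking against the pictures printed on the poster.
    """
    return CHANNELS["object"] if has_caliper else CHANNELS["image"]
```

This mirrors the poster flow described above: learners with a physical caliper get the object tracking experience, while everyone else can still follow along via image tracking.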
The 'How to disassemble a rear brake caliper' instructional poster is available for download in PDF (322 KB) and free to use. It can be printed in colour or black and white, or displayed on another tablet or phone placed on a flat surface such as a table, bench or floor. You can then use your iPad to scan the instructional poster to access the object tracking or image tracking channels for the augmented contextual instruction.
Although the augmented contextual instruction is intended to be an object tracking experience, I also created an image tracking equivalent for those who do not necessarily have access to a rear brake caliper but would still like to learn about the process of disassembling a rear brake caliper.
The following functionality and features could be considered for development in future iterations of augmented contextual instruction:
- Depart from the use of proprietary augmented reality platforms and their pricing paywalls. Pursue development of device- and platform-agnostic augmented contextual instruction and additional augmented reality experiences for display in web browsers, built on WebSockets, Web Real-Time Communication (WebRTC) and the Web Graphics Library (WebGL).
- Explore the use of the Xbox 360 Kinect to create/scan 3D geometry and capture the 3D point cloud data required for image and object recognition, as an alternative to Metaio Toolbox.
- Explore the use of wearable technology such as Google Glass to provide hands-free contextual instruction without obstruction or interruption. For example, the learner would no longer need to put down their tools or object of interest to activate the next step or stage of the contextual instruction.
- Create surfaces that are live (read/write) rather than static like a diorama – surfaces that permit the creation, customisation and submission of peer- and trainer-reviewed, learner-generated contextual instruction to existing infrastructure.
- Integrate augmented contextual instruction with existing institute products.
- Personalise learner’s contextual instruction through analysis and aggregation of their academic submissions to institute infrastructure.
If you're interested in the minutiae of the development of the augmented contextual instruction or my Specialist Scholarship experience, you can contact me with your questions or work your way through the blog posts I published throughout the Specialist Scholarship program.