Explorations in augmented reality presentation for the E-Learning Industry Association event

This afternoon I attended the E-Learning Industry Association PechaKucha event at the Centre for Adult Education (CAE) in Melbourne’s CBD and presented Explorations in augmented reality. The presentation described my experience of prototyping an augmented reality experience as part of my VET Development Centre Specialist Scholarship. The event also featured presentations from Aliki Kompogiorgas (Box Hill Institute) and Scott Wallace (Nine Lanterns). Aliki presented on the six niche MOOCs for VET currently being developed by the Institute, while Scott presented on interactive storytelling and the use of narrative in learning…

A solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

Using simultaneous localization and mapping (SLAM), that is, mapping a room or environment as a 3D point cloud with Metaio Toolbox, could be a solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale.

Augmented contextual instruction user experience (Image tracking)

Screen captures from the completed image-tracking augmented contextual instruction user experience. The augmented contextual instruction is made up of a sequence of junaio channels that can be browsed in the junaio AR browser. For this example, the channels were browsed using junaio on an iPad.

Augmented contextual instruction user experience (Object tracking)

Screen captures from the completed object-tracking augmented contextual instruction user experience. The augmented contextual instruction is made up of a sequence of junaio channels that can be browsed in the junaio AR browser. For this example, the channels were browsed using junaio on an iPad.

Save. Upload. Wait. Test. Tweak. Save again. Upload again. Wait again. Test again. Tweak again. Almost there!

Save. Upload. Wait. Test. Tweak. Save again. Upload again. Wait again. Test again. Tweak again. Almost there! And another thing: it’s been 11 days since I posted my issue with the Z translation of buttons in the UI Designer of Metaio Creator 2.6 to the Helpdesk of the Metaio Developer Portal, and there hasn’t been a single response! Damn.

Placing geometry in the 3D point cloud in Metaio Creator 2.6

Placing geometry in the 3D point cloud in Metaio Creator 2.6 is a little cumbersome, bewildering and often inaccurate. It takes lots of clicking and seemingly random fiddling to place geometry. You also can’t really determine if your geometry is placed correctly within the 3D point cloud until you upload to your channel and then view your channel in Junaio on your device. If it’s not placed correctly, you need to tweak it in Creator, publish it and then view it again in Junaio. Repeat process until it looks like…

Testing image tracking with UI elements for each step of the disassembly process

Screen captures from my image tracking tests in Junaio for each step of the disassembly process, with UI elements. Unfortunately, due to my Z translation problems in the UI Designer of Creator 2.6, I had to remove the background image that was placed underneath the buttons. Doing this gives the buttons a floaty and slightly random feel, but at least the buttons function as they should!

Lost in Metaio Creator’s Z translation

I’m currently wrestling with the correct Z translation of images and buttons in the UI Designer of Creator 2.6. I’ve placed some buttons on top of a background image in the UI Designer and have encountered a problem where the image obscures the buttons and prevents them from working. I’ve checked the Z translation of the image in relation to the buttons and have made sure the buttons are translated above the background image, but this doesn’t seem to make a difference when the published channel is viewed in Junaio…

Two channels published without the extras

After two days of waiting, my Step 4: Remove piston seal from caliper and Step 1: Remove bracket from caliper channels on Junaio have been made public. This is good news, but I think I went a little early on the publish because I failed to include extras such as buttons for the instructional video, learner resources and a link to the next step in the process. The geometry is also misplaced. I’m now trying to upload an update to the published channel, but it doesn’t seem to be working. You might have to unpublish the channel,…

Geometry for brake caliper augment: Step 4: Remove piston seal from caliper

Previsualising the pointed tool, piston seal and rear brake caliper geometry for the Step 4: Remove piston seal from caliper stage of the brake caliper augment. Geometry will be exported as FBX from Blender, prepared with FBXMeshConverter for import into Creator, then uploaded into my Metaio channel for final use as an augment. Figuring out the production workflow for each 3D model used in each step of the augmented contextual instruction. Important tip: remember to export your FBX from Blender to your Desktop and not anywhere else on your computer. For…

Geometry for brake caliper augment: Step 3: Remove piston from caliper

Previsualising the piston, boot and arrow geometry for the Step 3: Remove piston from caliper stage of the brake caliper augment. Geometry will be exported as FBX from Blender, prepared with FBXMeshConverter for import into Creator, then uploaded into my Metaio channel for final use as an augment. Figuring out the production workflow for each 3D model used in each step of the augmented contextual instruction.

Preparing low-poly geometry for augmented contextual instruction in the disassembly and assembly of a vehicle’s rear brake caliper.

Preparing low-poly geometry for augmented contextual instruction in the disassembly and assembly of a vehicle’s rear brake caliper. Augmented contextual instruction is to be prepared in Metaio Creator, published to a Metaio channel and then accessed by learners through the Junaio app. 3D point cloud data is gathered with Metaio Toolbox. This video conceptualises one stage of the disassembly of a brake caliper. [Edit: Tuesday 13 August 2013] This video incorrectly conceptualises one part of the brake caliper disassembly process. A screwdriver is not used to extract the piston, seal and rubber…

Prototyping augmented contextual instruction with Metaio Creator (Demo mode) and Toolbox

For this prototype, I’m using Metaio Creator’s image and object recognition features, together with Toolbox, in the restrictive Demo Mode to further explore some aspects of my concept of augmented contextual instruction. Unfortunately, in Demo Mode I can’t use the excellent 3D point cloud data captured in Toolbox to prototype all aspects of my augmented contextual instruction. In Demo Mode augments can only be triggered by a QR code, which is kinda okay for testing while you’re building. I’m thinking about buying a license. This video shows some of the features of the augmented…

Blender selection modes: The sync selection button

Once you’ve created seams in your mesh and then unwrapped it, you’ll then probably want to move contiguous groups of faces or UV islands into a location that will make it easier for you to create and then paint a texture. Before you can start moving your UV islands you’ll need to change your selection mode. Sync selection is an extremely useful selection mode that allows you to select separate UV islands and then rotate, scale and transform them in the UV/Image editor without affecting the corresponding elements in the…
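Under the hood, moving, rotating and scaling a UV island is just a 2D transform applied to that island’s UV coordinates. As a worked illustration (a standalone Python sketch, not Blender’s bpy API; the function name and sample island are hypothetical):

```python
import math

def transform_island(uvs, angle_deg=0.0, scale=1.0, offset=(0.0, 0.0)):
    """Rotate, scale and translate a UV island about its own centre.

    uvs: list of (u, v) pairs belonging to one island.
    Returns a new list of (u, v) pairs.
    """
    # Centre of the island, used as the pivot for rotation and scaling
    cu = sum(u for u, _ in uvs) / len(uvs)
    cv = sum(v for _, v in uvs) / len(uvs)
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for u, v in uvs:
        du, dv = (u - cu) * scale, (v - cv) * scale        # scale about centre
        ru = du * cos_a - dv * sin_a                       # rotate
        rv = du * sin_a + dv * cos_a
        out.append((cu + ru + offset[0], cv + rv + offset[1]))  # translate
    return out

# Move a half-unit square island into the top-right area of UV space
island = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
moved = transform_island(island, angle_deg=90.0, scale=0.5, offset=(0.5, 0.5))
```

In Blender itself the UV/Image editor performs the equivalent transform for you; the point of the sketch is only that island edits are independent 2D operations, which is why sync selection can isolate them from the 3D mesh.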

Kollum

Kollum is a concept for a collaborative location-based realtime audio experience that takes place in urban or suburban environments. Kollum is an attempt to conceptualise and capture the elements of location-based audio experiences that incorporate elevation or altitude through cumulative and persistent columns of sound. Users can use the audio recording features of their smartphone or mobile device to create a new audio block that makes up a column at their location, or add an audio block to an existing column nearby. This animation is an attempt to visually represent the assembly…

In the absence of Glass

In the absence of my own Google Glass, I’d like some type of wearable rig that can hold a mobile device such as a tablet or mobile phone. This rig would sit on the wearer’s shoulder or be strapped to their chest. It would allow the wearer to view any assistive augment displayed on the screen of the mobile device and also allow the wearer to use both hands to interact with the point of interest. The rig would need to be adjustable to suit the wearer and multiple…

Sew, grow and mow: An AR lawn art experience

Sew, grow and mow is a multi-user virtual lawn art experience played in real locations with real dimensions in real time. Sew, grow and mow is a continuation of an existing idea about persistently augmenting a space. The experience. Location: Users choose a location nearby or a location they can easily access to place their virtual lawn. Suitable locations can be determined by browsing satellite photographs such as Google Maps. They can then choose the design. Design: Users can choose from a library of lawn art design templates or create their own with…

Mental note: AR homework

Mental note. I need to do some homework. I need to determine how I can use the Metaio SDK and Unity as an alternative to Blender and Aurasma for developing AR experiences for my VET Development Centre Specialist Scholarship.
- AR in the browser with something like the JSARToolKit library with the WebRTC getUserMedia API
- Brekel Kinect Pro Face software
- Metaio SDK – Fundamentals of SDK and Unity
- Metaio SDK – Tutorial 1 – Hello, World!
- Metaio SDK – Setting up the Development Environment
- Metaio SDK – Getting started with the SDK…

ARcamp: Day 2 – Tuesday 21 May 2013

As part of my VET Development Centre Specialist Scholarship I attended ARcamp 2.0 at the Inspire Centre at the University of Canberra from Monday 20 May to Tuesday 21 May 2013. This blog post provides an overview of the presentations and activities that I participated in throughout the second day of camp. The work of Thomas Tucker While not specifically working in the field of AR, Thomas Tucker from Winston-Salem State University spoke about his work with high-end technology such as the FARO scanner and consumer-level technology such as the…

ARcamp: Day 1 – Monday 20 May 2013

As part of my VET Development Centre Specialist Scholarship I attended ARcamp 2.0 at the Inspire Centre at the University of Canberra from Monday 20 May to Tuesday 21 May 2013. This blog post provides an overview of the presentations and activities that I participated in throughout the first day of camp. Welcome to ARcamp Danny Munnerley from ARstudio welcomed us all to camp. He spoke about ARstudio as a two-year practical and research project that was nearing completion. He also spoke about the eventual release of a resource that…

ARcamp is coming

I’m pretty stoked to be attending ARcamp 2.0 at the Inspire Centre at the University of Canberra from Monday 20 May to Tuesday 21 May as part of my VET Development Centre Specialist Scholarship. Judging by the recent camp update, it looks like we are all going to be immersed in two days of hands-on augmented reality workshop goodness. They’ve also provided us with a link to the ARcamp schedule and the Augmented Reality in Education Wiki and some links to AR industry players such as BuildAR, Metaio, Junaio and Aurasma to…

ARIG: A prototype camera rig for recording contextual tablet and mobile phone screen activity

ARIG is a camera rig for recording activity on and around the screen of a tablet or mobile phone. The concept for ARIG came from my need to record my experiments with marker- and location-based augmented reality experiences. In this example, ARIG records a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.

A simple cube (Blender 2.62 and Aurasma Studio)

Blender 2.62 does a good job of exporting a 3D scene in the Collada (DAE) format for use as an overlay in Aurasma Studio. You just need to make sure you interpret the newest version of the Aurasma 3D Guidelines in a Blender 2.62 context. For a Blender 2.62 user, the most important guidelines to follow are:
- Models need triangulated faces (Edit mode > Ctrl+T)
- No more than four lamps (lights), although three are recommended
- Models are to have no more than one map/texture
- Create a .tar archive to upload…
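The final packaging step (creating the .tar archive to upload) can be scripted. A minimal sketch using Python’s tarfile module; the filenames and the helper name are hypothetical, and it assumes the archive should contain just the Collada file and its single texture:

```python
import tarfile
from pathlib import Path

def pack_overlay(dae_path, texture_path, out_path="overlay.tar"):
    """Bundle a Collada model and its one texture into a plain .tar for upload."""
    with tarfile.open(out_path, "w") as tar:  # "w" = uncompressed tar, not .tar.gz
        # Use arcname so the archive holds bare filenames, not full directory paths
        tar.add(dae_path, arcname=Path(dae_path).name)
        tar.add(texture_path, arcname=Path(texture_path).name)
    return out_path

# Hypothetical filenames for a simple cube exported from Blender 2.62:
# pack_overlay("cube.dae", "cube_texture.png")
```

Using arcname keeps the archive flat, which avoids the uploaded model referencing a texture path that only existed on your machine.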

The challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

As part of my VET Development Centre Specialist Scholarship I’m in the process of developing my practical skills in designing and building augmented reality learning experiences. One of the experiences I’m currently prototyping is a workplace hazard identification activity. This has brought about an interesting challenge. I’m currently grappling with the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale. A marker needs to be in view and recognisable at all times for the augment to work.…
