ARIG is a camera rig for recording activity on and around the screen of a tablet or mobile phone. The concept for ARIG came from my need to record my experiments with marker-based and location-based augmented reality experiences.
In this example, ARIG records a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.
The original concept sketch for ARIG.
Concept sketch for the Light sock component of ARIG. The Light sock prevents reflections on the iPad surface when recording augments outside.
Pattern block for the Light sock component of ARIG.
Unpainted ARIG with tablet mount.
Unpainted ARIG with mobile phone mount.
The complete ARIG kit with light sock components and the iPad, mobile phone and DSLR required to record the experience.
A simple 3D cube augment created with Blender and Aurasma Studio
Blender 2.62 does a good job of exporting a 3D scene in the Collada (DAE) format for use as an overlay in Aurasma Studio. You just need to interpret the newest version of the Aurasma 3D Guidelines in a Blender 2.62 context. For a Blender 2.62 user, the most important guidelines to follow are:
Models need triangulated faces (Edit mode > Ctrl+T)
No more than four lamps (lights), although three are recommended
Models should have no more than one map/texture
Create a .tar archive to upload to Aurasma Studio, made up of the .dae (exported from Blender 2.62), a .png texture and a .png thumbnail (256 x 256).
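The packaging step in the last guideline is easy to script. Here is a minimal Python sketch, using only the standard-library tarfile module, that bundles the three files into a flat .tar archive; the file names are placeholders for your own exports, not names from the Aurasma documentation:

```python
import os
import tarfile

def package_aura(dae_path, texture_path, thumb_path, out_path="augment.tar"):
    """Bundle the exported .dae model, its .png texture and a 256x256
    .png thumbnail into a flat .tar archive for upload to Aurasma Studio."""
    with tarfile.open(out_path, "w") as tar:
        for path in (dae_path, texture_path, thumb_path):
            # arcname drops any directory prefix so the archive stays flat
            tar.add(path, arcname=os.path.basename(path))
    return out_path
```

Run it after exporting from Blender, e.g. `package_aura("cube.dae", "texture.png", "thumb.png")`, and upload the resulting augment.tar.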
This video is an example of a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.
“Your device does not support this type of Aura.” Testing a marker-based 3D augment with my mobile phone. A pity my phone doesn’t have the resources to manage a 3D augment; if it did, it would’ve appeared in Aurasma’s list of supported devices.
One day everything changed. Something went wrong in Dr Softain’s laboratory. An experimental medical procedure went awry. An outbreak. Amidst all the chaos and confusion Dr Softain recorded his final broadcast. An emergency broadcast warning us of an unknown peril.
Dr Softain’s emergency broadcast
Outside Dr Softain’s laboratory
All that remained was a pool of blood and Dr Softain’s laboratory pass.
User engagement
I recorded and edited Dr Softain’s emergency video on my iPad, then used Aurasma to create the video augment. Early one morning I set up the installation (the pool of blood, Softain’s lab pass, and an iPad running Aurasma with instructions) outside a fire door in the kitchen of my workplace, then waited for colleagues to discover it as they came in for their morning coffee.
Reflection
Thinking about it, this wasn’t really an augment. It didn’t augment reality with a fictional layer; the layer was simply triggered, much like a QR code. The metaphorical container for displaying the augment was also wrong. Lame. A better augment would have been to animate the pool of blood or to design an intercom that displays the video. The scenario was also goofy, but this kind of scenario (minus the zombies) could be used as an element of larger campus-wide activities such as student induction or an OHS/workplace hazard identification audit.
Putting it together
I used my iPad to shoot the video and then edited the video and overdubbed my own sound effects with Pinnacle Studio 2.0.
I’m interested in exploring the use of augmented reality (AR) in learning experiences. I’ve decided to prototype my early, simple AR experiments with Processing and Blender. These early explorations will make use of augments placed with fiducial markers. My goal is then to explore developing AR learning experiences with Layar that can be viewed through iOS and Android mobile devices.
I’d then like to explore placing augments without using fiducial markers; these augments could be determined by location. One step at a time.
I used Processing, Blender and NyARToolkit to create this very simple zombie wound augment. It needs a bit more work: the augment is displaying bounding-box information, and the low-poly modelling is not as smooth as it should be. With improved modelling and texturing, the augment could be made to look a little more integrated with my body. That will come with the next iteration.
My exploration of how we learn and how we design and develop digital stuff that helps us learn.