Tag Archives: ongoing activities

A solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

Using simultaneous localisation and mapping (SLAM) to map a room or environment with Metaio Toolbox (creating a 3D point cloud) could be a solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale.

point_cloud_room_mapping
Screen capture from Metaio Creator. Untextured 3D geometry placed within the point cloud of a mapped environment. Once published to a junaio channel, the 3D geometry will be placed over the environment when browsed in the junaio AR browser.

Augmented contextual instruction user experience (Image tracking)

Screen captures from the completed image-tracking augmented contextual instruction user experience.  The augmented contextual instruction is made up of a sequence of junaio channels that can be browsed in the junaio AR browser. For this example, the channels were browsed using junaio on an iPad.

Step 1: Remove bracket from caliper
The first step in disassembling a rear brake caliper is to remove the bracket from the caliper. Use a spanner to loosen the retaining bolts.
Step 2: Inspect and clean retaining bolts
The second step in disassembling a rear brake caliper is to inspect and clean the retaining bolts and remove the rubber seal from the bracket.
Step 3: Remove piston from brake caliper
Insert the air tool into the fluid inlet port of the caliper.
Step 4: Remove seal from brake caliper
Use a pointed tool to remove the piston seal from the caliper.
Instructional poster
The instructional poster provides the user with an entry point to the object-based or image-based ‘How to disassemble a rear brake caliper’ augmented contextual instruction.

(Unfortunately) I think someone may have heard correctly!

Someone in the Metaio Developer Portal said they’d heard that the UI Designer in Creator was made for iPhone 4 screens and that the UI designs it creates are not responsive to different screen sizes. I think that someone may have heard correctly. Damn!

2013-10-11 12.20.49
Channel viewed in Junaio AR browser (Portrait).
2013-10-11 12.22.39
Channel viewed in Junaio AR browser (Landscape).
2013-10-11 12.21.07
Channel viewed in Junaio AR browser (Portrait).
2013-10-11 12.22.04
Channel viewed in Junaio AR browser (Landscape).

Placing geometry in the 3D point cloud in Metaio Creator 2.6

Placing geometry in the 3D point cloud in Metaio Creator 2.6 is a little cumbersome, bewildering and often inaccurate. It takes lots of clicking and seemingly random fiddling to place geometry. You also can’t really tell whether your geometry is placed correctly within the 3D point cloud until you upload it to your channel and then view the channel in Junaio on your device. If it’s not placed correctly, you need to tweak it in Creator, publish it and then view it again in Junaio. Repeat the process until it looks like it might almost be placed correctly. Curious.

step_01_point_cloud
The first step in disassembling a rear brake caliper is to remove the bracket from the caliper. Use a spanner to loosen the retaining bolts.
step_02_point_cloud
The second step in disassembling a rear brake caliper is to inspect and clean the retaining bolts and remove the rubber seal from the bracket.
step_03_point_cloud
The third step in disassembling a rear brake caliper is to remove the piston from the caliper.
step_04_point_cloud
The fourth and final step in disassembling a rear brake caliper is to remove the piston seal from the caliper.

Lost in Metaio Creator’s Z translation

I’m currently wrestling with the correct Z translation of images and buttons in the UI Designer of Creator 2.6. I’ve placed some buttons on top of a background image in the UI Designer and have encountered a problem where the image obscures the buttons and prevents them from working. I’ve checked the Z translation of the image in relation to the buttons and made sure the buttons are translated above the background image, but this doesn’t seem to make a difference when the published channel is viewed in Junaio on my iPad.

I’ve posted my issue with Z translation of buttons in UI Designer of Creator 2.6 to the Metaio Helpdesk and look forward to hearing from someone in the community.

While attempting to solve this problem on my own in Creator, I experimented with placing the buttons partially over the background image and then publishing to my channel. This had an interesting result. The button displays and functions correctly!

Here are some screen captures of the properties windows for the background image and each button in the UI Designer of Creator and the published channel viewed in Junaio on my iPad.

ui_issue_button_background_image
The properties window for the background image. The image has a 0.0 Z translation.
ui_issue_button_over_image
The properties window for the button placed over the background image. The button has a 5.0 Z translation.
ui_issue_button_partial
The properties window for a button placed partially over the background image. The button has a 5.0 Z translation.
ui_issue_ipad
The placement of UI elements in the published channel viewed in Junaio on my iPad.

Geometry for brake caliper augment: Step 4: Remove piston seal from caliper

Previsualising the pointed tool, piston seal and rear brake caliper geometry for the Step 4: Remove piston seal from caliper stage of the brake caliper augment. The geometry will be exported as FBX from Blender, prepared with FBXMeshConverter, imported into Creator and then uploaded to my Metaio channel for final use as an augment. I’m figuring out the production workflow for each 3D model used in each step of the augmented contextual instruction.

Important tip: Remember to export your FBX from Blender to your Desktop and not anywhere else on your computer. For some unknown or unexplained reason, FBX files exported anywhere other than the Desktop do not seem to work properly when converted using FBXMeshConverter.
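For anyone who prefers to script the export, here’s a minimal sketch from Blender’s Python console rather than the exact steps used here. It assumes the standard FBX exporter in Blender 2.6x and writes straight to the Desktop, in line with the tip above; the file name step_04_caliper.fbx is just a placeholder.

    import os
    import bpy

    # Export the current scene as FBX directly to the Desktop, since FBX files
    # saved elsewhere seem to trip up FBXMeshConverter.
    desktop = os.path.join(os.path.expanduser("~"), "Desktop")
    bpy.ops.export_scene.fbx(filepath=os.path.join(desktop, "step_04_caliper.fbx"))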

Screen Shot 2013-09-08 at 11.06.57 AM

Screen Shot 2013-09-07 at 12.33.09 PM

Gathering 3D point cloud with Metaio Toolbox

Screen captures from 3D point cloud data gathering with Metaio Toolbox for sequences to be used in my augmented contextual instruction on the rear brake caliper. The 3D point cloud data will then be used as tracking technology to place my augments.

3d_point_cloud_caliper_complete
A rear brake caliper. This 3D point cloud will be used to place an augment for the first step of the rear brake caliper disassembly sequence.
3d_point_cloud_clean_bot_pins
The bracket from a rear brake caliper. This 3D point cloud data will be used to place an augment for the second step of the rear brake caliper disassembly sequence.
3d_point_cloud_remove_piston
An overturned rear brake caliper. This point cloud data will be used to place an augment for the third step of the rear brake caliper disassembly sequence.
3d_point_cloud_remove_seal
A rear brake caliper with the piston removed. This point cloud data will be used to place an augment for the fourth and final step of the rear brake caliper disassembly sequence.
using_itunes_to_gather_point_cloud
Metaio specify iTunes for the transfer of 3D point cloud data from your iPad to your computer.

Preparing and testing captions for the augmented contextual help supplementary instructional videos

Captions can help to provide an equivalent learning experience for viewers who may be hearing impaired, speak other languages or use assistive technology. Captions are also valuable in a teaching and learning context where it may be impractical for learners to wear headphones or play video at high volume in a group training environment such as a workshop, classroom or laboratory.

This screen capture shows the tools and process I used to prepare and test captions for my augmented contextual help supplementary instructional videos.

Soundbooth, TextEdit and Movist are the tools I used to create captions.
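For reference, captions themselves are just timed plain text. Assuming they are kept as SubRip (.srt) files, which can be written in TextEdit and loaded alongside a video in Movist, a minimal (purely illustrative) example looks like this:

    1
    00:00:01,000 --> 00:00:04,500
    Use a spanner to loosen the caliper bracket retaining bolts.

    2
    00:00:05,000 --> 00:00:08,500
    Inspect and clean the retaining bolts and remove the rubber seal from the bracket.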

Prototyping augmented contextual instruction with Metaio Creator (Demo mode) and Toolbox

2013-07-31 18.32.49
The obtuse futuristic device that can only be serviced by a fearless technician with a little help from some augmented contextual instruction.

For this prototype, I’m using Metaio Creator and the image and object recognition features of Toolbox in the restrictive Demo Mode to further explore some aspects of my concept of augmented contextual instruction. Unfortunately, in Demo Mode I can’t use the excellent 3D point cloud data captured in Toolbox to prototype all aspects of my augmented contextual instruction. In Demo Mode, augments can only be triggered by a QR code, which is kinda okay for testing while you’re building. I’m thinking about buying a license.

This video shows some of the features of the augmented contextual help I’m trying to prototype with Metaio Creator in Demo Mode.

Mental note: AR homework

mental_note
Post-it notes help to record mental notes.

Mental note. I need to do some homework. I need to determine how I can use the Metaio SDK and Unity as an alternative to Blender and Aurasma for developing AR experiences for my VET Development Centre Specialist Scholarship.

ARIG: A prototype camera rig for recording contextual tablet and mobile phone screen activity

ARIG is a camera rig for recording activity on and around the screen of a tablet or mobile phone. The concept for ARIG came from my need to record my experiments with marker and location-based augmented reality experiences.

In this example, ARIG records a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.

ARIG (Tablet mount)
Unpainted ARIG with tablet mount.
ARIG (Phone mount)
Unpainted ARIG with mobile phone mount.
The complete ARIG kit with iPad, mobile phone, DSLR and light sock components.
The complete ARIG kit with light sock components and iPad, mobile phone and DSLR required to record the experience.

A simple cube (Blender 2.62 and Aurasma Studio)

simple_cube
A simple 3D cube augment created with Blender and Aurasma Studio

Blender 2.62 does a good job of exporting a 3D scene in the Collada (DAE) format for use as an overlay in Aurasma Studio. You just need to make sure you interpret the newest version of the Aurasma 3D Guidelines in a Blender 2.62 context. For a Blender 2.62 user, the most important guidelines to follow are (a small scripting sketch follows the list):

  • Models need triangulated faces (Edit mode > Ctrl+T)
  • No more than four lamps (lights) although three are recommended
  • Models are to have no more than one map/texture
  • Create a .tar archive to upload to Aurasma Studio, made up of a .dae file (exported from Blender 2.62), a .png texture and a .png thumbnail (256 x 256).
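To make those steps repeatable, the triangulation, export and archiving can be sketched from Blender’s Python console together with Python’s standard tarfile module. This is only an illustration of the guidelines above, not a tested pipeline; the file names cube.dae, cube.png and thumbnail.png are placeholders.

    import os
    import tarfile
    import bpy

    # 1. Triangulate the active mesh (the scripted equivalent of Ctrl+T in Edit mode).
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.quads_convert_to_tris()
    bpy.ops.object.mode_set(mode='OBJECT')

    # 2. Export the scene as Collada (DAE).
    out_dir = os.path.join(os.path.expanduser("~"), "Desktop")
    bpy.ops.wm.collada_export(filepath=os.path.join(out_dir, "cube.dae"))

    # 3. Bundle the .dae, its single texture and a 256 x 256 thumbnail into a
    #    .tar archive ready to upload to Aurasma Studio.
    archive = tarfile.open(os.path.join(out_dir, "cube.tar"), "w")
    for name in ("cube.dae", "cube.png", "thumbnail.png"):
        archive.add(os.path.join(out_dir, name), arcname=name)
    archive.close()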

This video is an example of a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.

The challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

prototype_03
The 3D object used in this augment has been reduced in scale to enable the object to be viewed within the constraints of a marker-based augment.

As part of my VET Development Centre Specialist Scholarship I’m in the process of developing my practical skills in designing and building augmented reality learning experiences. One of the experiences I’m currently prototyping is a workplace hazard identification activity. This has brought about an interesting challenge. I’m currently grappling with the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale.

A marker needs to be in view and recognisable at all times for the augment to work. An augment containing a 3D object that is not modelled to scale can easily be triggered and engaged with by a marker placed on the floor, as the marker will most likely remain in view and recognisable at all times. An augment containing a 3D object modelled to scale can also be easily triggered by a marker placed on the floor, but the user then needs to move away from the marker to engage with the augment. As the user moves away, the marker no longer remains in view and recognisable, and the augment fails.
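A rough back-of-the-envelope check illustrates the problem. The field of view, image width and minimum trackable marker size below are assumed values (not figures from Aurasma or junaio), used only to show how quickly a printed floor marker shrinks on screen as the user steps back:

    import math

    def marker_width_px(marker_m, distance_m, fov_deg=60.0, image_px=1280):
        # Approximate on-screen width of a marker viewed head-on.
        view_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
        return image_px * marker_m / view_width_m

    # Assume a 0.2 m wide printed marker and, hypothetically, ~50 px needed
    # for reliable tracking.
    for d in (0.5, 1.0, 2.0, 4.0):
        print(d, "m:", round(marker_width_px(0.2, d)), "px")

At around four metres the assumed 0.2 m marker is already close to the assumed tracking threshold, and in practice a floor marker is also seen at an increasingly oblique angle as the user steps back, so it drops out of view or becomes unrecognisable even sooner.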

In this example, a simple augment of an industrial workplace scene is triggered by the marker. The industrial workplace scene has been reduced in size so it can be viewed within the constraints of the marker, which means it is no longer to scale and no longer suitable.

Possible solutions?
Increase the size of the marker, or place the marker on a wall, to ensure the marker remains in view and recognisable at all times. Increasing the size of the marker could be a solution, but a specialist printer may then be required instead of a standard domestic or office printer. Placing the marker on the wall could be a solution, but only if the experience was thematically relevant. A marker placed on a wall could also be used to trigger an augment on the floor. This could also work, but would require strict placement to ensure the augment sits in an accurate position on the floor relative to the marker, not floating in the air or buried in the floor.

Another possible solution could be to trigger the augment containing the 3D object modelled to scale based on location. This could work if the designated location for the augment was outdoors, or if the location could be accurately determined indoors.