-
A solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale
Using Simultaneous localization and mapping (SLAM) to map a room or environment (creating a 3D point cloud) with Metaio Toolbox could be a solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale.
-
Augmented contextual instruction user experience (Image tracking)
Screen captures from the completed image-tracking augmented contextual instruction user experience. The augmented contextual instruction is made up of a sequence of junaio channels that can be browsed in the junaio AR browser. For this example, the channels were browsed using junaio on an iPad.
-
(Unfortunately) I think someone may have heard correctly!
Someone in the Metaio Developer Portal said they’d heard that the UI Designer in Creator was made for iPhone 4 screens and that the UI designs it creates are not responsive to different screen sizes. I think that someone may have heard correctly. Damn!
-
Placing geometry in the 3D point cloud in Metaio Creator 2.6
Placing geometry in the 3D point cloud in Metaio Creator 2.6 is a little cumbersome, bewildering and often inaccurate. It takes a lot of clicking and seemingly random fiddling to place geometry. You also can’t really tell whether your geometry is placed correctly within the 3D point cloud until you upload it to your channel and then view the channel in Junaio on your device. If it’s not placed correctly, you need to tweak it in Creator, publish it and then view it again in Junaio. Repeat the process until it looks like it might almost be placed correctly. Curious.
-
Lost in Metaio Creator’s Z translation
I’m currently wrestling with the correct Z translation of images and buttons in the UI Designer of Creator 2.6. I’ve placed some buttons on top of a background image in the UI Designer of Creator and have encountered a problem where the image obscures the buttons and prevents them from working. I’ve checked the Z translation of the image in relation to the buttons and have made sure the buttons are translated above the background image, but this doesn’t seem to make a difference when the published channel is viewed in Junaio on my iPad. I’ve posted my issue with the Z translation of buttons in the UI Designer of Creator 2.6 to the…
-
Geometry for brake caliper augment: Step 4: Remove piston seal from caliper
Previsualising the pointed tool, piston seal and rear brake caliper geometry for the Step 4: Remove piston seal from caliper stage of the brake caliper augment. Geometry will be exported as FBX from Blender, prepared with FBXMeshConverter for import into Creator, and uploaded to my Metaio channel for final use as an augment. I’m still figuring out the production workflow for each 3D model used in each step of the augmented contextual instruction. Important tip: remember to export your FBX from Blender to your Desktop and not anywhere else on your computer. For some unknown or unexplained reason, FBX files exported anywhere other than the Desktop do not seem to work properly…
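If you prefer to script the export step, here’s a minimal Blender Python sketch of the idea; the file name step4_caliper.fbx is just a placeholder and it assumes the default FBX export settings are good enough:

```python
# Minimal Blender 2.6x sketch: export the current scene as FBX straight to the
# Desktop, which in my experience is where exported files behave reliably.
import os
import bpy

desktop = os.path.join(os.path.expanduser("~"), "Desktop")
out_path = os.path.join(desktop, "step4_caliper.fbx")  # placeholder file name

# Export every object in the scene (pass use_selection=True to export only
# the selected objects instead).
bpy.ops.export_scene.fbx(filepath=out_path)
print("Exported FBX to", out_path)
```

The exported .fbx then goes through FBXMeshConverter before being imported into Creator.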
-
Gathering 3D point cloud with Metaio Toolbox
Screen captures from 3D point cloud data gathering with Metaio Toolbox for sequences to be used in my augmented contextual instruction on the rear brake caliper. The 3D point cloud data will then be used as tracking technology to place my augments.
-
Supplementary material for augmented contextual instruction
These four instructional videos are supplementary material for the four augmented contextual instruction sequences I’m developing as part of my VET Development Centre Specialist Scholarship.
-
Preparing and testing captions for the augmented contextual help supplementary instructional videos
Captions can help to provide an equivalent learning experience for viewers who may be hearing impaired, speak other languages or use assistive technology. Captions are also valuable in a teaching and learning context where it may be impractical for learners to wear headphones or play video at high volume in a group training environment such as a workshop, classroom or laboratory. This screen capture shows the tools and process I used to prepare and test captions for my augmented contextual help supplementary instructional videos.
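As a rough illustration only (not the actual captioning tools shown in the screen capture), here’s a tiny Python sketch that writes a couple of placeholder cues out in the SubRip (.srt) format, which is one of the simplest caption formats to prepare and test against a video:

```python
# Write placeholder caption cues as a SubRip (.srt) file.
# Cue text and timings below are invented examples, not the real captions.
cues = [
    ("00:00:01,000", "00:00:04,000", "Remove the piston seal from the caliper."),
    ("00:00:04,500", "00:00:08,000", "Check the seal groove for corrosion."),
]

with open("step4_captions.srt", "w") as srt:
    for i, (start, end, text) in enumerate(cues, start=1):
        srt.write("{}\n{} --> {}\n{}\n\n".format(i, start, end, text))
```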
-
Prototyping augmented contextual instruction with Metaio Creator (Demo mode) and Toolbox
For this prototype, I’m using Metaio Creator’s image and object recognition features, together with Toolbox, in the restrictive Demo Mode to further explore some aspects of my concept of augmented contextual instruction. Unfortunately, in Demo Mode I can’t use the excellent 3D point cloud data captured in Toolbox to prototype all aspects of my augmented contextual instruction. In Demo Mode, augments can only be triggered by a QR code, which is kinda okay for testing while you’re building. I’m thinking about buying a licence. This video shows some of the features of the augmented contextual help I’m trying to prototype with Metaio Creator in Demo Mode.
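If I ever need to print extra QR codes that simply link to a channel URL for quick scanning in junaio, a few lines of Python with the third-party qrcode library (pip install qrcode[pil]) would do; the URL below is a made-up placeholder, not a real channel:

```python
# Generate a QR code image that links to a (placeholder) channel URL.
import qrcode

channel_url = "http://example.com/my-demo-channel"  # placeholder, not a real junaio channel
qrcode.make(channel_url).save("demo_channel_qr.png")
```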
-
Mental note: AR homework
Mental note. I need to do some homework. I need to determine how I can use the Metaio SDK and Unity as an alternative to Blender and Aurasma for developing AR experiences for my VET Development Centre Specialist Scholarship. On the list:
- AR in the browser with something like the JSARToolKit library and the WebRTC getUserMedia API
- Brekel Kinect Pro Face software
- Metaio SDK – Fundamentals of SDK and Unity
- Metaio SDK – Tutorial 1 – Hello, World!
- Metaio SDK – Setting up the Development Environment
- Metaio SDK – Getting started with the SDK
- Metaio – Getting started Unity3D
- Unity – Learn modules
-
ARIG: A prototype camera rig for recording contextual tablet and mobile phone screen activity
ARIG is a camera rig for recording activity on and around the screen of a tablet or mobile phone. The concept for ARIG came from my need to record my experiments with marker and location-based augmented reality experiences. In this example, ARIG records a simple 3D cube augment produced with Blender 2.62 and Aurasma Studio.
-
A simple cube (Blender 2.62 and Aurasma Studio)
Blender 2.62 does a good job of exporting a 3D scene in the Collada (DAE) format for use as an overlay in Aurasma Studio. You just need to make sure you interpret the newest version of the Aurasma 3D Guidelines in a Blender 2.62 context. For a Blender 2.62 user, the most important guidelines to follow are:
- Models need triangulated faces (Edit mode > Ctrl+T)
- No more than four lamps (lights), although three are recommended
- Models are to have no more than one map/texture
- Create a .tar archive to upload to Aurasma Studio made up of a .dae (exported from Blender 2.62), a .png (texture) and a .png thumbnail (256 x 256)…
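To make the checklist above concrete, here’s a rough Blender 2.62 Python sketch of the same steps: triangulate the active mesh, export the scene as Collada, then bundle the .dae with a texture and a 256 x 256 thumbnail into a .tar for Aurasma Studio. The file names (cube.dae, cube.png, thumb.png) are placeholders, and it assumes the texture and thumbnail already exist on the Desktop:

```python
import os
import tarfile
import bpy

out_dir = os.path.join(os.path.expanduser("~"), "Desktop")
dae_path = os.path.join(out_dir, "cube.dae")  # placeholder file name

# Triangulate the active mesh object's faces (the script version of
# Edit mode > Ctrl+T). Assumes the mesh you want to export is active.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()
bpy.ops.object.mode_set(mode='OBJECT')

# Export the scene as Collada (.dae).
bpy.ops.wm.collada_export(filepath=dae_path)

# Bundle the .dae, the texture and the thumbnail into a .tar for upload.
# Assumes cube.png (texture) and thumb.png (256 x 256 thumbnail) are already on the Desktop.
with tarfile.open(os.path.join(out_dir, "cube.tar"), "w") as tar:
    for name in ("cube.dae", "cube.png", "thumb.png"):
        tar.add(os.path.join(out_dir, name), arcname=name)
```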
-
The challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale
As part of my VET Development Centre Specialist Scholarship I’m in the process of developing my practical skills in designing and building augmented reality learning experiences. One of the experiences I’m currently prototyping is a workplace hazard identification activity, and it has brought about an interesting challenge: using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale. A marker needs to be in view and recognisable at all times for the augment to work. An augment containing a 3D object not modelled to scale can be easily triggered and engaged with by a marker…
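A quick back-of-the-envelope sketch shows why this is awkward. Assuming, purely for illustration, that the printed marker is 20 cm wide and the to-scale hazard object is 1.8 m tall (both numbers are placeholders, and Metaio’s actual unit conventions may differ):

```python
# Back-of-the-envelope: how big is a to-scale object relative to the marker
# that has to stay in view? All numbers are illustrative placeholders.
marker_width_m = 0.20    # printed marker on the floor, 20 cm wide
object_height_m = 1.80   # real-world height of the hazard object

print("Object height in marker-widths:", object_height_m / marker_width_m)  # 9.0
```

At roughly nine marker-widths tall, framing the whole augment on screen while keeping the small floor marker in view and recognisable is the crux of the problem.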
-
Your device does not support this type of Aura
Your device does not support this type of Aura.