Using simultaneous localization and mapping (SLAM) to map a room or environment as a 3D point cloud with Metaio Toolbox could solve the challenge of using markers placed on the floor to trigger, and then engage with, an augment containing a 3D object modelled to scale.
Someone in the Metaio Developer Portal said they’d heard the UI Designer in Creator was designed for iPhone 4 screens, and that the resulting UI designs are not responsive to different screen sizes. I suspect they heard correctly. Damn!
Placing geometry in the 3D point cloud in Metaio Creator 2.6 is a little cumbersome, bewildering and often inaccurate. It takes lots of clicking and seemingly random fiddling to place geometry. You also can’t really determine whether your geometry is placed correctly within the 3D point cloud until you upload it to your channel and then view the channel in Junaio on your device. If it’s not placed correctly, you need to tweak it in Creator, publish it and then view it again in Junaio. Repeat the process until it looks like it might almost be placed correctly. Curious.
I’m currently wrestling with the correct Z translation of images and buttons in the UI Designer of Creator 2.6. I’ve placed some buttons on top of a background image in the UI Designer and have encountered a problem where the image obscures the buttons and prevents them from working. I’ve checked the Z translation of the image in relation to the buttons and have made sure the buttons are translated above the background image, but this doesn’t seem to make a difference when the published channel is viewed in Junaio on my iPad.
I’ve posted my issue with the Z translation of buttons in the UI Designer of Creator 2.6 to the Metaio Helpdesk and look forward to hearing from someone in the community.
While attempting to solve this problem on my own in Creator, I experimented with placing the buttons partially over the background image and then publishing to my channel. This had an interesting result: the buttons display and function correctly!
Here are some screen captures of the properties windows for the background image and each button in the UI Designer of Creator and the published channel viewed in Junaio on my iPad.
Previsualising the pointed tool, piston seal and rear brake caliper geometry for the Step 4: Remove piston seal from caliper stage of the brake caliper augment. Geometry will be exported as FBX from Blender, prepared with FBXMeshConverter for import into Creator, then uploaded to my Metaio channel for final use as an augment. Figuring out the production workflow for each 3D model used in each step of the augmented contextual instruction.
Important tip: Remember to export your FBX from Blender to your Desktop and not anywhere else on your computer. For some unknown or unexplained reason, FBX files exported anywhere other than the Desktop do not seem to work properly when converted with FBXMeshConverter.
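To make that tip harder to forget, here’s a minimal sketch of how I could route every export through the Desktop from Blender’s scripting workspace. The path helper is plain standard-library Python; the commented lines show Blender’s real `bpy.ops.export_scene.fbx` operator, but the `caliper.fbx` filename and the `use_selection=True` option are just illustrative assumptions, not my actual export settings.

```python
import os

def desktop_path(filename):
    """Return an absolute path on the current user's Desktop.

    FBX files exported anywhere other than the Desktop did not seem
    to convert properly in FBXMeshConverter, so every export is
    routed through this helper.
    """
    return os.path.join(os.path.expanduser("~"), "Desktop", filename)

# Run inside Blender's scripting workspace, where the bpy module is available:
# import bpy
# bpy.ops.export_scene.fbx(
#     filepath=desktop_path("caliper.fbx"),  # hypothetical filename
#     use_selection=True,                    # export only the selected geometry
# )
```

Running the export from a script like this also means the workflow for each 3D model in each step stays repeatable instead of relying on remembering the right save location in the export dialog.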
Screen captures from 3D point cloud data gathering with Metaio Toolbox for sequences to be used in my augmented contextual instruction on the rear brake caliper. The 3D point cloud data will then be used as tracking technology to place my augments.
Captions can help to provide an equivalent learning experience for viewers who may be hearing impaired, speak other languages or use assistive technology. Captions are also valuable in a teaching and learning context where it may be impractical for learners to wear headphones or play video at high volume in a group training environment such as a workshop, classroom or laboratory.
This screen capture shows the tools and process I used to prepare and test captions for my augmented contextual help supplementary instructional videos.
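For a sense of what those captions look like on disk, here’s a minimal sketch that writes a two-cue SubRip (.srt) file, a widely supported caption format. The timings and caption text below are made-up placeholders, not the actual captions from my videos.

```python
def srt_cue(index, start, end, text):
    # One SubRip cue: sequence number, "HH:MM:SS,mmm --> HH:MM:SS,mmm"
    # timing line, caption text, then a blank line separating cues.
    return f"{index}\n{start} --> {end}\n{text}\n\n"

# Placeholder cues for an instructional video (timings are illustrative):
cues = (
    srt_cue(1, "00:00:01,000", "00:00:04,000",
            "Remove the piston seal from the caliper.")
    + srt_cue(2, "00:00:04,500", "00:00:08,000",
              "Use the pointed tool to lift the seal gently.")
)

with open("captions.srt", "w", encoding="utf-8") as f:
    f.write(cues)
```

Because the format is plain text, captions prepared this way can be checked quickly in any player that supports SubRip before being attached to the supplementary instructional videos.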
For this prototype, I’m using Metaio Creator and the image and object recognition features of Toolbox in restrictive Demo Mode to further explore some aspects of my concept of augmented contextual instruction. Unfortunately, in Demo Mode I can’t use the excellent 3D point cloud data captured in Toolbox to prototype all aspects of my augmented contextual instruction. In Demo Mode, augments can only be triggered by a QR code, which is kinda okay for testing while you’re building. I’m thinking about buying a license.
This video shows some of the features of the augmented contextual help I’m trying to prototype with Metaio Creator in Demo Mode.