Tag Archives: problem solving

I’m having trouble with the surfaceTouchEvent

I’m having trouble with the surfaceTouchEvent in the Responding to touch interaction recipe from the Processing 2 Creative programming cookbook source code on GitHub.

I’ve worked through the Troubleshooting, Discussion and Known issues, Common problems, and Understanding changes to processing.core sections of the Android page of the Processing Wiki, but still no success.

Unsurprisingly, someone else posted a similar issue with Android multitouch on Processing 2.0.X on the Android Processing Forum. Unfortunately, their solution (import android.view.MotionEvent) doesn’t seem to resolve my issue.
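For reference, the pattern that forum fix belongs to looks something like this in a Processing 2 Android-mode sketch. This is a minimal sketch of the usual surfaceTouchEvent override, assuming Android mode and the import suggested in the thread; it is not the resolved recipe code.

```processing
// Minimal multitouch skeleton for Processing 2 Android mode.
// The import below is the fix suggested in the forum thread.
import android.view.MotionEvent;

float[] xs = new float[10];
float[] ys = new float[10];
int touches = 0;

void setup() {
  size(displayWidth, displayHeight);
}

void draw() {
  background(0);
  // Draw a circle under each active pointer recorded in surfaceTouchEvent().
  for (int i = 0; i < touches; i++) {
    ellipse(xs[i], ys[i], 60, 60);
  }
}

public boolean surfaceTouchEvent(MotionEvent event) {
  // Record the pointer positions, then hand the event back to Processing.
  touches = min(event.getPointerCount(), xs.length);
  for (int i = 0; i < touches; i++) {
    xs[i] = event.getX(i);
    ys[i] = event.getY(i);
  }
  // Returning the super call keeps the default single-touch
  // variables (mouseX, mouseY, mousePressed) working.
  return super.surfaceTouchEvent(event);
}
```

The key details are the capitalised MotionEvent in the import (Java class names are case-sensitive, so motionEvent will not compile) and the call to super.surfaceTouchEvent() at the end of the override.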

[Screenshots: Screen Shot 2014-05-16 at 6.25.08 PM, Screen Shot 2014-05-16 at 6.25.27 PM]

Time to sort this one out.

A solution to the challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

Using simultaneous localization and mapping (SLAM) to map a room or environment (creating a 3D point cloud) with Metaio Toolbox could be a solution to the challenge of using markers placed on the floor to trigger, and then engage with, an augment containing a 3D object modelled to scale.

Screen capture from Metaio Creator. Untextured 3D geometry placed within the point cloud of a mapped environment. Once published to a junaio channel, the 3D geometry will be placed over the environment when browsed in the junaio AR browser.

The challenge of using markers placed on the floor to trigger and then engage with an augment containing a 3D object modelled to scale

The 3D object used in this augment has been reduced in scale to enable the object to be viewed within the constraints of a marker-based augment.

As part of my VET Development Centre Specialist Scholarship I’m in the process of developing my practical skills in designing and building augmented reality learning experiences. One of the experiences I’m currently prototyping is a workplace hazard identification activity, and it has brought about an interesting challenge: using markers placed on the floor to trigger, and then engage with, an augment containing a 3D object modelled to scale.

A marker needs to be in view and recognisable at all times for the augment to work. An augment containing a 3D object that is not modelled to scale can easily be triggered and engaged with from a marker placed on the floor, as the marker will most likely remain in view and recognisable the whole time. An augment containing a 3D object modelled to scale can also easily be triggered by a floor marker, but the user then needs to move away from the marker to engage with the augment. As the user moves away, the marker no longer remains in view and recognisable, and the augment fails.

In this example, a simple augment of an industrial workplace scene is triggered by the marker. The industrial workplace scene has been reduced in size and is no longer suitable.

Possible solutions?
Increase the size of the marker, or place the marker on a wall, to ensure the marker remains in view and recognisable at all times. Increasing the size of the marker could be a solution, but then a specialist printer may be required instead of a standard domestic or office printer. Placing the marker on a wall could be a solution, but only if that placement were thematically relevant to the experience. A marker placed on a wall could also be used to trigger an augment on the floor. This could work too, but it would require strict placement to ensure the augment sits in an accurate position on the floor relative to the marker, rather than floating in the air or buried in the floor.
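A rough back-of-envelope calculation suggests why a bigger print quickly outgrows an office printer. Assuming, purely for illustration, that a marker needs to subtend at least about 5 degrees of the camera’s view to stay recognisable (the real threshold depends on the tracking library, camera resolution, and lighting), the required physical width grows linearly with viewing distance:

```java
// Rough estimate of the physical marker width needed to stay recognisable
// at a given viewing distance. The 5-degree minimum visual angle is an
// illustrative assumption, not a figure from any tracking SDK.
public class MarkerSize {
    static double requiredWidthMetres(double distanceMetres, double minAngleDegrees) {
        // A flat target of width w at distance d subtends an angle of
        // 2 * atan(w / (2d)); solve for w given the minimum angle.
        double halfAngle = Math.toRadians(minAngleDegrees) / 2.0;
        return 2.0 * distanceMetres * Math.tan(halfAngle);
    }

    public static void main(String[] args) {
        // Standing 3 m back from a floor marker:
        double width = requiredWidthMetres(3.0, 5.0);
        System.out.println("Marker width needed at 3 m: "
                + Math.round(width * 100) / 100.0 + " m");
        // That is wider than an A4 sheet (0.21 m), which is where a
        // specialist large-format printer would come in.
    }
}
```

Under that assumption, stepping back far enough to view a to-scale workplace scene would demand a marker well beyond A4 size, which is the scenario the wall placement or location trigger below tries to avoid.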

Another possible solution could be to trigger the augment containing the 3D object modelled to scale based on location. This could work if the designated location for the augment were outside, or if the location could be accurately determined indoors.