Touching Virtual Objects in AR

Introduction

Hi exiii community, my name is Kyohei. I’m a software engineer at exiii. Currently, the EXOS Gripper relies on the Vive Tracker for position tracking. While it gives us a great UX, it requires time and space to set up. Meanwhile, today’s AR technology for mobile platforms is advancing quickly, but the experiences are still confined to small mobile screens and are not very immersive. In this blog post, I will introduce how I made AR objects touchable with the EXOS Gripper (Gripper), focusing on two things: building an EXOS demo with a simpler setup, and exploring the possibilities of touchable AR experiences.

 

Position tracking based on Vuforia

For this project, I used Vuforia to track the Gripper’s position with a smartphone.

 

 

What is Vuforia?

Vuforia is a cross-platform library that recognizes pre-registered images. Since Unity supports Vuforia natively, it is very easy to get started with. As of Vuforia 7, plane detection and position tracking are supported, backed by ARKit on iOS and ARCore on Android. (Vuforia 6 did not support ARKit, so this is a great update!)

[PTC Announces a Major New Release to Vuforia Augmented Reality Platform]

 

 

Using a cube-type marker

Vuforia can recognize more than just flat 2D images: by pasting images onto the faces of a cuboid or a cylinder, you can use a 3D object as a marker. Since the Gripper is covered by a hand and constantly moving, tracking it directly as a 3D object probably would not work well. Instead, I decided to attach images to a cube-type marker.

[FYI: Example of tracking by a 3D object]

Types of markers you can register on Vuforia 
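
In Unity, the cube marker comes in as a Multi Target, and you can react to its tracking state with a handler script. Below is a minimal sketch following the pattern of Vuforia’s DefaultTrackableEventHandler; the gripperModel reference is a placeholder for the virtual Gripper model, not part of the SDK.

```csharp
using UnityEngine;
using Vuforia;

// A sketch of reacting to the cube marker's tracking state. The pattern
// follows Vuforia's DefaultTrackableEventHandler; "gripperModel" is a
// placeholder reference to the virtual Gripper model (an assumption,
// not part of the SDK).
public class GripperMarkerHandler : MonoBehaviour, ITrackableEventHandler
{
    [SerializeField] private GameObject gripperModel;

    private TrackableBehaviour trackableBehaviour;

    void Start()
    {
        trackableBehaviour = GetComponent<TrackableBehaviour>();
        if (trackableBehaviour != null)
            trackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        // Show the model while the cube is detected or tracked, hide it otherwise.
        bool tracked = newStatus == TrackableBehaviour.Status.DETECTED ||
                       newStatus == TrackableBehaviour.Status.TRACKED ||
                       newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
        gripperModel.SetActive(tracked);
    }
}
```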

 

Selection of a marker image

You can choose any image for the marker, but not all images work equally well. The marker image needs enough feature points for effective position tracking. With Vuforia’s Target Manager, you can evaluate an image and see how well it will support tracking. Complex images with sharp, high-contrast lines seem to generate more feature points.

The number of stars shows how well the marker image would work. The right image shows feature points in yellow.

 

Once you have decided on a marker image, register it as a cube target and generate the marker data, which you can export as a unitypackage. For the physical cube, I printed the images with a normal inkjet printer, which worked fine. You should use paper that does not reflect too much ambient light.

[FYI: Optimizing Target Detection and Tracking Stability]

 

Setting up a cube-type marker. You can specify the cube size, too.

 

 

Marker cube design

The biggest problem with marker-based tracking is that it works only while the marker is visible to the camera. I had to choose a position and size that would keep the marker as visible as possible while attached to a constantly moving and rotating hand.

We created the CAD data and 3D-printed the marker cube and jig as shown below.

 

Requirements

  • The marker is not hidden by the hand or arm and stays visible to the camera even when the device is rotated
  • The marker does not hide the target object while it is being grabbed
  • The marker remains trackable even when the arm is extended

 

Position

The visible face of the marker cube changes all the time. In most cases, though, the thumb does not face down when grabbing an object, so a marker on the thumb side is not easily hidden by the hand or arm. However, if placed right next to the hand, the marker hides the object being grabbed. For the best tracking performance, we extended the mount slightly to keep some distance between the marker and the Gripper.

The final position of the marker cube. The marker and hand are both visible to the camera, and they do not overlap.
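
In Unity, the easiest way to express this offset is to parent the virtual Gripper under the tracked Multi Target, but the idea can also be sketched explicitly as below. The offset values are placeholders, not our actual jig dimensions.

```csharp
using UnityEngine;

// A sketch of the offset idea: the virtual Gripper follows the tracked
// cube marker at a fixed offset, mirroring the physical jig that keeps
// the marker away from the hand. The offset values are placeholders,
// not our actual CAD dimensions.
public class MarkerOffset : MonoBehaviour
{
    [SerializeField] private Transform markerCube;  // the tracked Multi Target
    [SerializeField] private Vector3 localOffset = new Vector3(0f, -0.06f, 0.04f);

    void LateUpdate()
    {
        // Apply the jig's fixed translation in the marker's local frame.
        transform.position = markerCube.TransformPoint(localOffset);
        transform.rotation = markerCube.rotation;
    }
}
```

In practice, parenting the virtual Gripper under the Multi Target and setting its local position in the Inspector achieves the same thing with no code.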

 

Size

A big marker hides the hand, while a small marker lowers tracking accuracy. Recognition also depends heavily on camera performance: for example, a MacBook camera needs the marker brought very close, while an iPhone tracks it well from a distance.

I evaluated 30 mm, 40 mm, and 50 mm cubes with an iPhone 6s and concluded that 40 mm offered the best balance for this project.

 

Various sizes of cubes

 

 

Visualization of a hand and a device

In a VR environment, we usually use a virtual hand model to show the device’s position and open/close status. In an AR environment, however, both the physical hand and the Gripper are visible through the camera. When I simply overlaid a virtual hand, the positions did not match well, which felt unnatural.

Overlaying a virtual hand on a physical hand.

 

Since we already had the Gripper’s 3D data, I tried overlaying it on the device instead. By opening and closing the virtual model in sync with the device’s real-time status, and masking the part of the target object hidden behind it, it looked as if I was touching the virtual object with the physical hand-held device.
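
As a rough sketch, the overlay only needs to map the device’s real-time open/close value onto the joints of the 3D model. The GripAmount field here stands in for whatever the device actually reports (the real EXOS SDK call is not shown), and the joint and angle values are placeholders.

```csharp
using UnityEngine;

// A sketch of syncing the overlaid model with the physical device.
// GripAmount stands in for the real-time open/close value reported by
// the device (the actual EXOS SDK call is not shown); the joint and
// angle values are placeholders.
public class GripperOverlay : MonoBehaviour
{
    [Range(0f, 1f)] public float GripAmount;            // 0 = open, 1 = closed
    [SerializeField] private Transform fingerJoint;     // finger joint of the overlaid model
    [SerializeField] private float openAngle = 0f;      // degrees when fully open
    [SerializeField] private float closedAngle = 35f;   // degrees when fully closed

    void Update()
    {
        // Rotate the virtual finger to match the device's current state.
        float angle = Mathf.Lerp(openAngle, closedAngle, GripAmount);
        fingerJoint.localRotation = Quaternion.Euler(angle, 0f, 0f);
    }
}
```

The masking works the same way as the environment occlusion described later: geometry that writes depth but no color hides whatever part of the virtual object sits behind the physical device.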

 

 

A virtual object and a physical desk

Vuforia 7 has a plane recognition API (Vuforia Ground Plane), which runs on top of ARKit and ARCore. It should work on most iOS and Android devices, although there seem to be some limitations.

[FYI: Vuforia Fusion Supported Devices] 

 

Object Placement

Vuforia provides Ground Plane samples. In addition to simply placing an object on a plane surface, you can place it in mid-air, change its direction and size, and so on. I built the demo on top of this sample so that I could place an object on the detected surface by tapping the smartphone display.
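
Below is a minimal sketch of the tap-to-place flow, following the pattern of the Ground Plane sample: a tap is forwarded to Vuforia’s interactive hit test, and the result anchors the content on the detected surface. The component references are ones you set up yourself, and the event wiring mirrors the sample.

```csharp
using UnityEngine;
using Vuforia;

// A sketch of tap-to-place following the Ground Plane sample's pattern:
// a tap is forwarded to Vuforia's interactive hit test, and the result
// anchors the content on the detected surface. In the sample, the
// PlaneFinderBehaviour's hit-test event is wired to a callback like
// OnInteractiveHitTest in the Inspector.
public class TapToPlace : MonoBehaviour
{
    [SerializeField] private PlaneFinderBehaviour planeFinder;
    [SerializeField] private ContentPositioningBehaviour contentPositioning;

    void Update()
    {
        // Forward a tap (or click in the editor) to the hit test.
        if (Input.GetMouseButtonDown(0))
            planeFinder.PerformHitTest(Input.mousePosition);
    }

    // Wired to the PlaneFinderBehaviour's hit-test event.
    public void OnInteractiveHitTest(HitTestResult result)
    {
        if (result == null) return;
        // Create a plane anchor at the hit position and move the content there.
        contentPositioning.PositionContentAtPlaneAnchor(result);
    }
}
```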

 

Surface detection, not rectangle detection

Although Vuforia lets you get a coordinate on a surface, there is no API to detect the surface as a closed rectangle. ARKit does offer such an API, so Vuforia may have restricted access to it for compatibility with ARCore.

Placing a UI object on a detected surface.

 

 

Summary

In this blog post, I introduced how to touch virtual objects with the Gripper in an AR environment.

 

Vuforia is really useful

It does require some effort to prepare a marker, but the rest of the work in Unity was just placing an object based on the marker position. Almost no coding was required.

 

 

Mobile-based AR is easy to demo

I can demo this experience wherever I want, as long as I have my iPhone and the Gripper. Compared to a Vive demo, which requires Base Station setup and so on, it is a lot easier to prepare.

 

 

Lack of depth perception

Without binocular parallax, it is difficult to sense depth through mobile-based AR. For haptic interaction with objects, understanding the relative positions of the hand and the object is critical, and when I first tried the demo, the lack of depth perception significantly hurt the experience with the Gripper. However, adding occlusion to the environment helped a lot in understanding relative depth. Moreover, once I reached out my hand and started feeling haptic feedback from an object, it became easier to sense depth.
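
The occlusion mentioned above can be sketched as an invisible depth-mask plane aligned with the physical desk. This sketch assumes Unity’s built-in depth-only shader "VR/SpatialMapping/Occlusion" is available; a hand-written ColorMask-0 shader would work the same way.

```csharp
using UnityEngine;

// A sketch of environment occlusion: an invisible depth-mask plane
// aligned with the physical desk writes depth but no color, so virtual
// objects behind or below it are hidden just as the real surface would
// hide them. Assumes Unity's built-in depth-only shader
// "VR/SpatialMapping/Occlusion" is available.
public class DeskOccluder : MonoBehaviour
{
    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();
        var occlusionShader = Shader.Find("VR/SpatialMapping/Occlusion");
        meshRenderer.material = new Material(occlusionShader);
    }
}
```

Attach this to a plane placed on the detected ground; the virtual object then disappears exactly where the desk should hide it.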

 

 

Boundaries between physical and virtual worlds

After building this demo, I started feeling that haptics can make you believe virtual objects are actually “there” in front of you. For a moment, the boundary between the physical and virtual worlds seemed to disappear.

 

In the near future, AR devices with binocular parallax, such as LeapMotion’s Project North Star, will be used by more and more people, and a bigger market will drive improvements in spatial recognition technology for AR. When haptics is added on top, AR will feel much more natural and become closer to our daily lives.