
Augmented / Virtual Reality: ARToolKit First Use

Open source A/VR software development kit

In our previous post we mentioned the ARToolKit and Google Cardboard toolkits for A/VR exploration. After a little bit of back and forth, we decided to dive in and play around with ARToolKit first.

[Screenshot: ARToolKit website]

The basic pieces of the ARToolKit are:

  • ReadMe file, ChangeLog
  • Source code, libraries, and binaries
  • Project file
  • Examples
  • Documentation

To understand how A/VR apps built using the toolkit really work, it is important to muck around in the source to see how things are laid out, how the data moves through the code, and who is doing what. Fortunately for us, Apple’s free Xcode integrated development environment (IDE) is an excellent tool for doing exactly that.

The first thing we like to do with any new SDK is to build the examples. We do that to make sure the IDE is working properly, that all of the required libraries are present, and that the compilation parameters are appropriately configured.

[Screenshot: ARToolKit example apps]

The documentation covering getting started, the various apps, and the features of the toolkit is quite thorough. We were able to get things set up, and all of the examples compiled without any major problems (ARToolKit v5.3.1, iOS, Xcode 7.2.1).

NOTE: There will likely be some Apple-Developer-related maintenance/cleanup in the project (provisioning profile, bundle ID, etc.) before a build will go to completion. We had to chase down a few of these before we were able to get a clean build.

[Screenshot: ARMovie project in Xcode]

After spending a few days playing around with the examples in the open source ARToolKit, we came out of the SDK with more understanding than when we started and got it to do a few of the things we wanted. Below are some of the things we did while playing around with the sample code.

1. Single Tile, single video

In the ARMovie example, the sample code app was designed to analyze a live scene using the iPhone’s camera, detect a particular tile pattern—in our case, the word “Hiro” inside a square with a thick black border—then put up a video clip positioned relative to the location of the tile.
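For context, registering a square-pattern marker like “Hiro” through ARToolKit5’s native C API looks roughly like the sketch below. The pattern file path and error handling are our assumptions; the iOS example actually does this from its Objective-C code.

```c
#include <AR/ar.h>

/* Sketch: register the "Hiro" pattern with an existing ARHandle.
 * The .patt path is an assumption; the real example ships its own data files. */
static int loadHiroPattern(ARHandle *arHandle)
{
    ARPattHandle *pattHandle = arPattCreateHandle();
    if (pattHandle == NULL) return -1;

    /* A .patt file encodes the image that sits inside the thick black border. */
    int hiroID = arPattLoad(pattHandle, "Data/patt.hiro");
    if (hiroID < 0) return -1;

    /* Tell the detector about the pattern(s) we just loaded. */
    arPattAttach(arHandle, pattHandle);

    return hiroID;   /* used later to pick "Hiro" out of the detections */
}
```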


We poked around, saw how the video clip was being called in the code, and created the necessary steps so that it would put up a clip of our choosing. While it was a pretty simple exercise, it gave us a much better understanding of the principal pieces in ARToolKit, which ones were doing “the heavy lifting,” and how this toolkit’s AR mechanism worked.
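The frame-by-frame mechanism doing that heavy lifting looks, in outline, like the sketch below. We are paraphrasing ARToolKit5’s native C API from memory (the iOS example wraps it in Objective-C and uses the OpenGL ES variants), so exact signatures may differ by version; the marker width and drawMovieQuad() are stand-ins of our own for the example’s movie-texture drawing code.

```c
#include <AR/ar.h>
#include <AR/gsub_lite.h>   /* arglCameraViewRH() */

#define HIRO_WIDTH 80.0     /* printed tile width in millimetres (assumption) */

/* Hypothetical hook into the example's movie playback: draws the current
 * video frame on a quad using the given OpenGL modelview matrix. */
extern void drawMovieQuad(const ARdouble modelview[16]);

/* Called once per camera frame; frame is the raw image from the video module. */
void processFrame(ARHandle *arHandle, AR3DHandle *ar3DHandle,
                  ARUint8 *frame, int hiroID)
{
    if (arDetectMarker(arHandle, frame) < 0) return;

    int n = arGetMarkerNum(arHandle);
    ARMarkerInfo *markers = arGetMarker(arHandle);

    for (int i = 0; i < n; i++) {
        if (markers[i].id != hiroID) continue;   /* only react to "Hiro" */

        /* Estimate the tile's pose (rotation + translation) relative to the
         * camera, then convert it to an OpenGL modelview matrix so the
         * video quad sits on top of the tile. */
        ARdouble pose[3][4], modelview[16];
        arGetTransMatSquare(ar3DHandle, &markers[i], HIRO_WIDTH, pose);
        arglCameraViewRH(pose, modelview, 1.0);

        drawMovieQuad(modelview);   /* the "augmentation" step */
        break;
    }
}
```

Because the pose is re-estimated on every frame, the overlay follows the tile as the paper or the phone moves.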


Below is a video we took of the AR sample app running where you can see it detecting the pattern and “augmenting” the scene with a video. Note that as the paper or the phone’s orientation/position changes, so does that of the AR video relative to the pattern. Cool!

2. Multiple Tiles, single video

The clip below is an experiment we did with multiple patterns, each fitting the trigger criteria for scene augmentation. Since the code limits the augmentation to one video, the augmentation jumps to whichever pattern “best fits” the trigger as the iPhone is moved.

The above experiment was to better understand the lower limit needed for “uniqueness” of a triggering pattern. Higher specificity = longer recognition time; lower specificity = more false triggers, especially as the scene or camera position changes.
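That “jump to the best fit” behaviour comes from the per-marker confidence value the detector reports. A minimal sketch of the selection, assuming ARToolKit5’s ARMarkerInfo fields (id and cf) and an acceptance threshold of our own choosing:

```c
#include <AR/ar.h>

/* Sketch: when several detected squares match the registered pattern,
 * keep only the one the detector is most confident about (cf is a
 * confidence score in [0, 1]). Returns an index into markers, or -1. */
int pickBestMarker(const ARMarkerInfo *markers, int n, int patternID)
{
    int best = -1;
    ARdouble bestCf = 0.5;   /* acceptance threshold (assumption) */

    for (int i = 0; i < n; i++) {
        if (markers[i].id == patternID && markers[i].cf > bestCf) {
            bestCf = markers[i].cf;
            best = i;
        }
    }
    return best;   /* the single video gets attached to this marker */
}
```

Raising the acceptance threshold is one way to trade false triggers for the longer recognition time mentioned above.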

3. Multiple Tiles, multiple videos

The clip below was an AR investigation centered on multiple triggers in one scene, each bringing up a specific virtual object. An interesting item to note is the orientation of the word “Quicktime” as it was being rotated: you can see the “back” of the virtual screen as if it were transparent! OpenGL is doing that automatically. Unexpected, but definitely cool :-)
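The see-through “back” is most likely just OpenGL’s default behaviour: back-face culling is off unless the code turns it on with glEnable(GL_CULL_FACE), so the reversed quad stays visible.

On the code side, supporting several tiles, each with its own clip, is mostly bookkeeping: keep a small table from pattern ID to movie instead of one hard-coded clip. A rough sketch of that mapping (the pattern IDs and movie paths below are ours, not the SDK’s):

```c
#include <stddef.h>

/* Hypothetical table tying each loaded pattern ID to its own clip. */
typedef struct {
    int         patternID;   /* value returned by arPattLoad() */
    const char *moviePath;   /* clip to overlay on that tile */
} TileMovie;

static const TileMovie kTileMovies[] = {
    { 0, "Data/clip_hiro.mov" },
    { 1, "Data/clip_kanji.mov" },
    { 2, "Data/clip_sample.mov" },
};

/* Look up which clip belongs to a detected pattern; NULL if none. */
const char *movieForPattern(int patternID)
{
    for (size_t i = 0; i < sizeof(kTileMovies) / sizeof(kTileMovies[0]); i++) {
        if (kTileMovies[i].patternID == patternID) return kTileMovies[i].moviePath;
    }
    return NULL;
}
```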

OK…time to go outside and talk to some “real” people!
