in ISMAR 2017
Call for participation
Updated on September 10th
Model-based tracking and SLAM for 6DoF camera pose estimation have been developed for decades. These tracking methods are now implemented in commercial devices (e.g., Microsoft HoloLens, Google Tango, and Intel RealSense) so that users can easily develop AR applications. SDKs such as Vuforia are also available. These days, tracking technologies may be considered mature and can be used as black-box tools.
Against this background, this year we would like to look back at the first concept of AR, described in the following paper.
Caudell, Thomas P., and David W. Mizell, “Augmented reality: An application of heads-up display technology to manual manufacturing processes,” International Conference on System Sciences, 1992.
According to this paper, AR-based visualization was initially introduced to support manual manufacturing processes. The question for us now is: can we easily accomplish such manufacturing processes with the support of current AR technologies? For the further popularization of AR, it is important to investigate the performance of recent tracking methods and commercial systems in such situations. Therefore, we have designed simple but practical scenarios of manufacturing processes as onsite tracking competitions.
The competitions are open to both non-experts and experts in tracking technologies. Participants can use this opportunity to check the performance of their AR systems or to start research on AR. Through the competitions, we hope to clarify what we can and cannot do now, and to find further research issues for the AR community.
Scenarios for competitions
We organize the manufacturing processes into three different scenarios according to the size of the target objects/environment, as follows.
Scenario #1: Circuit board assembly
The 1st scenario is designed for supporting fine manual processes. For example, soldering on circuit boards is a task requiring concentration that must be performed accurately on small boards. The required accuracy is on the order of a few millimeters. For this scenario, a planar A3- or A4-sized paper will be the target object. Since the template image of the target object will be provided in addition to a chessboard, both model-based 2D planar tracking and SLAM are applicable to this scenario.
Scenario #2: Cabling servers in a rack
The 2nd scenario is designed for supporting manual processes in a larger space than the 1st scenario, such as cabling servers in a rack. In this case, the required accuracy can be considered on the order of 10 millimeters. For this scenario, a planar A0- or B0-sized paper will be used as the target object. The difference from the 1st scenario is that some 3D objects will be placed on the paper, so 3D tracking is required to accomplish the tasks.
Scenario #3: Product assembly (may not be organized this year)
The 3rd scenario is designed for supporting manual processes in a larger space than the 2nd scenario, such as the assembly of cars or other instruments in a 2 m x 5 m space. In this scenario, the environment will be similar to that of ISMAR 2015.
The tasks are common to all the scenarios, as described below. The main differences among the scenarios are the size of the target objects/environment and the required accuracy of pinning. Note that the scenarios may be simplified according to constraints at the conference site.
Tasks and evaluation criteria
The tasks for participants are basically similar to those in the ISMAR 2015 Tracking Competitions and are common to all the scenarios, as follows.
- Receive 3D point coordinates, the size of a chessboard square in millimeters, and the number of squares in the chessboard
- Acquire a coordinate system from the chessboard placed at a corner of the target object (all scenarios), or from the template image of the target object (1st scenario only)
- Compute camera poses and show supporting annotations to help find the given points
- Visualize each given point and put a pin in one of the circles on the target object, one by one
The evaluation criteria are similar to those in the ISMAR 2008 Tracking Competitions, as follows.
- The number of pins placed in correct circles
An example of a target object for the 1st scenario is here.
The system needs at least the following functions.
- Sensor pose estimation using vision or other sensors
- Annotation visualization on a screen to support the tasks
Participants need to develop an AR-based support system, which contains at least one screen. The system can be composed of any types of sensors, such as cameras, IMUs, and Wi-Fi, as long as they do not contact the target object. Setup time to install external devices for outside-in tracking is available if necessary. Participants are welcome to simply use commercial devices or SDKs and implement only the annotation visualization themselves. There is no constraint on the AR annotations used to support the tasks.
Participants are asked to display their screen to the audience using a wireless HDMI device prepared by the organizers or by other means. If the system is implemented on a mobile device, the screen image needs to be transferred to a laptop connected to a projector using a screen mirroring application.
Hideaki Uchiyama, Kyushu University, Japan
Sei Ikeda, Ritsumeikan University, Japan
Shohei Mori, Keio University, Japan