TVCG Special Session on Mixed and Augmented Reality

  • Full Conference Pass
  • Full Conference 1-Day Pass
  • Basic Conference Pass

Date/Time: 28 November 2017, 04:15pm - 06:00pm
Venue: Silk 4
Location: Bangkok Int'l Trade & Exhibition Centre (BITEC)


Simultaneous Projection and Positioning of Laser Projector Pixels

Summary: This paper presents a novel projected pixel localization principle for online geometric registration in dynamic projection mapping applications. We propose measuring the timing of a laser projector's raster-scanning beam with a photosensor to estimate the beam's position while the projector displays meaningful visual information to human observers. Based on this principle, we develop two position estimation techniques. One estimates the position of a projected beam when it directly illuminates a photosensor. The other localizes a beam by measuring the reflection from a retroreflective marker with a photosensor placed in the optical path of the projector. We conduct system evaluations with prototypes to validate both techniques and to confirm the applicability of the proposed principle. In addition, we discuss the technical limitations of the prototypes based on the evaluation results. Finally, we build several dynamic projection mapping applications to demonstrate the feasibility of the proposed principle.
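
The core of the principle is a timing-to-position conversion: because a laser projector draws the image pixel by pixel in a fixed raster pattern, the instant at which the photosensor detects the beam, measured relative to the frame sync, identifies which pixel is currently illuminating it. The sketch below illustrates this conversion under the assumption of a unidirectional, constant-speed scan; the frame rate, resolution, and function names are illustrative and not taken from the paper.

    # Sketch: map a photosensor detection time to projector pixel coordinates.
    # Assumes a unidirectional raster scan at constant speed; values are illustrative.
    FRAME_RATE_HZ = 60.0            # assumed refresh rate of the laser projector
    WIDTH, HEIGHT = 1280, 720       # assumed scan resolution

    FRAME_PERIOD = 1.0 / FRAME_RATE_HZ
    LINE_PERIOD = FRAME_PERIOD / HEIGHT
    PIXEL_PERIOD = LINE_PERIOD / WIDTH

    def pixel_from_detection_time(t_detect, t_frame_sync):
        """Estimate the (x, y) pixel the beam was drawing when the sensor fired."""
        dt = (t_detect - t_frame_sync) % FRAME_PERIOD   # time into the current frame
        y = int(dt // LINE_PERIOD)                      # scan line reached so far
        x = int((dt % LINE_PERIOD) // PIXEL_PERIOD)     # pixel within that line
        return min(x, WIDTH - 1), min(y, HEIGHT - 1)

    # Example: a pulse detected 3.47 ms after the frame sync
    print(pixel_from_detection_time(0.00347, 0.0))

The same mapping serves both techniques described in the summary; in the retroreflective variant the pulse is measured at a photosensor placed in the projector's optical path rather than at the illuminated surface.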

Author(s): Yuki Kitajima, Graduate School of Engineering Science, Osaka University
Daisuke Iwai, Graduate School of Engineering Science, Osaka University
Kosuke Sato, Graduate School of Engineering Science, Osaka University

Speaker(s): Yuki Kitajima, Graduate School of Engineering Science, Osaka University


Real-Time View Correction for Mobile Devices

Summary: We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. We show several use cases of our method for both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
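
As a rough illustration of the underlying novel-view synthesis step, the sketch below forward-warps an RGB-D frame into a virtual camera: each pixel is unprojected with the source intrinsics, transformed by the relative pose, and reprojected with a simple z-buffer. This is an assumed CPU reference of the general technique, not the paper's real-time GPU pipeline, and it omits the temporally consistent inpainting of disocclusions; the pinhole model with shared intrinsics K and all names are illustrative.

    import numpy as np

    def warp_rgbd(color, depth, K, R, t):
        """color: HxWx3, depth: HxW in metres, K: 3x3 intrinsics,
        (R, t): source-to-target rigid transform. Returns the warped color image."""
        h, w = depth.shape
        out = np.zeros_like(color)
        zbuf = np.full((h, w), np.inf)

        v, u = np.mgrid[0:h, 0:w]
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
        pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                # back-project into the source frame
        pts = R @ pts + t.reshape(3, 1)                                    # move into the target camera frame
        proj = K @ pts
        z = proj[2]
        valid = (z > 1e-6) & (depth.reshape(-1) > 0)                       # in front of the camera, valid depth
        x = np.round(proj[0, valid] / z[valid]).astype(int)
        y = np.round(proj[1, valid] / z[valid]).astype(int)
        src = np.flatnonzero(valid)
        inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        flat_color = color.reshape(-1, 3)
        for i, xi, yi, zi in zip(src[inside], x[inside], y[inside], z[valid][inside]):
            if zi < zbuf[yi, xi]:                                          # z-buffer test: keep the nearest surface
                zbuf[yi, xi] = zi
                out[yi, xi] = flat_color[i]
        return out

Pixels left empty by the warp correspond to the disocclusions that the paper's method inpaints.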

Author(s): Thomas Schöps, ETH Zurich
Martin R. Oswald, ETH Zurich
Pablo Speciale, ETH Zurich
Shuoran Yang, ETH Zurich
Marc Pollefeys, ETH Zurich

Speaker(s): Pablo Speciale, ETH Zurich


Occlusion Leak Compensation for Optical See-Through Displays Using a Single-Layer Transmissive Spatial Light Modulator

Summary: We propose an occlusion compensation method for Optical See-Through Head-Mounted Displays (OST-HMDs) equipped with a single-layer transmissive Spatial Light Modulator (SLM), in particular a Liquid Crystal Display (LCD). Occlusion is an important depth cue for 3D perception, yet realizing it on OST-HMDs is particularly difficult due to the displays' semitransparent nature. A key component for occlusion support is the SLM, a device that can selectively modulate light rays passing through it. For example, an LCD is a transmissive SLM that can block or pass incoming light rays by turning pixels black or transparent. A straightforward solution places an LCD in front of an OST-HMD and drives it to block the light rays that would otherwise pass through the rendered virtual objects as seen from the viewpoint. This simple approach is, however, flawed: the depth mismatch between the LCD panel and the virtual objects leads to blurred occlusion. This has led existing OST-HMDs to employ dedicated hardware such as focus optics and multi-stacked SLMs. In contrast to these viable yet complex and/or computationally expensive solutions, we return to the single-layer LCD approach for its hardware simplicity while maintaining fine occlusion: we compensate for the degraded occlusion area by overlaying a compensation image. We compute this image from the HMD parameters and the background scene captured by a scene camera. Our evaluation demonstrates that the proposed method suppresses the intensity of the occlusion leak error to 39%.
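
The compensation idea can be pictured with a simplified image-formation model: the light reaching the eye is the background attenuated by the defocus-blurred LCD mask plus the displayed image, so the display can add back what the blurred mask fails to block. The sketch below assumes a Gaussian defocus kernel and the model observed = background x (1 - blurred mask) + displayed; both are illustrative stand-ins for the paper's calibrated, HMD-parameter-based computation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compensation_image(virtual, occlusion_mask, background, defocus_sigma=8.0):
        """virtual: HxWx3 desired virtual content (0..1), occlusion_mask: HxW binary
        (1 = block), background: HxWx3 scene-camera image aligned to the display."""
        # The sharp LCD mask appears blurred at the viewpoint because the panel is out of focus.
        blurred_block = gaussian_filter(occlusion_mask.astype(float), defocus_sigma)
        transmitted = 1.0 - blurred_block            # fraction of background still visible
        leak = background * transmitted[..., None]   # background light leaking through the mask
        # Display enough light so that leak + displayed image approximates the desired virtual image.
        return np.clip(virtual - leak, 0.0, 1.0)

The clipping marks the limit of an additive display: leak brighter than the desired virtual pixel cannot be subtracted, which is why the residual error is reported rather than eliminated.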

Author(s): Yuta Itoh, Keio University
Takumi Hamasaki, Keio University
Maki Sugimoto, Keio University

Speaker(s): Takumi Hamasaki, Keio University
Yuta Itoh, Keio University


Natural Environment Illumination: Coherent Interactive Augmented Reality for Mobile and Non-Mobile Devices

Summary: Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with a consistent appearance of virtual objects remains a problem on these devices. In this paper, we present a full pipeline for augmenting the camera image of a mobile device equipped with a depth sensor, without any pre-processing. We show how to work directly on a recorded 3D point cloud of the real environment with high dynamic range color values. To handle the camera's automatic settings, we introduce a color compensation method. Based on this, we show photorealistic augmentations using different approaches to differential light simulation. These methods are developed specifically for mobile devices and run at interactive to real-time frame rates. They are also scalable, trading performance for quality, and can produce high-quality renderings on desktop hardware.
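
Differential light simulation here refers to differential-rendering-style compositing: the change in simulated light transport caused by inserting the virtual object (its shadows and color bleeding) is added to the real camera image, while pixels covered by the object itself take the rendered value. The sketch below shows only this generic compositing step under that assumption, not the paper's point-cloud pipeline; the two renderings of the local scene, with and without the virtual object, are assumed to be produced elsewhere.

    import numpy as np

    def differential_composite(camera_image, render_with, render_without, object_mask):
        """camera_image, render_with, render_without: HxWx3 linear RGB in 0..1;
        object_mask: HxW, 1 where the virtual object covers the pixel."""
        # Change in light transport caused by the virtual object (shadows, color bleeding)
        delta = render_with - render_without
        composite = np.clip(camera_image + delta, 0.0, 1.0)
        # Pixels belonging to the virtual object come directly from the rendering.
        m = object_mask[..., None].astype(float)
        return m * render_with + (1.0 - m) * composite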

Author(s): Kai Rohmer, TU Clausthal
Johannes Jendersie, TU Clausthal
Thorsten Grosch, TU Clausthal

Speaker(s): Johannes Jendersie, TU Clausthal