Symposium of Virtual and Augmented Reality

  • Full Conference Pass
  • Full Conference 1-Day Pass
  • Basic Conference Pass
  • Experience Pass
  • Exhibitor Pass

Date/Time: 28 November 2017, 4:15pm - 6:00pm
Venue: MR 212 & 213
Location: Bangkok Int'l Trade & Exhibition Centre (BITEC)


Personalized Virtual Avatars in Seconds

Summary: As virtual reality (VR) grows into the next-generation platform for online communication, the age of immersive technologies will create a growing need for detailed visual representations of ourselves. A realistic simulation of our presence in such a virtual world is unthinkable without a compelling and directable 3D digitization of ourselves. With the wide availability of mobile cameras and the emergence of low-cost VR head-mounted displays (HMDs), my research goal is to build a comprehensive and deployable teleportation framework for realistic 3D face-to-face communication in cyberspace. By pushing the boundaries of data-driven human digitization and bridging concepts from computer graphics and deep learning research, I will showcase several highlights of our latest work on democratizing the creation of virtual avatars.

Speaker(s): Hao Li, Pinscreen
Hao Li is CEO and Co-Founder of Pinscreen, an assistant professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and their performances for immersive communication in virtual worlds. His research involves the development of novel geometry processing, data-driven, and deep learning algorithms. He is known for his seminal work in non-rigid shape alignment, real-time facial performance capture, hair digitization, and dynamic full-body capture. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013, and has been awarded the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).


Virtual Body and Augmented Body

Summary: Social revolutions have always been accompanied by innovations in how we view the body. If we regard the information revolution as the establishment of a virtual society alongside the real one, we need to design a new view of the body, the "JIZAI body" (freedomated body), which can adapt freely to changes in social structure. In this talk, we discuss the basic knowledge about body editing needed to construct the JIZAI body using VR, AR, and robotics.

Speaker(s): Masahiko Inami, University of Tokyo
Masahiko (Masa) Inami is a Professor in the Research Center for Advanced Science and Technology at the University of Tokyo, Japan, where he also directs the Inami JIZAI Body Project under JST/ERATO. His research interests are in the Augmented Human: human I/O enhancement technologies spanning perception, HCI, and robotics. He received BE and MS degrees in bioengineering from the Tokyo Institute of Technology and a PhD from the Department of Advanced Interdisciplinary Studies (AIS) at the University of Tokyo in 1999. He joined the Faculty of Engineering of the University of Tokyo and, in 1999, moved to the University of Electro-Communications. In April 2008 he joined Keio University, where he served as a Professor of the Graduate School of Media Design and Vice-director of the International Virtual Reality Center until October 2015. In November 2015 he rejoined the University of Tokyo. His installations have appeared at the Ars Electronica Center, and he proposed and organized the Superhuman Sports Society.


Empathic Computing Using AR and VR

Summary: Empathic Computing is a new field of research that explores how technology can help people understand what others are seeing, hearing, and feeling in real time. In this presentation I will discuss how AR and VR technology can be used for Empathic Computing and for creating new types of collaborative experiences. In particular, I will present AR and VR systems that enable users to capture and share their surroundings in real time, share emotions with collaborators, and support Mixed Reality AR/VR collaboration. I will also discuss the challenges that must be overcome before Empathic Computing systems based on AR and VR become widespread, and important directions for future research.

Speaker(s): Mark Billinghurst, Director of the Empathic Computing Laboratory and Professor at the University of South Australia
Mark Billinghurst is a Professor at the University of South Australia, where he directs the Empathic Computing Laboratory. He earned a PhD from the University of Washington in 2002, researches how virtual and real worlds can be merged, and has published over 350 papers. Previously he was Director of the HIT Lab NZ at the University of Canterbury, and he has worked at Nokia, Google, Amazon, and the MIT Media Laboratory. He received the 2013 IEEE VR Technical Achievement Award for contributions to research and commercialization in AR, and is a Fellow of the Royal Society of New Zealand.


The Immersive Internet

Summary: The onset of consumer AR/VR necessitates rethinking the nearly quarter-century-old internet browser design of Mosaic and its successors. Simply pasting a current-generation browser in front of your face in VR is fatiguing over long periods of focus, makes poor use of VR's expansive visual real estate, and sub-optimally presents new media such as 360° videos or 3D models and animation. Imagine instead a webpage as an immersive space, and a link as a portal: a tear in space-time that you can magically open as needed and walk through into another space. Now imagine that as you walk through the internet, you can interact and collaborate with other people simply by virtue of being at the same internet address. Re-imagining webpages as webspaces is not about 2D versus 3D. It is simply about enabling all content, regardless of dimension, to be presented optimally, and about the ability to interact with this content through an interface that is as natural as, or better than, the way we interact with content in the physical world. This talk will address both scientific and practical issues in the evolution of the internet in VR, set in the context of JanusVR, an immersive internet browser.

Speaker(s): Karan Singh, Co-Founder, JanusVR
Karan Singh is a Professor of Computer Science at the University of Toronto. He co-directs a globally reputed graphics and HCI lab, DGP, has over 100 peer-reviewed publications, and has supervised over 40 MS/PhD theses. His research interests lie in interactive graphics, spanning art and visual perception, geometric design and fabrication, character animation and anatomy, and interaction techniques for mobile, Augmented, and Virtual Reality (AR/VR). He was a technical lead for the Oscar-winning software Maya and the R&D Director for the 2004 Oscar-winning animated short Ryan. He has co-founded multiple companies, including Arcestra (architectural design), MeshMixer (acquired by Autodesk in 2011), and JanusVR.