Organizer(s): Hongchuan YU, The National Centre for Computer Animation, Bournemouth University
Taku Komura, Edinburgh University
Jian Jun Zhang, NCCA, Bournemouth University
Producing realistic animation is one of the main goals of computer graphics and animation. This workshop addresses the production of realistic animation from natural data such as video, 3D scans, and motion capture (MoCap). It presents attendees with the latest approaches and tools, from natural data acquisition and processing through to reproduction in computer graphics and animation. With smartphones, cheap real-time 3D scanners (e.g. the Microsoft Kinect), and MoCap devices now pervasive in daily life, the volume of available natural data is growing dramatically. Extracting realistic motion data from daily life and reproducing or reusing the sampled data in animation production is therefore an important approach for animators.
Topics include natural data processing, motion capture systems, video-based motion capture, 3D scan processing, motion editing, facial expression capture, retargeting, and animation. Other related topics may come from data processing in the computer vision, graphics, and Big Data fields.
Title: An Overview of Procedural Urban Modeling in Academia and Industry
Speaker: Dr Tom Kelly
Abstract: Manually creating objects to fill our virtual 3D worlds can be tedious and time-consuming; ‘procedural modeling’ is the study of programs that do this for us. Of particular interest today is urban procedural modeling - creating systems to automatically generate cities. These systems are of interest to city designers, architects, SFX companies, and video game developers. Procedural modeling is a relatively new discipline, but it intersects many existing fields, including user interfaces, geometry, and perceptual studies. In this talk, Tom will discuss his experiences of procedural modeling in industry and academia.
For example, users are easily confused by the many input parameters that 3D procedural models expose, and are therefore unable to interact with them; presenting these parameters to users requires advances in user interfaces. Another field is geometry - here the problem is finding geometric primitives that can be used to create realistic models; ideally these should be simple, yet useful, when modeling a wide range of real-world buildings. Finally, it is important to understand what makes a procedural model realistic. This is a challenging perceptual problem because a single urban procedural model can create many different buildings. Tom will present and discuss his research into each of these problems, and how the results have been used in industry.
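To make the idea of parameter-driven procedural modeling concrete, here is a minimal, self-contained sketch (not from the talk; the function names and generation rules are hypothetical illustrations): a tiny grammar that turns two input parameters, floor count and bay count, into a building facade, and then varies those parameters to produce many different buildings from one model.

```python
import random

def generate_building(floors, bays, seed=0):
    """Generate a building facade as a grid of cell labels.

    Each floor is split into `bays` cells; the ground floor gets a
    door in a randomly chosen bay, and upper floors get windows.
    """
    rng = random.Random(seed)
    door_bay = rng.randrange(bays)
    facade = []
    for floor in range(floors):
        if floor == 0:
            row = ["door" if b == door_bay else "wall" for b in range(bays)]
        else:
            row = ["window"] * bays
        facade.append(row)
    return facade

def generate_block(n_buildings, seed=0):
    """A city block: the same procedural model, re-run with
    randomly sampled parameters, yields many distinct buildings."""
    rng = random.Random(seed)
    return [
        generate_building(rng.randint(2, 6), rng.randint(2, 4), seed=rng.random())
        for _ in range(n_buildings)
    ]
```

Even in this toy, the user-facing difficulty the talk describes is visible: every added rule brings another parameter, and a realistic model quickly accumulates dozens of them.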
Biosketch: Tom is a postdoc at UCL under Niloy Mitra; he studies the modeling and reconstruction of large urban environments, fusing techniques from geometry processing, procedural modeling, and video games to create state-of-the-art solutions to real-world problems. Previously, Tom has worked at animation and video start-ups, and as a software engineer at Esri.
Title: Simulating the Natural Motion of Living Creatures
Speaker: Dr Jungdam Won
Abstract: Simulating the natural motion of living creatures has always been at the heart of research interest in computer graphics and animation. Recent movies and video games have featured realistic, computer-generated creatures built on special-effects technology. Flying creatures have attracted great attention because of their unique and beautiful motions. Physics-based control for flying creatures such as birds, which guarantees the physical plausibility of the resulting motion, has not been widely studied due to several technical difficulties, including under-actuation, complex musculoskeletal interactions, and high dimensionality. This talk introduces several approaches to tackling these challenges. First, we recorded the motion of a dove using marker-based optical motion capture and high-speed video cameras. The bird flight data thus acquired allow us to parameterize natural wingbeat cycles and provide the simulated bird with reference trajectories to track in physics simulation. A data augmentation method is also introduced to construct a regression-based controller. Second, we trained deep neural networks that generate appropriate control signals given the state of the flying creature. Starting from a user-provided keyframe animation, learning proceeds automatically via deep reinforcement learning, equipped with evolutionary strategies to improve the convergence rate and the quality of the control.
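The evolutionary-strategy ingredient mentioned in the abstract can be illustrated with a toy sketch (an illustration only, not the speaker's implementation): a simple (1+lambda) strategy that improves controller parameters by keeping the best of a population of Gaussian perturbations. Here the expensive physics rollout is replaced by a hypothetical stand-in cost function.

```python
import random

def tracking_error(params, target=(0.5, -0.2, 0.8)):
    """Toy stand-in for a simulation rollout's cost: squared distance
    of the controller parameters from a target unknown to the optimizer."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(cost, dim=3, pop=20, sigma=0.1, iters=200, seed=0):
    """(1+lambda) evolutionary strategy: at each iteration, sample `pop`
    Gaussian perturbations of the current best parameters and keep any
    candidate that lowers the cost."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_cost = cost(best)
    for _ in range(iters):
        for _ in range(pop):
            cand = [b + rng.gauss(0.0, sigma) for b in best]
            c = cost(cand)
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost
```

Because the strategy only compares rollout costs, it needs no gradients through the physics simulation, which is one reason such methods pair well with reinforcement learning for under-actuated systems like wings.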
Biosketch: Jungdam Won is a post-doctoral researcher in the Movement Research Lab at Seoul National University. He received his Ph.D. and B.S. in Computer Science and Engineering from Seoul National University, Korea, in 2017 and 2011, respectively. He worked at Disney Research Los Angeles as a Lab Associate Intern with Jehee Lee, Carol O'Sullivan, and Jessica K. Hodgins in 2013. His current research interests are in character animation, applying physics-based control, motion capture, and machine learning approaches.
Program, Monday, 27 Nov 2017
09:00-09:30: Opening and Keynote, Dr Tom Kelly (UCL), Procedural Urban Modeling in Industry and Academia
09:30-09:50: Thanh Nguyen Xuan, Ha Le Thanh and Yu Hongchuan, Motion Style Extraction Based on Sparse Coding Decomposition
09:50-10:10: Yonghang Yu, Yuhang Huang and Takashi Kanai, Data-Driven Approach for Simulating Brittle Fracture Surfaces
10:10-10:30: Edmond S. L. Ho, Hubert P. H. Shum, He Wang and Li Yi, Synthesizing Motion with Relative Emotion Strength
11:00-11:20: Rami Al-Ashqar, Xi Zhao and Taku Komura, Character Motion Adaptation to Novel Geometry
11:20-11:40: Phong Do, Hongchuan Yu and Thanh Nguyen, Learning components of Sparse PCA for motion style transfer
13:30-14:00: Keynote, Jungdam Won (Seoul), Simulating the natural motion of living creatures
14:00-14:20: Thanh-Nghi Do, Nguyen-Khang Pham, The-Phi Pham, Minh-Thu Tran-Nguyen and Huu-Hoa Nguyen, Parallel Bag-SVM-SGD for classifying very high-dimensional and large-scale multi-class datasets
14:20-14:40: Volker Helzle, Kai Goetz and Diana Arellano, Creating Generic Data-driven Face Rigs for Digital Actors
14:40-15:00: Zhiguang Liu, Liuyang Zhou, Howard Leung, Franck Multon and Hubert P. H. Shum, High Quality Compatible Triangulations for Planar Shape Animation
15:30-15:50: Masaki Oshita and Yasutaka Honda, Crowd Simulation Using Deep Learning and Agent Space Heat Maps
15:50-16:10: Nuntiya Chiensriwimol, Azri Noah, Nazri Osman, Pornchai Mongkolnam and Jonathan Chan, Frozen Shoulder Exercise Simulation for Treatment Support