Keynote Speakers
Steven LaValle
Professor
University of Oulu
Steven M. LaValle is Professor of Computer Science and Engineering, in particular Robotics and Virtual Reality, at the University of Oulu. Since 2001, he has also been a professor in the Department of Computer Science at the University of Illinois. Earlier, he held positions at Stanford University and Iowa State University. His research interests include robotics, virtual and augmented reality, sensing, planning algorithms, computational geometry, and control theory. In research, he is best known for introducing the Rapidly-exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. In industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017, he was Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, Ltd. He has authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality. He currently leads an Advanced Grant from the European Research Council on the Foundations of Perception Engineering. More information: http://lavalle.pl
Keynote: The Path to Perception Engineering
This talk starts with some motivational background, tracing my own path from research on robot planning algorithms to the development of the Oculus Rift. This path has led us to propose that virtual reality (VR), and parts of other fields involving sensing and perception, can be reframed as perception engineering, in which the object being engineered is the perceptual illusion itself, and the physical devices that achieve it are auxiliary. This talk will report on our progress toward developing mathematical foundations that attempt to bring the human-centered sciences of perceptual psychology, neuroscience, and physiology closer to core engineering principles by viewing the design and delivery of illusions as a coupled dynamical system. The system is composed of two interacting entities: the organism and its environment, where the organism may be biological or even an engineered robot. Our vision is that the research community will one day have principled engineering approaches to the design, simulation, prediction, and analysis of sustained, targeted perceptual experiences. It is hoped that this direction of research will offer valuable guidance and deeper insights into VR, robotics, graphics, and possibly the biological sciences that study perception.
Rachel McDonnell
Associate Professor
Trinity College Dublin
Rachel McDonnell is an Associate Professor of Creative Technologies at Trinity College Dublin, Ireland. Her research focuses on the animation of virtual characters, using perception both to deepen our understanding of how virtual characters are perceived and to provide new algorithms and guidelines for industry developers on where to focus their efforts. She has published over 100 papers in conferences and journals in her field, including many top-tier publications at venues such as SIGGRAPH, Eurographics, and IEEE TVCG. She serves as Associate Editor on journals such as ACM Transactions on Applied Perception and Computer Graphics Forum, and is a regular member of many international program committees (including ACM SIGGRAPH and Eurographics). More information: https://www.scss.tcd.ie/Rachel.McDonnell
Keynote: Oversharing in Virtual Reality: What does our motion reveal about us?
In the early 1970s, psychologists investigated biological motion perception by attaching point lights to the joints of the human body, creating what became known as ‘point-light walkers’. These early experiments showed biological motion perception to be an extreme example of sophisticated pattern analysis in the brain, capable of easily differentiating human motions even from such reduced motion cues. Further experiments showed that biological motion is rich in psychological information such as social categories, emotional state, intentions, and underlying dispositions. Nowadays, motion data from reduced cues is routinely tracked using motion capture systems or even VR headsets and controllers, and applied to virtual avatars in immersive virtual environments. This data contains psychological information that could be extracted, stored, or even shared. In this talk, I will discuss research that I have conducted over the years on the perception of full-body motion capture and the effect of applying it to different avatar morphologies, ranging from photorealistic virtual humans to flesh-eating zombies! I will also discuss the implications for avatar-based interactions in immersive virtual worlds as technology develops and motion capture data becomes more accessible to all.
Niloy J. Mitra
Professor
University College London (UCL)
Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London and the Adobe Research London Lab. He received his PhD from Stanford University under the guidance of Leonidas Guibas. His current research focuses on developing machine learning frameworks for generative models of high-quality geometric and appearance content for CG applications. Niloy received the 2019 Eurographics Outstanding Technical Contributions Award, the 2015 British Computer Society Roger Needham Award, and the 2013 ACM SIGGRAPH Significant New Researcher Award. He was elected a fellow of Eurographics in 2021 and served as SIGGRAPH Technical Papers Chair in 2022. Besides research, Niloy is an active DIYer and loves reading, cricket, and cooking. More information: http://geometry.cs.ucl.ac.uk
Keynote: Learning Motion-guided Dynamic Garment Detail
Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry remains a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply by the motion of the underlying character. Over the last few years, we have developed deep learning methods to generate wrinkles, neural garments, and motion-guided dynamic 3D garments, especially loose garments. We take inspiration from the classical garment simulation literature and learn data priors that generalize to new body shapes, motion types, and garment dimensions. In this talk, I will describe our findings and discuss open challenges in this area.
Daniel Holden
Principal Animation Programmer
Epic Games
Daniel Holden is a Principal Animation Programmer at Epic Games doing research and development on animation in the Unreal Engine. Before this, he worked at Ubisoft's industrial research lab "La Forge", developing machine learning techniques for various areas of video game development such as animation and physics. He completed his PhD at the University of Edinburgh in 2017, with work focused on the use of neural networks and machine learning for character animation. His research has been presented at a number of conferences, including SIGGRAPH, SIGGRAPH Asia, and GDC. More information: https://theorangeduck.com
Keynote: Future Animation Systems
Ten years ago, the ideas behind Physically Based Rendering began to revolutionize rendering, not through specific methods or implementations, but via the philosophy they implied. I believe that if we want to build the animation systems of the future, we need to go through a similar philosophical shift. In this talk, I will discuss some of the lessons we can learn from this history of rendering, what challenges (and advantages) are unique to animation, and how some of my previous research has tried both to tackle these challenges and to exploit the advantages to get us closer to that future.