Keynote Speakers


Michiel van de Panne

How to train your character in the AI era

Physically-simulated characters and their humanoid or quadruped robotic cousins suddenly seem to be everywhere. However, there is still no unified method behind the solutions. What are the roles of example data and reinforcement learning as key ingredients? In this talk we examine design patterns at both ends of this spectrum, as well as many methods that lie somewhere in between. In particular, we highlight a multitude of roles that example data can play when “learning to move”. We also reflect on the rich design space that comes into view when we consider varied tasks, various time scales, feed-forward control, control factorization, world models with forces, simplified models, react-vs-search, and sim-to-real solutions. We showcase a number of past and current approaches through these ideas and speculate on progress to come.

Michiel van de Panne is passionate about understanding how to design physics-based characters and robots that can move with the skill, grace, and purpose that we see every day in human and animal movements. His research interests include reinforcement learning, kinematic and dynamic models of movement, robotics, motion planning, and control. His students and postdocs have (co)founded companies, are faculty at numerous universities, are in research roles in top game studios, and have played leading roles at Tesla (director of AI and Autopilot vision). In 2002 he co-founded the ACM/Eurographics Symposium on Computer Animation, a leading forum for computer animation research. He was the recipient of the 2022 SIGGRAPH Computer Graphics Achievement Award for contributions to physics-based character animation. He currently serves as the deputy director of CAIDA, UBC's main AI organization.



Dinesh Manocha

Audio Simulation and Understanding: From Scientific Solvers to LLMs

Sound simulation methods based on synthesis and propagation have long been used to generate audio content and effects for gaming, VR, and multimedia applications. At the same time, audio comprehension—including speech, non-speech sounds, and music—is essential for AI agents to interact effectively with the world. Yet, compared to modalities such as language and vision, research in audio processing has lagged behind.

In this talk, we give an overview of our work over the last two decades on audio simulation and understanding. This includes earlier works based on ray-tracing and scientific solvers. Next, we talk about acceleration and prediction methods based on machine learning. This includes learning acoustic scattering fields of real and virtual objects, scene-aware audio rendering based on recordings, and material-aware binaural sound propagation for reconstructed 3D scenes. We demonstrate how synthetic simulation methods can be utilized to generate training data and design networks using geometric deep learning methods.

Next, we provide an overview of audio large language models (ALLMs) used for advanced audio perception and complex reasoning. This includes GAMA, which is built with a specialized architecture, optimized audio encoding, and a novel alignment dataset. GAMA’s development builds on our past research, including MAST, SLICER, and EH-MAM, which are novel approaches for learning robust audio representations from unlabeled data. Complementing this, we introduced ReCLAP, a state-of-the-art audio-language encoder, and CompA, one of the first projects to tackle compositional reasoning in audio-language models—a critical challenge given the inherently compositional nature of audio. Towards the end, we discuss Audio Flamingo 2 and Audio Flamingo 3, which are our open ALLM models that provide advanced long-audio understanding and reasoning capabilities across speech, sound, and music.

Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed numerous software packages that have become standards and are licensed to more than 60 commercial vendors. He has published more than 850 papers and supervised 57 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He co-founded Impulsonic, a physics-based audio simulation technology developer, which Valve Inc. acquired in November 2016.



Naureen Mahmood

Animating Intelligence: Building 3D Human Behavior Engines for Digital Characters and Beyond

Foundational models of human motion, large-scale motion capture datasets, and new forms of generative modeling are transforming animation not only for film, games, and virtual beings, but increasingly for healthcare, life sciences, and other domains that demand expressive digital humans. This talk will highlight the technical challenges in building such systems and how Meshcapade is bridging the gap between physical realism and semantic understanding in virtual character creation. I will explore training AI avatars to see, understand, and interact with their digital environments like real people. At the intersection of motion capture, 3D motion understanding, and generative AI, Meshcapade is building the world's first 3D human behavior engine: a system capable of producing lifelike, context-aware motion, interaction, and reaction.

Naureen Mahmood is the CEO and co-founder of Meshcapade, an award-winning tech start-up based in Europe's largest AI ecosystem: Cyber Valley. Mahmood received her B.Sc. from Lahore University of Management Sciences, Pakistan, in 2006. She received a Fulbright Scholarship for her graduate studies and completed her M.Sc. at Texas A&M's Visualization Department in 2011. From 2012 to 2018 she worked at the Max Planck Institute for Intelligent Systems in Tübingen, where she was a key author of several academic publications in computer graphics, machine learning, and computer vision. She is the inventor on a number of patents related to 3D human body models and 3D human motion. Naureen has been a speaker at the world's foremost conferences on computer vision and graphics, including CVPR, ICCV, and SIGGRAPH. In 2024, her work on 3D human motion received the esteemed Test-of-Time award from ACM SIGGRAPH.



Doctoral Dissertation Award


Anka Chen

Towards Realistic Real-time Physics-Based Simulation

Stability, realism (accuracy), and efficiency (parallelism) are the three most crucial properties of a physics-based simulator. Despite substantial research and significant advancements in physics solvers over the past decades, current methods still struggle to deliver all three. Convergent methods provide precise solutions to the physics equations, but they are less stable and usually require solving a global system, leading to poor parallel performance. Conversely, GPU-based parallel solvers, such as Extended Position Based Dynamics (XPBD), simplify the physics equations and omit important system information, and therefore fail to converge to the true solution, losing realism. Stability, meanwhile, remains a challenge for all of these methods, particularly under strict computation budgets.

My dissertation introduces a novel solver named Vertex Block Descent (VBD), a highly parallelizable solver for the variational form of implicit Euler through vertex-level Gauss-Seidel iterations. VBD operates with local vertex position updates that reduce global variational energy while maximizing parallelism. This creates a physics solver that achieves numerical convergence with unconditional stability and exceptional computational performance.
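To make the mechanism concrete, here is a minimal 1D mass-spring sketch of the vertex-level descent idea (an illustrative toy reconstruction under simplifying assumptions, not the dissertation's GPU implementation): each Gauss-Seidel sweep visits one vertex at a time and takes a local Newton step that decreases the implicit-Euler variational energy while all other vertices stay fixed.

```python
import numpy as np

def vbd_step(x, v, m, rest, k, h, iters=100):
    """One implicit-Euler step of a 1D mass-spring chain via vertex-level
    Gauss-Seidel descent on the variational energy
        E(x) = sum_i m_i/(2 h^2) (x_i - y_i)^2
             + sum_j k/2 (x_{j+1} - x_j - rest)^2,
    where y_i = x_i + h v_i are the inertial target positions."""
    n = len(x)
    y = x + h * v                 # inertial targets
    x_new = x.copy()
    for _ in range(iters):
        for i in range(n):        # Gauss-Seidel sweep: one vertex at a time
            g = m[i] / h**2 * (x_new[i] - y[i])  # gradient of inertia term
            H = m[i] / h**2                      # local Hessian (scalar in 1D)
            if i > 0:             # spring to the left neighbor
                g += k * (x_new[i] - x_new[i - 1] - rest)
                H += k
            if i < n - 1:         # spring to the right neighbor
                g -= k * (x_new[i + 1] - x_new[i] - rest)
                H += k
            x_new[i] -= g / H     # local Newton step; decreases E
    v_new = (x_new - x) / h       # implicit-Euler velocity update
    return x_new, v_new
```

Because each local energy is convex in the vertex being updated, every such step lowers the global energy; in higher dimensions, independent vertices (found by graph coloring) can be updated in parallel, which is what makes the approach GPU-friendly.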

My work also includes accurate and efficient solutions for handling collisions. This dissertation proposes a formal definition of the shortest boundary paths for self-intersecting objects and develops a robust algorithm for computing these paths. This yields a well-defined collision energy for self-colliding models, effectively resolving collisions in simulations of deformable volumetric objects. This collision resolution is then extended to codimensional objects such as cloth and strands through a universal collision energy applicable to various geometries, named Offset Geometric Contact (OGC). OGC offers computationally efficient physics simulations with guaranteed collision-free results. It is more than 100x faster than alternative methods such as Incremental Potential Contact (IPC), and it entirely avoids the non-orthogonal contact forces that IPC can generate, which can lead to severe visual artifacts. Consequently, the resulting contact energy of OGC is much less stiff than IPC's. Coupled with a novel trust-region-based optimization method, it can simulate challenging contact scenarios of unprecedented complexity within a matter of milliseconds.

My work makes physics-based simulation orders of magnitude faster than previous methods, while providing unconditional stability and penetration-free guarantees in highly stressful simulations with extremely large numbers of collisions.

Additionally, my work includes creating high-resolution deformation capture systems to collect real-world data on non-rigid objects for studying registration. This enables driving simulations with real data and facilitates the study of inverse physics problems.

I am currently a research scientist specializing in physics-based simulation and the capture of non-rigid objects. I received my PhD in Graphics from the University of Utah. My recent research concentrates on creating parallel solutions for implicit time integration, aiming to deliver robust and efficient strategies for handling elasticity, collisions, and the dynamics of rigid bodies. Additionally, I am investigating the use of deep learning methods to speed up the convergence of physics solvers. A significant portion of my research also includes developing high-resolution deformation capture systems. These systems are designed to collect real-world data on non-rigid objects, facilitating the study of inverse physics problems.

Early Career Researcher Award


Jiong Chen

Unfolding the Power of Green's Functions for Modeling and Simulation

In this talk, I will present my recent research on Green's functions and their applications in modeling and simulation. As fundamental solutions to partial differential equations, Green's functions provide a simple yet powerful tool for parameterizing solutions and embedding boundary conditions. Despite their theoretical appeal, practical applications are often hampered by their inability to effectively “probe” the vast space of general linear operators, and by the heavy computational overhead of enforcing boundary conditions through integral equations. To address these challenges, I will share our recent advances in Green's function methodology, driven by several compelling graphics applications such as free-space elastic deformations, generalized barycentric coordinates, and fast preconditioning techniques.
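As a concrete illustration of how a Green's function parameterizes solutions (a classical textbook 1D example, not the methods presented in the talk): for the Poisson problem -u'' = f on [0, 1] with u(0) = u(1) = 0, the Green's function is known in closed form, and the solution is simply a weighted integral of f against it.

```python
import numpy as np

# Green's function of the 1D Poisson problem -u''(x) = f(x) on [0, 1]
# with Dirichlet boundary conditions u(0) = u(1) = 0:
#   G(x, s) = x * (1 - s)  if x <= s,   s * (1 - x)  if x >= s.
def green(x, s):
    return np.where(x <= s, x * (1.0 - s), s * (1.0 - x))

# The solution is parameterized as u(x) = ∫_0^1 G(x, s) f(s) ds,
# approximated here with a midpoint quadrature rule.
def solve_poisson(f, xs, n_quad=2000):
    s = (np.arange(n_quad) + 0.5) / n_quad   # quadrature nodes
    w = 1.0 / n_quad                         # uniform weights
    return np.array([np.sum(green(x, s) * f(s)) * w for x in xs])
```

For f ≡ 1 the exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125, which the quadrature reproduces closely; the boundary conditions are built into G itself, which is exactly the "embedding boundary conditions" property the abstract refers to.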

Jiong Chen is a researcher at Inria, France. After earning his PhD from Zhejiang University in 2020, he spent a year each as a postdoctoral fellow at Télécom Paris and École Polytechnique before joining Inria Saclay, where he eventually became a permanent member of the GeomeriX team in late 2024. His research focuses on the numerical foundations of geometric modeling and physical simulation, covering topics such as multiscale analysis, numerical preconditioning and interactive techniques.

Paper Sessions


Session 1: Radical Rigids

Two-Pass Shock Propagation for Stable Stacking with Gauss–Seidel
Ziyan Xiong, Andrew Leach, Griffith Thomas and Shinjiro Sueda

Singularity-free Twist Limit Constraints for the Ball Joint
Yitong Dai, Mykhailo Potomkin and Tamar Shinar

Rig My Ride: Automatic Rigging of Physics-based Vehicles for Games
Melissa Katz, Paul Kry and Sheldon Andrews

Real-Time Triangle-SDF Continuous Collision Detection
Joël Pelletier-Guénette, Alexandre Mercier-Aubin and Sheldon Andrews


Session 2: Fire, Fluids, and Fields

FLAMEFORGE: Combustion Simulation of Wooden Structures
Daoming Liu, Jonathan Klein, Florian Rist, Wojciech Pałubicki, Sören Pirk and Dominik Michels

Fast reconstruction of implicit surfaces using convolutional neural networks
Chen Zhao, Tamar Shinar and Craig Schroeder

Representing Flow Fields with Divergence-Free Kernels for Reconstruction
Xingyu Ni, Jingrui Xing, Xingqiao Li, Bin Wang and Baoquan Chen


Session 3: Motion and Interaction: From Humans to Swarms

InterAct: A Large-Scale Dataset of Dynamic, Expressive and Interactive Activities between Two People in Daily Scenarios
Leo Ho, Yinghao Huang, Dafei Qin, Mingyi Shi, Wangpok Tse, Wei Liu, Junichi Yamagishi and Taku Komura

Fast and Accurate Parameter Conversion for Parametric Human Body Models
Julien Fischer and Stefan Gumhold

Simulating Ant Swarm Aggregations Dynamics
Matthew Loges and Tomer Weiss


Session 4: Neural Threads and Facial Forces

Dress Anyone: Automatic Physically-Based Garment Pattern Refitting
Hsiao-Yu Chen, Egor Larionov, Ladislav Kavan, Gene Lin, Doug Roble, Olga Sorkine-Hornung and Tuur Stuyck

Self-supervised Learning of Latent Space Dynamics
Yue Li, Gene Wei-Chin Lin, Egor Larionov, Aljaz Bozic, Doug Roble, Ladislav Kavan, Stelian Coros, Bernhard Thomaszewski, Tuur Stuyck and Hsiao-Yu Chen

Interactive Facial Animation: Enhancing Facial Rigs With Real-Time Shell And Contact Simulation
José Antonio Fernández-Fernández, Ryan Goldade, Ladislav Kavan, Jan Bender and Philipp Herholz

NeuRiPhy: Neural Baking of Physics-Based Deformations for Facial Rigs
Davide Corigliano, Daniel Peter, Niko Benjamin Huber, Bernhard Thomaszewski and Barbara Solenthaler


Session 5: Learning to Move

Walk This Way: Imitation-free Reinforcement Learning of Flexibly-Constrained Walking Controllers
Tiffany Matthé, Nicholas Ioannidis and Michiel van de Panne

Policy-space Interpolation for Physics-based Characters
Michele Rocca, Sheldon Andrews and Kenny Erleben

Diffusion-based Planning with Learned Viability Filters
Nicholas Ioannidis, Daniele Reda, Setareh Cohan and Michiel van de Panne

PHA: Part-wise Heterogeneous Agents with Reusable Policy Priors for Physics-Based Motion Synthesis
Luis Alberto Carranza Cobeñas, Oscar Argudo and Carlos Andujar Gran