Versatile Human Behavior Generation via Dynamic, Data-Driven Control
Tao Yu, COMP 768
Motivation FIFA 2006 (EA) • Motion of virtual characters is prevalent in: • Games • Movies (visual effects) • Virtual reality • And more… NaturalMotion endorphin
Motivation What virtual characters should be able to do: • Perform many behaviors - leaping, grasping, moving, looking, attacking • Exhibit personality - move “sneakily” or “aggressively” • Show awareness of the environment - balance/posture adjustments • Respond to physical forces - jumping, falling, swinging
Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how
Mocap and Key-framing (+) Captures style and subtle nuances (+) Absolute control – “wyciwyg” (what you capture is what you get) (-) Difficult to adapt, edit, and reuse (-) Not physically reactive, especially problematic for highly dynamic motion
Data-driven synthesis • Generate motion from examples • Blending, displacement maps • Kinematic controllers built upon existing data • Optimization / learned statistical models (+) Creators retain control: they define all rules for movement (-) Violates the “checks and balances” of motion: motion control abuses its power over physics (-) Limits emergent behavior
Physics-based animation • Ragdoll simulation • Dynamic controllers (+) Interacts well with the environment (-) “Ragdoll” movement is lifeless (-) Difficult to develop complex behaviors
Hybrid approaches Mocap → stylistic realism; physical simulation → physical realism. Hybrid approaches: • Combine the best of both approaches • Activate whichever one is most appropriate • Add life to ragdolls using control systems (only simulate behaviors that are manageable)
Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how
Overview of dynamic controller • Decision making: objectives, current state x[t] → desired motion xd[t] • Motion control: desired motion xd[t], current state x[t] → motor forces u[t] • Physics: current state x[t], motor forces u[t] → next state x[t+1] • xd[t] = Goal(x[t]); u[t] = MC(xd[t] − x[t]); x[t+1] = P(x[t], u[t]) [Diagram: objectives → Decision Making → xd[t] → Motion Control → u[t] → Physics → x[t+1]]
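A minimal sketch of this loop, with Goal, MC, and P standing in for the three boxes above; everything here is illustrative structure rather than a real engine API.

```python
# Hypothetical control loop matching xd[t] = Goal(x[t]), u[t] = MC(xd[t] - x[t]),
# x[t+1] = P(x[t], u[t]). Goal, MC, and P stand in for the decision-making layer,
# the motion controller, and one physics step, respectively.
def simulate(x0, Goal, MC, P, num_steps):
    x = x0
    trajectory = [x]
    for t in range(num_steps):
        xd = Goal(x)        # decision making: pick a desired state
        u = MC(xd - x)      # motion control: forces/torques from the error
                            # (assumes the state is a NumPy array, so xd - x is defined)
        x = P(x, u)         # physics: integrate one time step
        trajectory.append(x)
    return trajectory
```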
Physics: setting up ragdolls • Given a dynamics engine: • Set a primitive for each body part • Mass and inertial properties • Create 1-, 2-, or 3-DOF joints between parts • Set joint-limit constraints for each joint • External forces (gravity, impacts, etc.) • The dynamics engine supplies: • Updated positions/orientations • Collision resolution with the world
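A purely illustrative description of the data a ragdoll setup needs (the demo later in the talk uses ODE, but no real engine API is used here; all names and numbers are made up).

```python
# Illustrative ragdoll description only; the actual calls depend on the dynamics
# engine being used. Every name and value below is hypothetical.
from dataclasses import dataclass

@dataclass
class BodyPart:
    name: str
    primitive: str      # e.g. "capsule" or "box" approximating the limb
    mass: float         # kg; inertia is usually derived from the primitive

@dataclass
class Joint:
    parent: str
    child: str
    dof: int            # 1, 2, or 3 rotational degrees of freedom
    limits: list        # (lo, hi) per DOF, in radians

ragdoll = [
    BodyPart("torso", "box", 30.0),
    BodyPart("upper_arm_l", "capsule", 2.0),
    BodyPart("forearm_l", "capsule", 1.5),
]
joints = [
    Joint("torso", "upper_arm_l", dof=3, limits=[(-2.0, 2.0)] * 3),  # shoulder
    Joint("upper_arm_l", "forearm_l", dof=1, limits=[(0.0, 2.6)]),   # elbow
]
```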
Controller types • Basic joint-torque controller • Low-level control • Sparse pose control (may be specified by an artist) • Continuous control (e.g., tracking mocap data) • Hierarchical controller • Layered controllers • Higher-level controller determines the correct desired values for the lower level • Derived from sensor or state info: support polygon, center of mass, body contacts, etc.
Joint-torque controller Proportional-Derivative (PD servo) controller • Actuate each joint toward its desired target: τ = ks·(θdes − θ) − kd·dθ/dt • Acts like a damped spring attached to the joint (rest position at the desired angle) θdes is the desired joint angle and θ is the current angle; ks and kd are the spring and damper gains
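A minimal sketch of the PD servo above; the function name and per-joint calling convention are illustrative, not from the referenced systems.

```python
# PD servo for one joint: a damped spring pulling the joint toward theta_des.
# Gains ks, kd must be tuned per joint (too stiff looks robotic, too soft sags).
def pd_torque(theta, theta_dot, theta_des, ks, kd):
    return ks * (theta_des - theta) - kd * theta_dot

# Example: a knee at 0.3 rad moving at 1.0 rad/s, pulled toward 1.2 rad.
tau = pd_torque(theta=0.3, theta_dot=1.0, theta_des=1.2, ks=300.0, kd=30.0)
```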
Live demo Created with http://www.ode.org
Outline • Motion generation techniques • Motion capture and key-framing • Data-driven synthesis • Physics-based animation • Hybrid approaches • Dynamic motion controllers • Quick ragdoll introduction • Controllers • Transitioning between simulation and motion data • Motion search – when and where • Simulation-driven transition – how
Transitioning between techniques • Motion data → simulation • When: significant external forces are applied to the virtual character • How: simply initialize the simulation with the pose and velocities extracted from the motion data • Simulation → motion data • When and where: when some appropriate pose is reached (hard to decide); the target is the motion frame closest to the simulated pose • How: drive the simulation toward the matched motion data using a PD controller
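A small sketch of the motion-data-to-simulation hand-off, assuming joint angles are stored one frame per row and velocities are estimated by finite differences; the array layout and function name are illustrative only.

```python
import numpy as np

# Entering simulation from motion data: take the current frame as the pose and
# estimate joint velocities by finite differences over neighboring frames.
# `frames` is assumed to be a (num_frames, num_dofs) array of joint angles.
def init_simulation_state(frames, i, dt):
    pose = frames[i]
    vel = (frames[i] - frames[i - 1]) / dt if i > 0 else np.zeros_like(frames[i])
    return pose, vel
```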
Motion state spaces • State space of the data-driven technique: any pose present in the motion database • State space of the dynamics-based technique: the set of poses allowed by physical constraints • The latter is larger because it: • can produce motion that is difficult to animate or capture • includes a large set of unnatural poses • A correspondence must be made to allow transitions between the two
Motion searching • Problem: find the nearest matches in the motion database to the current simulated motion. Approach: • Data representation: joint positions • Process into a spatial data structure: kd-tree / BBD-tree (box decomposition) • Search the structure at runtime: the query pose comes from the simulation; approximate nearest-neighbor (ANN) search
Data representation: joint positions • Need a representation that allows numerical comparison of body posture • Joint angles are not as discriminating as joint positions • Ignore root translation and align about the vertical axis • May also want to include joint velocities • Joint velocity is accounted for by including surrounding frames in the distance computation
Distance metric [Figure: original joint positions vs. aligned joint positions] d(A, B) = Σ j=1..J wj · ‖pjA − T·pjB‖² J – number of joints; wj – joint weight; p – global position of joint j; T – transformation aligning the first frame
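A minimal sketch of the metric above, assuming joint positions are stored as (J, 3) NumPy arrays and that the alignment transform T (plus a translation) has already been computed.

```python
import numpy as np

# Weighted joint-position distance between two poses, after aligning pose B to
# pose A (root translation dropped, rotation about the vertical axis removed).
# pos_a, pos_b: (J, 3) global joint positions; weights: (J,) joint weights.
def pose_distance(pos_a, pos_b, weights, T=np.eye(3), offset=np.zeros(3)):
    aligned_b = pos_b @ T.T + offset
    return np.sum(weights * np.sum((pos_a - aligned_b) ** 2, axis=1))
```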
Searching process • Approximate nearest neighbor (ANN) search • First find the cell containing the query point in the spatial data structure built over the input points; a randomized search then examines surrounding cells for points within the given ε threshold of the true nearest neighbors • Results are guaranteed to be within a factor of (1 + ε) of the distance to the actual nearest neighbors • O(log³ n) expected query time and O(n log n) space requirement • Scales much better in practice than exact nearest-neighbor (k-NN) search as the dimensionality of the points increases
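A sketch of the runtime search using SciPy's kd-tree rather than the BBD-tree/ANN library named above; the eps parameter gives the same (1 + ε) approximation guarantee. The pose representation (flattened, aligned joint positions) and the array shapes are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Build the search structure once over the motion database. `db_poses` is a
# (num_frames, D) array of flattened, aligned joint positions; joint weights
# can be folded in by scaling each coordinate by sqrt(w_j) before building.
def build_pose_tree(db_poses):
    return cKDTree(db_poses)

# At runtime, query with the simulated pose. eps > 0 asks for approximate
# neighbors: the result is within (1 + eps) of the true nearest distance.
def nearest_frame(tree, query_pose, eps=0.1):
    dist, index = tree.query(query_pose, k=1, eps=eps)
    return index, dist

if __name__ == "__main__":
    db = np.random.rand(10000, 45)   # toy stand-in: 15 joints x 3 coordinates
    tree = build_pose_tree(db)
    idx, d = nearest_frame(tree, np.random.rand(45))
```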
Speeding up search • Curse of dimensionality • Search each joint position separately • Pair up more joints to increase accuracy • n separate 3-DOF searches are faster than one 3n-DOF search...
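One possible reading of the per-joint speed-up, sketched under assumed array shapes: build a small 3-D kd-tree per joint, gather candidate frames from each, and re-rank only those candidates with the full-pose metric.

```python
import numpy as np
from scipy.spatial import cKDTree

# db_positions: (F, J, 3) array of joint positions for F database frames.
def build_per_joint_trees(db_positions):
    return [cKDTree(db_positions[:, j, :]) for j in range(db_positions.shape[1])]

# query_positions: (J, 3) joint positions of the simulated pose.
def candidate_frames(trees, query_positions, k=20):
    candidates = set()
    for j, tree in enumerate(trees):
        _, idx = tree.query(query_positions[j], k=k)
        candidates.update(np.atleast_1d(idx).tolist())
    return candidates   # re-rank these few frames with the full-pose distance
```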
Simulating behavior • Model the reaction to impacts that cause loss of balance • Two controllers handle the pre-contact and post-contact phases, respectively • Ensure a transition to a balanced posture in the motion data
Fall controller • Aim: produce biomechanically inspired, protective behaviors in response to the many different ways a human may fall to the ground.
Fall controller • Continuous control strategy • Four controller states according to falling direction: backward, forward, right, left • During each state one or both arms are controlled to track the predicted landing positions of the shoulders • The goal of the controlled arm is to have the wrist intersect the line between the shoulder and its predicted landing position • A small natural bend is added at the elbow, and the desired angles for the rest of the body are set to the angles they had when the fall controller was activated
Fall controller • Determine the controller state: θ is the facing direction of the character; V is the average velocity of the limbs
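A hedged sketch of how the four states might be selected from θ and V; the coordinate convention, quadrant boundaries, and left/right sign are assumptions, not taken from the thesis.

```python
import numpy as np

# Pick the fall-controller state from the signed angle between the character's
# heading theta and the average limb velocity V, both projected onto the ground
# plane (y-up assumed). Boundaries and sign convention are illustrative.
def fall_state(theta, limb_velocities):
    V = np.mean(limb_velocities, axis=0)                # average limb velocity
    facing = np.array([np.cos(theta), np.sin(theta)])   # heading in the x-z plane
    v = np.array([V[0], V[2]])                          # drop the vertical (y) part
    cross_z = facing[0] * v[1] - facing[1] * v[0]
    angle = np.arctan2(cross_z, np.dot(facing, v))      # signed angle in radians
    if -np.pi / 4 <= angle < np.pi / 4:
        return "forward"
    if np.pi / 4 <= angle < 3 * np.pi / 4:
        return "left"
    if -3 * np.pi / 4 <= angle < -np.pi / 4:
        return "right"
    return "backward"
```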
Fall controller • Determine the target shoulder joint angle • The target can change as the simulation steps forward • The ks and kd gains are tuned appropriately
Settle controller • Aim: drive the character toward a similar motion clip at an appropriate time • Begins when the hands impact the ground • Two states: • Absorb impact: gains are adjusted to reduce hip and upper-body velocity; lasts half a second before the next state • ANN search: find a frame in the motion database that is close to the currently simulated posture; use the found frame as the target while continuing to absorb the impact; the simulated motion is smoothly blended into the motion data Final results demo
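A compact sketch of the settle controller's second state, reusing the assumed kd-tree from the search example; the linear pose blend is a simplification (joint orientations would normally be blended with slerp).

```python
import numpy as np

# Once the hands are down, find the database frame closest to the simulated
# posture, use it as the PD tracking target, and ease the simulated pose into
# the clip over a short window. `tree` and `db_frames` come from the earlier
# (hypothetical) database setup.
def settle_target(tree, db_frames, sim_pose_features):
    _, frame_idx = tree.query(sim_pose_features, k=1, eps=0.1)
    return frame_idx, db_frames[frame_idx]        # matched frame's joint angles

def blend_into_clip(sim_pose, clip_pose, t, blend_time=0.5):
    w = np.clip(t / blend_time, 0.0, 1.0)         # 0 = pure simulation, 1 = clip
    return (1.0 - w) * sim_pose + w * clip_pose   # per-joint linear blend
```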
An alternative for response motion synthesis [Zordan 2005] Problem: generating dynamic response motion to external impacts Insight: • Dynamics is often only needed for a short time (a burst) • After that, the utility of the dynamics decreases due to the lack of good behavior control • Return to mocap once the character becomes “conscious” again
Generating dynamic response motion • Transition to simulation when the impact takes place • Search the motion data for a transition-to sequence similar to the simulated response motion • Run a second simulation with a joint-torque controller actuating the character toward the matching motion • Final blending eliminates the discontinuity between the simulated and transition-to motions
Motion selection • Aim: find a transition-to motion • Frame windows are compared between the simulation and the motion data • Frames are aligned so that the root position and orientation of the start frame in each window coincide • Distance between windows Wsim and Wdata: d(Wsim, Wdata) = Σi wi · Σb ( wpb·‖pb,sim,i − pb,data,i‖ + wθb·‖θb,sim,i − θb,data,i‖ ) pb, θb – body part position and orientation; wi – window weight, a quadratic function with its highest value at the start frame, decreasing for subsequent frames; wpb, wθb – linear and angular distance scales for each body part
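A sketch of the windowed comparison under assumed data structures: each window frame holds per-body-part positions and orientations, and the orientation distance is passed in as a function rather than fixed here.

```python
import numpy as np

# Windowed distance between a simulated segment and a motion-data segment.
# w_i falls off quadratically from the start frame; w_p, w_th scale the linear
# and angular terms per body part. `ori_dist` measures orientation difference.
def window_distance(sim_win, data_win, w_p, w_th, ori_dist):
    n = len(sim_win)
    total = 0.0
    for i in range(n):
        w_i = (1.0 - i / n) ** 2                  # highest weight at the start frame
        for b in range(len(w_p)):
            dp = np.linalg.norm(sim_win[i]["pos"][b] - data_win[i]["pos"][b])
            dth = ori_dist(sim_win[i]["ori"][b], data_win[i]["ori"][b])
            total += w_i * (w_p[b] * dp + w_th[b] * dth)
    return total
```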
Transition motion synthesis • Aim: generate the motion to fill the gap between the beginning of the interaction and the found motion data • Realized in two steps: • Run a second simulation to track the intermediate sequence • Blend the physically generated motion into the transition-to motion data
Transition motion synthesis • Simulation 2 • An inertia-scaled PD servo is used to compute the torque at each joint • The tracked sequence is generated by blending the start and end frames using SLERP with an ease-in/ease-out weight • A deliberate delay in tracking is introduced to make the reaction look realistic
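A generic SLERP with a smoothstep ease-in/ease-out weight, shown for illustration; it is not code from the paper, and quaternions are assumed to be unit (w, x, y, z) arrays.

```python
import numpy as np

# Spherical linear interpolation between two unit quaternions.
def slerp(q0, q1, t):
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def ease_in_out(t):
    return 3 * t**2 - 2 * t**3    # smoothstep: zero slope at t = 0 and t = 1

# Blend an orientation between the start and end keyframes at normalized time t.
def blended_frame(q_start, q_end, t):
    return slerp(q_start, q_end, ease_in_out(t))
```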
Conclusion • Hybrid approaches • Complex dynamic behaviors are hard to model physically • A viable option for synthesizing character motion under a wider range of situations • Able to incorporate unpredictable interactions, especially in games • Making it more practical • Automatic computation of motion-controller parameters [Allen 2007] • Speeding up the search via a pre-learned model [Zordan 2007]
References • MANDEL, M. 2004. Versatile and Interactive Virtual Humans: Hybrid Use of Data-Driven and Dynamics-Based Motion Synthesis. Master's thesis, Carnegie Mellon University. • ZORDAN, V. B., MAJKOWSKA, A., CHIU, B., AND FAST, M. 2005. Dynamic response for motion capture animation. ACM Trans. Graph. 24, 3, 697–701. • ALLEN, B., CHU, D., SHAPIRO, A., AND FALOUTSOS, P. 2007. On the beat! Timing and tension for dynamic characters. ACM SIGGRAPH / Eurographics Symposium on Computer Animation. • ZORDAN, V. B., MACCHIETTO, A., MEDINA, J., SORIANO, M., AND WU, C.-C. 2007. Interactive dynamic response for games. ACM SIGGRAPH Sandbox Symposium.