DCRC

Physics Based Animation

 


Physics Based Character Controller 


 

Abstract


Controlling a character in Augmented Reality is a challenging research problem. When the user wants to change the character's motion style, several problems arise. First, it is difficult to grab the pin points that are parts of the character. Second, it is hard to put a part down exactly where the user desires. Third, it takes too much time for the user to manipulate every part of the character's body. To solve these problems, this paper proposes a metaphor-based control method: the user manipulates a few physical characteristics that determine the motion style. We select the distribution of angular force as the main physical characteristic. This distribution is the set of torques at each joint of the character. We choose it as the main physical factor because controlling character motion by force is the most intuitive approach. We analyze the basic character motion as a distribution of angular force. When the user interacts with the character, our system interprets the interaction as a force; this force drives the distribution of angular force, and the system changes the character's motion style accordingly. We use Inverse Dynamics to compute the distribution of angular force from character motions. When the user performs a gesture, our system builds a force vector from the hand's speed and path, and our algorithm distributes this vector to all joints. The system then produces the resulting motion with Forward Dynamics, a method that converts forces into accelerations. This paper is an initial study, so we use a multi-linked stick figure instead of a full-body character. Our method does not require the effort of grabbing each part of the character to control its motion; it therefore shortens the control time and can support users with different skill levels.
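As a rough illustration of this pipeline, the following C++ sketch distributes a gesture force across the joints of a stick figure and advances one step of Forward Dynamics. The structure names, the per-joint weights, and the gesture-to-force mapping are assumptions made for this sketch only; a real articulated figure would also need the inter-link coupling of full articulated-body dynamics.

// Minimal sketch: gesture force -> per-joint torque distribution -> forward dynamics.
// Each joint is integrated independently here, which is a simplification.
#include <vector>

struct Joint {
    double angle    = 0.0;  // joint angle (rad)
    double velocity = 0.0;  // angular velocity (rad/s)
    double inertia  = 1.0;  // moment of inertia about the joint axis
    double weight   = 1.0;  // share of the distributed angular force (assumed)
    double torque   = 0.0;  // torque assigned for the current step
};

// Turn the user's gesture (hand speed along a path) into a scalar force
// magnitude, then split it across the joints according to their weights.
void distributeForce(std::vector<Joint>& chain, double handSpeed, double pathLength)
{
    double gestureForce = handSpeed * pathLength;   // assumed mapping
    double totalWeight = 0.0;
    for (const Joint& j : chain) totalWeight += j.weight;
    if (totalWeight <= 0.0) return;
    for (Joint& j : chain)
        j.torque = gestureForce * (j.weight / totalWeight);
}

// Forward Dynamics: convert each joint's torque into angular acceleration
// (tau = I * alpha) and integrate with explicit Euler.
void stepForwardDynamics(std::vector<Joint>& chain, double dt)
{
    for (Joint& j : chain) {
        double accel = j.torque / j.inertia;
        j.velocity += accel * dt;
        j.angle    += j.velocity * dt;
    }
}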

 

  Ray Tracing

Ray tracing is a realistic rendering technique. It traces the rays emitted from a light source to render the scene. Ray tracing can represent the change of tone across an object's surface, reflections of other objects, refraction through transparent objects, and shadows, so it produces very realistic images.

animation_raytracing_refer.png 

[Ref.]"Ray trace diagram" by Henrik - Own work. Licensed under GFDL via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Ray_trace_diagram.svg#/media/File:Ray_trace_diagram.svg

 

In a ray tracer implementation, we calculate only the rays that reach the camera, to reduce the computation cost. To do this, the ray tracer works in reverse: it generates a ray from the camera and traces it to check whether it reaches a light source. The ray generated from the camera is called the primary ray. After generating a primary ray, the tracer checks whether it intersects any object. If it does, a new ray is generated from the intersection point toward the light source. If that ray intersects something before reaching the light source, the intersection point is in shadow; otherwise lighting is calculated at that point. If the object the primary ray hits first is reflective, the ray tracer generates a reflected ray, which is used for the reflection effect.
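A compact sketch of this backward tracing loop is shown below for a scene of spheres and a single point light. The vector helpers, scene layout, and shading are simplified assumptions for illustration; the actual renderer additionally spawns reflection rays and handles refraction.

// Sketch of the backward ray-tracing loop: one primary ray per pixel, a sphere
// intersection test, and a shadow ray toward a single point light.
#include <cmath>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(const Vec3& v) { double l = std::sqrt(dot(v, v)); return v * (1.0 / l); }

struct Sphere { Vec3 center; double radius; };

// Distance along the ray (origin, normalized dir) to the nearest hit, or negative on miss.
double intersect(const Vec3& origin, const Vec3& dir, const Sphere& s)
{
    Vec3 oc = origin - s.center;
    double b = dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;
    return -b - std::sqrt(disc);
}

// Shade one pixel: cast the primary ray, then a shadow ray toward the light.
double tracePixel(const Vec3& camera, const Vec3& dir,
                  const std::vector<Sphere>& scene, const Vec3& light)
{
    double nearest = 1e30; const Sphere* hit = nullptr;
    for (const Sphere& s : scene) {
        double t = intersect(camera, dir, s);
        if (t > 0.0 && t < nearest) { nearest = t; hit = &s; }
    }
    if (!hit) return 0.0;                                   // background

    Vec3 point     = camera + dir * nearest;
    Vec3 normal    = normalize(point - hit->center);
    Vec3 toLightV  = light - point;
    double lightDist = std::sqrt(dot(toLightV, toLightV));
    Vec3 toLight   = toLightV * (1.0 / lightDist);

    // Shadow ray: offset slightly along the normal to avoid self-intersection.
    Vec3 shadowOrigin = point + normal * 1e-4;
    for (const Sphere& s : scene) {
        double t = intersect(shadowOrigin, toLight, s);
        if (t > 0.0 && t < lightDist) return 0.05;          // in shadow: ambient only
    }
    return std::fmax(0.0, dot(normal, toLight));            // Lambertian term
}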

animation_raytracing_result1.png 

 This is the result of the ray tracer implemented on the CPU; the ray calculation part is written in C++. It generates rays at 900*900 resolution and takes 3 minutes to draw the scene once. Porting this ray tracer to a GPU compute shader is ongoing.

 

 Facial Animation with Emotion and Speech


Emotion should be considered in a facial animation system. Even when people talk about the same thing, their facial expressions differ, because facial expression depends on the emotion a person feels. But building a facial animation system is not easy, because many factors must be considered to make the model realistic. For this reason, we conduct research on animation algorithms and development for facial animation with emotion and speech. The focus of our research is blending the emotion look with the speaking look. How these two expressions are mixed on the face determines how natural the model appears, because the emotion look and the speaking look adjust the same action units.

 

The goal of this research is to build a facial animation system with emotion and speech by developing a function that blends emotion-look data and speaking-look data. Based on the Facial Action Coding System (FACS), we encode the movement of facial expressions with action units, the basic units of facial expression.


 animation_figure1.png

 animation_figure2.png

 < figure 1 >

  < figure 2 >

 

As shown in < figure 1 >, the final facial expression at each point in time is made by combining the speaking look and the emotion look.
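The sketch below illustrates one way such a combination could be expressed in C++; the AU-keyed map, the per-AU emotion weights, and the linear interpolation rule are assumptions for illustration, not the system's actual blending function. The time-varying weights it expects are the subject of the feature list that follows.

// Illustrative sketch of blending the speaking look and the emotion look.
#include <map>

// Activation level of each FACS action unit, keyed by AU id.
using AUPose = std::map<int, double>;

// emotionWeight holds, for each AU, how strongly the emotion look should win
// over the speaking look at the current point in time (0 = speech only,
// 1 = emotion only). How these weights vary over time is sketched after the
// feature list below.
AUPose blendExpression(const AUPose& speaking, const AUPose& emotion,
                       const std::map<int, double>& emotionWeight)
{
    AUPose result = speaking;                       // start from the speaking look
    for (const auto& [auId, emotionValue] : emotion) {
        double w = 0.0;
        if (auto it = emotionWeight.find(auId); it != emotionWeight.end()) w = it->second;
        double speakValue = result.count(auId) ? result.at(auId) : 0.0;
        // Both looks drive the same AU, so interpolate instead of simply adding.
        result[auId] = (1.0 - w) * speakValue + w * emotionValue;
    }
    return result;
}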

 

The main features of our research are:

 

1)     Depending on the type of the action units that compose the emotion look, the weight of each action unit changes over time. If the action unit weights were constant, the facial expressions on the model would look unnatural; in other words, we obtain a natural model by making each action unit weight variable.

2)     The degree of change in an action unit's weight varies with the type of action unit. The causes include the distance from the mouth, the length of the action unit, its direction relative to other action units, and so on. These factors result in a different degree of change for each action unit weight (a hypothetical sketch of such a weight follows this list).

3)     Many other studies handle only about four emotion categories. In contrast, the user can compose any emotion look they want through our interface. Afterwards, by analyzing the action units that compose the emotion look the user made, we assign a different weight value to each action unit at each point in time, and the virtual facial model reads a script with that emotion look on its face (features 1 and 2).
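As a purely hypothetical illustration of features 1 and 2, the function below varies one action unit's emotion weight over time and attenuates it by distance from the mouth; the envelope shape and the attenuation constants are invented for this sketch and are not the values used by our system.

// Hypothetical time-varying emotion weight for a single action unit.
#include <algorithm>

// t: seconds since the emotion look was triggered.
// attackTime: how long the weight takes to reach full strength (assumed).
// distanceFromMouth: normalized 0..1 distance of the AU's region from the mouth.
double emotionWeight(double t, double attackTime, double distanceFromMouth)
{
    // Feature 1: the weight is not constant; it rises over time and then holds.
    double envelope = std::clamp(t / attackTime, 0.0, 1.0);
    // Feature 2: AUs near the mouth yield to the speaking look, so their
    // emotion weight is attenuated more strongly.
    double mouthAttenuation = std::clamp(distanceFromMouth, 0.2, 1.0);
    return envelope * mouthAttenuation;
}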

 

 

