Art Department - Fall 2024 - Digital Art - Ronaldo Kiel
ARTD 2822 - 3D Character Design - The Illusion of Life
Your animations will be rendered from Maya at 540HD resolution. All class projects should be uploaded as video clips: QuickTime (QT) files compiled in After Effects or saved directly from QuickTime Player.
Class projects must be uploaded to Yuja Video by the due date for grading. We will be using the Assignments page of the Blackboard class site for that. Please refer to the Schedule page for all project deadlines.
Log in to the computers in 5102 using:
Username: 3dstudent
Password: render66
The projects are:
1. Portrait (or generic creature head) - Turntable Animation (minimum duration: 120 frames)
Resolution: HD540
File Format: QuickTime
Frame Rate: 24 fps
2. The Talking Head (animation using the head from Project 1)
Simple head rig (eyes, neck, and mouth) or a rig for MoCap.
Duration: variable (audio file)
Resolution: HD540
File Format: QuickTime with audio
Frame Rate: 24 fps
Project 1: Portrait (or generic creature head)
In this project you will design and produce a head for a character. The first phase is modeling. I will spend three or four classes working on the modeling, and you will have to spend about four extra hours per week to finish this project. Modeling for deformation is a very specific technique that works best if you use quads (polygons with four sides) and place edge loops strategically, flowing with the main features of the surfaces you are modeling. In this particular case, edge loops around the eyes and mouth will make it possible for the eyes to blink and for your character to talk.
There are two main techniques to model characters: the traditional modeling method and the sculpting method.
In the traditional method, polygons are placed in 3D space according to the main features of the character, using extrusions and other operations to create the surfaces. This process is slow but gives a lot of control over the modeling phase. The use of quads and the density of the mesh are easy to control from beginning to end.
The sculpting method allows the artist to focus on the creative process, but the model later has to be retopologized for rigging and animation. You start with a high-density mesh and sculpt your model, adding and removing material as you would with clay. Sculpting programs like ZBrush and Mudbox provide very popular workflows and the advantage of extracting normal maps from the high-density mesh, which can later be applied to the low-density mesh to create detail. The most appealing aspect of this technique is being able to concentrate on the creative process right at the beginning. But the retopology that follows is time-consuming, technical, labor intensive, and necessary if the model will be rigged for animation.
I suggest that we start with the traditional method.
You will create a polygonal model of a head from photographs or sketches (one frontal and one profile). The photos might have to be adjusted in Photoshop so that the facial features align properly. We will be using the traditional modeling method to create this head. If you decide to work from sketches of a character from your imagination, please present the drawings to me before you start modeling.
Modeling: In the traditional method you build a mesh using polygons with four sides and placing edge loops strategically to facilitate sub-division and deformation during animation.
The traditional method is preferable for artists interested in a broad view of the entire process, from the development of the character and rigging all the way to the animation.
After placing your image planes in the Maya scene, you will start to create the facial features, taking into consideration the flow of your edge loops: one around the eyes and one around the mouth. The nose will be built by connecting the edge loops between the eyes and the mouth.
The rest of the face and the cranium are simpler meshes to create; just keep in mind that you should be working with quads. There will be poles (a vertex with more or fewer than four edges connected to it); try to hide the poles, or at least place them in areas that will not have to deform very dramatically during animation.
The ears are a project in themselves. If you model the ears separately and connect them to the head later, you will have to fix the many triangles and n-gons that can result from the connection. But if you model the ears as part of the cranium, you will probably end up with much more geometry than you need for the rest of the cranium.
If you need help with the ears: https://www.illumira.net/show.php?pid=njcore:179984&retc=njcore:184108
The eyeballs, and the teeth and tongue inside the mouth, will be modeled separately. Remember, later we will be creating a talking head (Project 2).
The eyes: https://www.illumira.net/show.php?pid=njcore:185278&retc=njcore:184108
Teeth and Tongue: https://www.illumira.net/show.php?pid=njcore:188209&retc=njcore:184108
All videos for both projects are recorded: https://www.illumira.net/showcollection.php?pid=njcore:184108
Traditional Modeling Guidelines: (Organic Modeling for proper deformation)
Edge loops have to follow the main features of the face and the underlying muscle structure. Remember that the goal here is to be able to deform the mesh properly.
Quads - Use polygons with four edges. Avoid n-gons (more than four edges) and triangles (a quick scripted check is sketched after these guidelines).
Poles can be used to redirect edge flow, so place them where they help the modeling process and do not interfere with deformations. A pole is formed when a vertex has more or fewer than four edges connected to it (3, 5, or more edges meeting at one vertex). Please take a look at topologyguides.com.
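If you want to double-check your mesh, here is one possible scripted way (a sketch, not the only method) to highlight any n-gon faces left in a selected mesh so you can clean them up:

    import maya.cmds as cmds

    # Work in component mode and limit the selection to faces.
    cmds.selectMode(component=True)
    cmds.selectType(facet=True)

    # size=3 constrains the selection to faces with more than four sides (n-gons);
    # use size=1 instead to find triangles.
    cmds.polySelectConstraint(mode=3, type=0x0008, size=3)

    # Turn the constraint off afterwards so it does not affect later selections.
    cmds.polySelectConstraint(disable=True)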
Texturing
At this point you can work on the rigging (for project 2) or on the textures, or even develop both at the same time. I personally prefer to start with the textures. The first step is to UV map all parts of your model.
Maya's UV tools keep improving, and with the integration between Maya and Mudbox, texturing your character can be a lot of fun. The textures will most likely be refined further in Photoshop; Illustrator can also be useful in this process.
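If you want to script a quick first pass on the UVs, the line below applies Maya's automatic projection; the mesh name "headMesh" is just a placeholder, and the result is only a rough layout to refine in the UV Editor.

    import maya.cmds as cmds

    # Project UVs from six planes; clean up seams and shells in the UV Editor afterwards.
    cmds.polyAutoProjection("headMesh", planes=6)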
"Substance Painter" has been providing new work-flows, that makes the "Photoshop" methods look a bit basic. But the integration between "Substance" and the "Arnold" renderer in "Maya" is not straight forward.
After you bring your head geometry into Mudbox, you can continue tweaking your model on a high-resolution mesh and output from Mudbox not only the color maps (textures) but also normal and/or bump maps. Displacement maps can also be extracted from Mudbox. These will help you create detail in your model in Maya.
Displacement maps from Mudbox work very well in Maya and Arnold, and the maps calculated in Mudbox take into account the further sculpting that is possible there. You do not need to use the sculpting capabilities of Mudbox for this project, but if you would like to explore the creation of displacement maps, ask me and I will guide you through it. I have also recorded a 9-minute clip of this process: https://www.illumira.net/show.php?pid=njcore:204339&retc=njcore:204323
Back in Maya we will explore the shaders that determine the look of your character. Many of the decisions here are art-direction decisions. You can aim for a more realistic look using subsurface scattering and bump or normal maps, or you can develop an "illustration look" using hand-painted textures and toon shaders.
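As an illustration only (it assumes the Arnold plug-in is loaded, and the mesh and file names are placeholders), here is a minimal scripted sketch of the realistic direction: an aiStandardSurface with a painted color map and a touch of subsurface scattering.

    import maya.cmds as cmds

    # Create the shader and its shading group.
    skin = cmds.shadingNode("aiStandardSurface", asShader=True, name="skinShader")
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="skinShaderSG")
    cmds.connectAttr(skin + ".outColor", sg + ".surfaceShader")

    # Painted color map (from Mudbox, Substance Painter, or Photoshop).
    colorMap = cmds.shadingNode("file", asTexture=True, name="skinColorMap")
    cmds.setAttr(colorMap + ".fileTextureName", "sourceimages/head_color.png", type="string")
    cmds.connectAttr(colorMap + ".outColor", skin + ".baseColor")

    # A little subsurface scattering softens the skin.
    cmds.setAttr(skin + ".subsurface", 0.3)

    # Assign the shader to the head geometry ("headMesh" is a placeholder name).
    cmds.sets("headMesh", edit=True, forceElement=sg)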
You can generate a turntable animation in Mudbox, but in Maya you will have far more control over the look of your animation. The turntable is a very good way to evaluate your work, and it is the goal of the first project.
In Maya, from the Animation menu set, select the Visualize > Create Turntable option and set the number of frames to 120.
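If the menu option misbehaves, a rough scripted equivalent is sketched below (the model name "headModel" is a placeholder): it keyframes a full 360-degree rotation of a group over 120 frames.

    import maya.cmds as cmds

    # Group the model so the turntable rotation does not touch the model's own transform.
    grp = cmds.group("headModel", name="turntable_grp")

    cmds.playbackOptions(minTime=1, maxTime=120)
    cmds.setKeyframe(grp, attribute="rotateY", time=1, value=0)
    cmds.setKeyframe(grp, attribute="rotateY", time=120, value=360)

    # Linear tangents keep the spin at a constant speed.
    cmds.keyTangent(grp, attribute="rotateY", inTangentType="linear", outTangentType="linear")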
Final Rendering specifications:
Duration: minimum 5 seconds (120 frames)
Resolution: HD540
File Format: QuickTime
Frame Rate: 24 fps
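If you prefer to set these specifications in the scene with a script, here is a short sketch; it assumes 540HD means 960 x 540 pixels.

    import maya.cmds as cmds

    cmds.currentUnit(time="film")                 # film = 24 fps
    cmds.playbackOptions(minTime=1, maxTime=120)  # at least 120 frames (5 seconds)

    # Render resolution, assuming 540HD is 960 x 540.
    cmds.setAttr("defaultResolution.width", 960)
    cmds.setAttr("defaultResolution.height", 540)
    cmds.setAttr("defaultResolution.deviceAspectRatio", 960.0 / 540.0)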
Rigging and Animation - Project 2
The final goal of Project 2 is animation. We will use the model from Project 1. We can animate by setting keyframes on our controllers, or we can experiment with motion capture.
If you choose to create controllers, follow the Project2_A description. If you plan to animate using MoCap, follow the Project2_B description below; you will not have to create controllers, but you will have to set up the joints and many blend shapes.
Facial rigging can be extremely complex depending on the project you are working on. A character created for a game might not need a rig as sophisticated as the rig of a character in an animated film. Games will most likely use bone deformations, while a film production might require a combination of many deformation systems to give animators full control over facial expressions. In certain cases you will have to add MEL scripts to achieve the functionality required for certain controls.
Project2_A: The Talking Head
Before you can start animating, you will rig the face in Maya. Most of the animation will be done with joints (for example, neck, head, jaw, chin, and eye controls will be sufficient), but some blend shapes will be necessary.
We will follow traditional animation methods for the phonemes and lip sync of a short audio clip in this project.
The acting of your character is an entirely different process. On some level, it will come from the facial expressions developed as blend shapes, but making your character come to life is an intense process in itself.
Animation requires a very specific skill set that can certainly be developed with time and dedication. This course is designed to give you an overview of the workflow in character creation and the testing of the rig in a short animation. Specific animation courses will be necessary if you wish to become an animator.
Steps to follow:
1. Create the face rig: eye controls, tongue controls, etc.
• Joints for neck, head, jaw and chin.
• Bind the skin (parent, or add to the skin, the eyes, the teeth, the tongue, and any other geometry that needs to move with the head mesh; the scalp and any other base geometry for facial hair can be wrap-deformed to the head geometry).
• Paint Skin Weights (I will demonstrate a hierarchical way of transferring weight).
• Start to create the User Interface.
• Create Controllers - User Interface for animators. This intro video might help: https://www.njvid.net/show.php?pid=njcore:183175
Continue developing the animator's UI (user interface). UIs can be reused, so the development of quite complex UIs is common in large productions.
• Connect the controllers to the joints (a minimal scripted example is sketched right after this list).
• Create the controller hierarchy.
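Here is a minimal sketch of one way to connect a control to a joint, using a single jaw control as an example; the names jaw_ctrl and jaw_jnt are placeholders for whatever you use in your own rig.

    import maya.cmds as cmds

    # A NURBS circle works well as an animator-facing control.
    ctrl = cmds.circle(name="jaw_ctrl", normal=(1, 0, 0), radius=2)[0]
    grp = cmds.group(ctrl, name="jaw_ctrl_grp")

    # Snap the group to the joint so the control itself stays zeroed out.
    cmds.delete(cmds.parentConstraint("jaw_jnt", grp))

    # Drive the joint from the control; animators never touch the joints directly.
    cmds.orientConstraint(ctrl, "jaw_jnt", maintainOffset=True)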
2. Build the blend shapes for the mouth and eye movements (and any other expressions); a short scripted sketch of creating a blend shape node appears after this step. The acting of your character will depend on how much control your rig provides. Before you create the blend shapes, look for references: images of facial expressions or sketches of characters' expressions.
Preston Blair's "Dialogue" lists nine distinct face expressions (front and profile) that can be used for simulating the phonemes employed to create the illusion of your character speaking.
Techniques to create the blend shapes will be covered in class, or you can take a look at https://www.njvid.net/show.php?pid=njcore:183174 to see how to create eye blinks.
Before you start animating, render a series of stills to test camera angles, lighting, and textures.
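As a small scripted illustration (the mesh and target names are placeholders; each target is a duplicated copy of the head that you have sculpted into the new shape), a blend shape node can be created like this:

    import maya.cmds as cmds

    # "blink_target" and "browUp_target" are sculpted duplicates of "headMesh".
    bs = cmds.blendShape("blink_target", "browUp_target", "headMesh", name="face_bs")[0]

    # Each target becomes a weight attribute named after its mesh.
    cmds.setAttr(bs + ".blink_target", 1.0)  # dial the expression in
    cmds.setAttr(bs + ".blink_target", 0.0)  # and back out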
3. Sound in Maya: lip-syncing
Think about the entire face while focusing on the mouth. The face can be treated as an elastic mass if you have created enough controls in your rig.
During the animation process, think of a traditional puppet, where the only control you have to make the puppet talk is the up-and-down movement of the chin. To block out the mouth shapes, first create the jaw drops for all the sounds in your dialogue that require an open mouth. The blend shapes can then be used to better define the movement.
Please refer to Preston Blair's "Dialogue" (page 35). The mouth shapes employed in lip-sync are compiled by Blair in his publication "Advanced Animation". A short scripted sketch of importing the audio and blocking a jaw key follows below.
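The sketch below (the file path, joint name, frame numbers, and values are all placeholders) shows a scripted version of the two moves described above: importing the dialogue audio into the time slider and blocking a closed-open-closed jaw drop around one accented sound.

    import maya.cmds as cmds

    # Bring the audio clip into the time slider so you can scrub the dialogue.
    cmds.sound(file="sound/dialogue.wav", name="dialogueAudio", offset=1)

    # Block the jaw drop: closed, open, closed around the accented frame.
    for frame, rot in [(10, 0.0), (14, 25.0), (20, 0.0)]:
        cmds.setKeyframe("jaw_jnt", attribute="rotateZ", time=frame, value=rot)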
4. The camera will be focused on your talking head. Don't move the camera. You can use more than one camera and cut between the clips; if you do decide to move the camera, keep it simple. The audio clip you use will determine the duration of your animation. Your animation doesn't need to be long: a clip of around 300 frames can hold six or more sentences and runs a little more than 12 seconds.
NOTE: Before you commit yourself to a long rendering, make sure you have already done as many test renderings as necessary to create your final animation.
PacDV (free sound effects) - https://www.pacdv.com/sounds/voices-4.html
Voice Bunny (Browse samples) - https://voicebunny.com
Project2_B - MoCap
I have been experimenting with motion capture using the iPhone front camera and software (iFacialMocap) that connects the phone to Maya on the computer, either to record animations in real time or to transfer MoCap data captured on the phone to Maya as an animation technique.
In this version of Project 2, with a minimal amount of rigging and no need to create controls, you will animate by using your own face to drive the character's head. You can also use the sound recorded while the motion capture is taking place.
The MoCap approach opens up a new challenge for the animator: the animator as an actor. The performance of a short text can be recorded in real time, directly with audio.
Alternatively, the recorded MoCap data can be converted into keyframes and edited in the Graph Editor in Maya. The audio can also be imported from the phone.
If you name your joints and blend shapes according to the Apple ARKit convention, you will start to receive input from the phone as soon as you establish the connection between the iFacialMocap software running on the phone and the iFacialMocap Maya module on your computer. This works on PCs and Macs, but only with phones whose front camera supports Face ID.
Below I will describe the steps to rig this project for MoCap.
Start by creating the joints:
1. Create joints for the eyes, neck, head, and spine, and name them consistently (the names spineJoint and headJoint are used below).
2. Skin > Bind Skin: open the options.
I have a total of five joints in this class project. The maximum number of joint influences per vertex can be lowered; I set "Max influences" here to 2. All the rest is left at the default settings.
If you have combined your geometry, you can simply select the model, Shift-click the spineJoint, and bind the skin.
If you have all the parts of your head kept separate under one group, Maya will create a skin cluster for each piece of geometry when you bind the skin. In the Bind Skin options, under "Bind to:", you can choose "Selected joints" to isolate the influence of a specific joint.
The eyes will need to be skinned to the eye joints, but the teeth, the tongue, and any other geometry that needs to move with the head mesh can be skinned to the headJoint. The scalp and other base geometry for facial hair can be wrap-deformed to the head geometry. A scripted sketch of this bind appears below.
VIDEO
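Here is a scripted sketch of the bind described above (the joint and mesh names are placeholders):

    import maya.cmds as cmds

    # Bind the combined head to the joint hierarchy with "Max influences" lowered to 2.
    cmds.skinCluster("spineJoint", "headMesh",
                     maximumInfluences=2,
                     obeyMaxInfluences=True,
                     name="head_skinCluster")

    # The eyes get their own bind so each one follows only its eye joint.
    cmds.skinCluster("L_eyeJoint", "L_eyeball", toSelectedBones=True, name="L_eye_skinCluster")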
3. Paint Skin Weights (I will demonstrate a hierarchical way of transferring weight)
In this version of Project 2 you will not create the animator's user interface because animation will be performed and recorded with MoCap.
VIDEO
4. Blend shapes: the time-consuming part of this approach is the creation of the blend shapes. Here are two places where you can follow the step-by-step process:
https://arkit-face-blendshapes.com
https://hinzka.hatenablog.com/entry/2020/06/15/072929
VIDEO
The Apple ARKit set includes 52 blend shapes. You don't need to create all of them, but some jaw and mouth shapes will be important, as well as eye blinks. You will need to set up a recording session and perform your animation for the iPhone.
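As a small naming sketch (only a handful of the 52 ARKit shape names are shown, the mesh name is a placeholder, and you should confirm the exact naming your capture app expects), the targets can be created and wired up like this:

    import maya.cmds as cmds

    # A few of the ARKit shape names; each duplicate is then sculpted into its shape.
    arkit_names = ["jawOpen", "mouthSmileLeft", "mouthSmileRight",
                   "eyeBlinkLeft", "eyeBlinkRight", "browInnerUp"]

    for name in arkit_names:
        cmds.duplicate("headMesh", name=name)

    # The targets become blend shape weights named after the ARKit shapes.
    cmds.blendShape(arkit_names + ["headMesh"], name="ARKit_bs")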
You can screen-record from the viewport if your shaders display well in the Maya viewport, or you can transfer the MoCap data and the audio from the phone and render with Arnold.
To complete this project, you will need to upload your animation to the Assignments page of the Blackboard class site.