Tuesday, May 31, 2011

Digital Acting 2 - Lip Sync project

Good day.

It is now time to wrap up my Lip Sync project, since I am done with the animation, rendering and compositing. I will give a description of the process I went through to accomplish this task.

1 - Planning

The first thing I did in the planning stage was to decide whether I would go for a full-body or a facial-only animation. As this project's objective was to study, experiment with and produce a lip sync animation, I decided to choose the facial animation so that I could focus only on the essentials of Lip Sync.

I made the rig choice with this objective in mind as well, so I went for the Tito Rig, courtesy of Enrique Gato Borregán (http://www.xaloc.net/freeStuff_Tito.htm). The rig has 17 expressions that can be controlled through the morpher channels, and allows quite nice control over facial expressions and construction of phonemes.
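To illustrate what those morpher channels are doing under the hood, here is a rough Python sketch of morph-target blending: each expression stores vertex offsets from the neutral face, which get scaled by a channel weight and summed. The expression names and numbers below are made up for illustration; they are not taken from the Tito rig.

```python
# Toy sketch of morph-target ("morpher") blending. Vertices are kept
# 1D for simplicity; a real rig would use 3D positions per vertex.
def blend(neutral, targets, weights):
    """neutral: list of vertex values for the neutral face.
    targets: {expression name: list of offsets from neutral}.
    weights: {expression name: channel weight, 0.0-1.0}."""
    result = list(neutral)
    for name, offsets in targets.items():
        w = weights.get(name, 0.0)          # unkeyed channels stay at 0
        for i, d in enumerate(offsets):
            result[i] += w * d              # weighted offset, summed per vertex
    return result

neutral = [0.0, 0.0, 0.0]
targets = {"smile": [0.0, 1.0, 0.0], "jaw_open": [0.0, -2.0, 0.0]}
print(blend(neutral, targets, {"smile": 0.5}))  # -> [0.0, 0.5, 0.0]
```

Keying a facial animation then just means animating those per-channel weights over time.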

The next step was to choose a sound file to use. I decided to make one myself, first because it was fun and second because it allowed me to introduce a bit of my creativity into the project and create something funny (at least for me) to work with. I wrote a simple script line stating the following:

"Good day! We are now live from Lisbon to report an... Woooooooowwwww maaann! Did you see that? What was it? Was it a bird? Was it a plane? No wait... It´s IMF!"

This might not be that funny, but my intention was to make a satirical joke about the situation in Portugal, with the arrival of the International Monetary Fund to help with the economic crisis.

I recorded myself saying it so that I could observe all of the expressions I used while saying the script. I felt this was a better option than finding a sound clip in which I could only hear the voice of someone acting. While a bit embarrassing, it made for better study material. This was the clip I made of myself:

I then picked up an exposure sheet so that I could write down the phonemes I used while saying the script. I only used 9 of the 40 existing phonemes in the English language, as advised. They were enough, and I came to understand that we don't actually pronounce every single word with our lips while talking; by using only these 9, I could come quite close to the expressions made.
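As an illustration of this phoneme reduction, here is a small Python sketch that collapses a phoneme sequence into a reduced set of mouth shapes (visemes). The mapping below is just an example, loosely based on the classic Preston Blair mouth chart, not the exact chart I used.

```python
# Hypothetical phoneme -> mouth-shape (viseme) mapping for illustration.
VISEME_MAP = {
    "b": "MBP", "m": "MBP", "p": "MBP",   # lips pressed closed
    "f": "FV",  "v": "FV",                # lower lip under top teeth
    "aa": "AI", "ay": "AI",               # wide open
    "ee": "E",                            # wide, slightly open
    "oo": "O",  "ow": "O",                # rounded, narrow
    "w": "WQ",  "q": "WQ",                # small rounded
    "l": "L",                             # tongue to palate
}

def visemes_for(phonemes):
    """Collapse a phoneme sequence to the reduced mouth-shape sequence,
    dropping consecutive duplicates (the lips hold the same shape)."""
    out = []
    for p in phonemes:
        v = VISEME_MAP.get(p, "etc")      # catch-all shape for the rest
        if not out or out[-1] != v:
            out.append(v)
    return out

print(visemes_for(["w", "aa", "w"]))      # "Wow" -> ['WQ', 'AI', 'WQ']
```

The point is that many phonemes share a mouth shape, which is why so few shapes read as convincing speech.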

In my exposure sheet I had 50 frames per page. In this project I used a frame rate of 24 fps, so I divided the sound wave into sections of 2.08 seconds each, which corresponds to 50 frames. I did this so that I could paste an image of my wave over my exposure sheet and write down the phonemes used at the correct frame. This also allowed me to time my animation quite well, even though it was not 100% accurate.
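The timing math is simple enough to sketch in a few lines of Python (the function names are just for illustration):

```python
# Exposure-sheet timing: 50 frames per page at 24 fps.
FPS = 24
FRAMES_PER_PAGE = 50

def page_duration(frames_per_page=FRAMES_PER_PAGE, fps=FPS):
    """Seconds of audio covered by one exposure-sheet page."""
    return frames_per_page / fps

def frame_for_time(t, fps=FPS):
    """Nearest frame number for a time (in seconds) on the sound wave."""
    return round(t * fps)

print(round(page_duration(), 2))   # -> 2.08 seconds per 50-frame page
print(frame_for_time(4.17))        # -> 100, i.e. the end of page 2
```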

2 - Animation
With the planning done, it was time to start animating. I loaded my sound file into 3ds Max so that I could synchronize my animation with the sound while I was creating the key frames. I had watched several videos and read several pages about Lip Sync to discover which methods I could use and which ones I felt most comfortable working with.

There was a reference on our school's page that was the best one for me: http://www.videojug.com/film/how-to-make-lip-sync-animation

The way Thadeej breaks the process up into 4 steps really makes things a lot easier. I decided to follow his method and it worked out quite well for me.

So my process consisted of these four steps:

Step 1 - Study

This was where I listened to my audio file hundreds of times, said the script myself and watched myself saying it. I had everything in my mind before I started.

Step 2 - Blocking

I created all the key frames corresponding to the mouth positions I wrote down in my exposure sheet. This gave me a very good starting point. It broke my animation into four simple mouth positions: wide, narrow, open and closed.
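This blocking pass can be sketched in Python: each exposure-sheet entry is first keyed as one of only four basic mouth positions. The shape names and groupings below are assumptions for illustration, not my actual chart.

```python
# Hypothetical grouping of mouth shapes into the four blocking positions.
BLOCKING = {
    "AI": "open", "E": "wide", "O": "narrow",
    "WQ": "narrow", "MBP": "closed", "FV": "closed",
}

def blocking_keys(sheet):
    """sheet: list of (frame, shape) pairs from the exposure sheet.
    Returns the rough first-pass keys as (frame, basic position)."""
    return [(frame, BLOCKING.get(shape, "open")) for frame, shape in sheet]

# A few invented exposure-sheet entries:
print(blocking_keys([(1, "etc"), (4, "O"), (9, "etc"), (12, "AI")]))
# -> [(1, 'open'), (4, 'narrow'), (9, 'open'), (12, 'open')]
```

The refining pass then replaces these coarse positions with the proper phoneme shapes.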

Step 3 - Refining

At this point I started adding the small details to the mouth positions, making them correspond more closely to the phonemes I wrote in my exposure sheet. I also created the basic eyebrow, head and neck movements.

Step 4 - Final adjustments

In this final stage I rendered my whole animation because I wanted a preview of the final result, so that I could write down the final adjustments needed. I found several things I had to change. Most of them were in the Eye phonemes and the OOO phonemes. I had keyed them a bit too exaggerated, so I changed them to give a more fluid and natural feeling to the animation.

3 - Post production

With all of this done, it was time to render my animation. To support my story I wanted to add a video of a location in Lisbon called the "Praça do Comércio", so I rendered my animation over a green background. I then added the video in Premiere, along with some other small details (a news network logo) to make the scene complete.
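This is not what Premiere does internally, but the green-screen idea itself can be shown with a toy Python example: pixels close enough to the key color are swapped for the background plate.

```python
# Toy chroma key: replace a foreground pixel with the background plate
# when it is within a tolerance of the key color. Real keyers work in
# better color spaces and produce soft alpha edges, not a hard cut.
def key_pixel(fg, bg, key=(0, 255, 0), tolerance=120):
    """fg/bg: (r, g, b) tuples. Returns bg where fg matches the key."""
    dist = sum((a - b) ** 2 for a, b in zip(fg, key)) ** 0.5
    return bg if dist < tolerance else fg

plate = (30, 60, 120)                     # a pixel of the background plate
print(key_pixel((10, 250, 20), plate))    # green pixel -> replaced by plate
print(key_pixel((200, 150, 140), plate))  # skin tone -> kept
```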

This project was extremely fun to do. I have a tendency to look at people's lips when I talk to them, so I am used to reading phonemes on people's lips; I just did not know that what I do instinctively could be so useful to me as an animator. I will post my final project in a separate post, as I still need to add a couple of things.

Edit: I had to edit the post since my scans of the exposure sheet didn't show everything I wrote; some parts look erased.