From another star | Retro article

Review from DP 01 : 2010: Weta Digital created a fantastical planet, alien life forms and humanoid screen heroes. For what? For James Cameron’s Avatar!

Weta Digital, the New Zealand studio that has provided the special effects for blockbusters such as “District 9”, “The Day the Earth Stood Still” and the “Lord of the Rings” trilogy, creates a fantastical planet, alien life forms and humanoid stars for James Cameron’s eagerly awaited film “Avatar”.

Can he still do it? In 1997, director, screenwriter and producer James Cameron’s “Titanic” grossed over 1.8 billion US dollars worldwide and went on to win eleven Oscars. So it’s no wonder that the whole of Hollywood is holding its breath over his latest film, “Avatar”. Pre-production on Twentieth Century Fox’s sci-fi epic began in January 2006, filming started in February 2007 and post-production in February 2008. Cameron shot the film in stereo 3D, using the PACE Fusion 3D camera system that he had helped to develop himself. But despite all the live-action 3D footage, the audience is mainly immersed in an extraterrestrial, digital 3D world.

War veteran sets off for Pandora

Actor Sam Worthington plays Jake Sully, a paraplegic war veteran who travels to the planet Pandora and explores it through an avatar that looks like a Na’vi. The Na’vi are elongated, cat-like, blue humanoid inhabitants of the planet. Zoe Saldana embodies Neytiri, a beautiful Na’vi from Pandora.

Pandora is a lush, overgrown planet with waterfalls, rainforest and luminescent plants, plants that look as if, on Earth, they could only thrive in this form under water. To realise these and other visions, James Cameron opted for the effects studio that had won three visual effects Oscars for “Lord of the Rings” and earned an Academy Award nomination for “I, Robot”. The New Zealanders received a fourth Oscar for Peter Jackson’s “King Kong”.

Weta created Pandora, the completely digital world where most of the film takes place. Its inhabitants, the Na’vi, were brought to life using motion capture for both bodies and faces. Giant Studios and Cameron’s own company Lightstorm Entertainment handled the body MoCap for the actors playing the Na’vi. In addition, a number of post-production studios contributed pre-vis and effects: BUF, Framestore CFC, Halon, Hybride, Hydraulx, Industrial Light & Magic, Lola, Pixel Liberation Front, Stan Winston Studio and The Third Floor.

Senior supervisor, Weta partner and three-time Oscar winner Joe Letteri led the work on Weta’s digital effects, which began two years before the film’s release. Three visual effects supervisors worked with Letteri at Weta: Stephen Rosenbaum, Eric Saindon and Guy Williams. Andy Jones was the animation director.

Letteri was responsible for the facial MoCap, the jungle, the characters’ blue skin and a new muscle system. Making a film for stereo 3D also had a huge impact on the project. “The result we achieved with the facial animation was the big breakthrough,” says Letteri, adding: “And stereo 3D brings another discipline into play. There’s no room to hide.”

On set with James Cameron

Stephen Rosenbaum, who won an Oscar for the visual effects on “Forrest Gump” while working at Industrial Light & Magic, spent two years with James Cameron in Los Angeles. He refined the characters and acted as a liaison between Weta and Cameron’s art department, carrying designs to Weta and, in turn, bringing digital creatures and virtual environments back to Los Angeles.

As Cameron directed the motion-captured actors on set, he was able to see the Na’vi in their environments in real time: Giant recorded the movements and retargeted the data onto the digital characters live. At the same time, Weta linked the facial animation to the bodies, so Cameron could watch the Na’vi’s facial expressions and lip sync match the actors’ performances.

To capture the facial movements, Weta began by recording a set of emotional FACS expressions from each of the actors. FACS stands for Facial Action Coding System, an internationally used standard for classifying the muscle movements of the face and head. For characters with speaking roles, speech sounds were captured as well.
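For readers who want a feel for what FACS looks like in practice, here is a tiny, purely illustrative sketch (not Weta’s pipeline): a few of the standard action units, and an expression written as a set of weighted AU activations.

```python
# Hypothetical illustration of FACS: a few standard action units (AUs)
# and expressions described as weighted AU activations.
FACS_ACTION_UNITS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
    26: "jaw drop",
}

# An expression captured on set could be logged as AU activations (0..1).
EXPRESSION_LIBRARY = {
    "smile": {6: 0.8, 12: 1.0},
    "surprise": {1: 0.9, 2: 0.9, 26: 0.7},
}

def describe(expression: str) -> str:
    """Turn an expression into a readable list of active muscle actions."""
    aus = EXPRESSION_LIBRARY[expression]
    parts = [f"AU{au} {FACS_ACTION_UNITS[au]} @ {w:.1f}" for au, w in aus.items()]
    return f"{expression}: " + ", ".join(parts)

if __name__ == "__main__":
    print(describe("smile"))
    print(describe("surprise"))
```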

Using the FACS data obtained, the capture team created a pattern of points for each individual actor. They applied each pattern to a mask, drilled small holes into it and used it to transfer the dots onto the actor’s face before every performance.

A lipstick camera, attached to a hard-shell helmet and positioned under the actor’s nose, recorded the movements. “It was high enough to see the eyes and capture the movements of the mouth at the same time,” says Rosenbaum. To record the eyes, the studio developed software that tracked the movements of the pupils. The facial motion could then be superimposed onto a textured model in real time, allowing Cameron and the actors to view the digital Na’vi as if they were watching video of the recorded scenes.

Cameron also had a virtual camera system: a screen with a nine-inch diagonal, a steering-wheel-like grip around it and tracking markers attached to it. As soon as the actors started moving on the motion capture stage, Cameron could use the virtual camera to frame his shots while seeing the Na’vi in their environment, complete with body movements and facial expressions. Even though there was sometimes a delay of three or four frames, Cameron was effectively able to shoot his film during the motion capture.

Later, of course, everything was refined: Cameron fine-tuned the camera moves and Weta polished the character motion once again. “It wasn’t as massive as on previous motion capture productions,” says Rosenbaum. “For the most part, the animators had to deal with the movement of the creatures and the interaction between creatures and characters. So if that interaction wasn’t perfect or if Jim wanted to re-choreograph a shot, that was reworked. But for the most part, the motion came straight from the captured bodies and faces.”

The animations

“Basically, the motion capture worked well,” agrees animation director Andy Jones, noting that Giant’s body capture was very clean. The character designers and modellers had the task of shaping the Na’vi with narrower hips and shoulders and long necks; it wasn’t just a matter of stretching the bodies to their three-metre height. This helped transfer the motion capture from the actors onto the digital figures. Giant also didn’t capture any hand or finger movements; the animators at Weta added that information, along with the motion of the tails and ears. The blue creatures use their tails much like cats do.

The facial animation was a bigger challenge for Weta. “Jim [Cameron]’s expectations were so high,” says Jones. The motion data from the facial camera fed into a facial rig, but the key to success was a two-part approach.

Jeff Unay created a system based on blend shapes that mimicked a volumetric muscle system. For Neytiri’s face, an interpretation of actress Zoe Saldana’s face, Unay built a total of 1,500 blend shapes. The animators, however, only ever saw around 50 of them, driving the rest indirectly through sliders.
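How roughly 50 visible controls can drive 1,500 underlying shapes is easiest to picture as a generic blend-shape setup (a sketch under assumed numbers, not Unay’s actual rig): each slider fans out to many shape weights, and the final mesh is the neutral mesh plus the weighted shape deltas.

```python
import numpy as np

# Generic blend-shape sketch (not the actual Avatar rig): a few animator
# sliders fan out to many underlying shapes via a mapping matrix.
NUM_VERTICES = 4        # toy mesh
NUM_SHAPES = 1500       # underlying blend shapes
NUM_CONTROLS = 50       # sliders the animators actually see

rng = np.random.default_rng(0)
neutral = rng.standard_normal((NUM_VERTICES, 3))                    # rest pose
shape_deltas = rng.standard_normal((NUM_SHAPES, NUM_VERTICES, 3)) * 0.01
control_to_shapes = rng.random((NUM_CONTROLS, NUM_SHAPES)) * 0.02   # sparse in practice

def evaluate(controls: np.ndarray) -> np.ndarray:
    """controls: (NUM_CONTROLS,) slider values in 0..1 -> deformed mesh."""
    shape_weights = controls @ control_to_shapes                 # (NUM_SHAPES,)
    offset = np.tensordot(shape_weights, shape_deltas, axes=1)   # (V, 3)
    return neutral + offset

sliders = np.zeros(NUM_CONTROLS)
sliders[12] = 1.0    # e.g. a hypothetical "smile" slider
mesh = evaluate(sliders)
print(mesh.shape)    # (4, 3)
```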

The animators teamed up with the facial animation team to post-process the data so that the digital face matched Saldana’s expressions even more closely. “The motion editors were so good at it that they almost became animators,” says Jones. Even though much of the face tracking was good enough for the lip sync and mouth movement, the animators still worked extensively on the eye and eyebrow animation.

In addition to the humanoid characters, Weta also created and animated ten six-legged creatures, four flying animals and a variety of bugs.

“Almost all the creatures have six legs,” says Jones. “It’s both difficult and fun to animate them all at the same time.” To begin with, the animators hid the centre legs and treated the animals as quadrupeds; only then did they add the middle legs back in. Sometimes the creatures lift their front legs and use them as a kind of arm.

To animate the Na’vi riding a flying creature, the animators started with motion capture of the actors on a gimballed “horse”. Because it was usually the creature that drove the Na’vi’s movements, the captured animation was pared back to just the riding posture and eye line.

A new dynamic muscle system gave the strange-looking creatures realistic and believable behaviour. “For Gollum, we had a muscle system that was based on fat layers,” says Saindon. “Now we calculate the connections to the muscles, how they interact and move. We get much more accurate simulations that require less effort.” For the creatures, the animators drove the muscle system with keyframes; for the characters, it was driven by motion data. The animators then refined the results further as required.

Pandora: All 3D

The majority of the film takes place in the jungle, which is always digital and always 3D: the plants, the riverbeds, the waterfalls, the characters, the creatures, the effects. “The views from above may often look like matte paintings,” says Williams. “But they’re all done in 3D space.”

The work was done in Maya. The basis was FBX data created by Lightstorm, showing where Cameron wanted the plants placed. Cameron had laid out nearly 400 different plant types as cards that he could move back and forth in MotionBuilder. Then, because the jungle had to be modelled entirely in 3D, Weta built the corresponding plants, some with just under a million polygons each.

The effects team considered growing the plants during the rendering process, but the jungle was simply too large and complex: it would have taken too much time, and they needed actual geometry for the plants, characters and creatures to interact. So they created a rule-based system instead. Once they had built a plant, they could immediately generate variants, such as younger or older plants, ones with more branches, different heights and so on, simply by changing the seed value at the start to get varied results.
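The principle can be sketched in a few lines (hypothetical parameters, not Weta’s tool): the same species description plus a different seed yields a different but reproducible individual.

```python
import random
from dataclasses import dataclass

@dataclass
class PlantVariant:
    height: float       # metres
    branch_count: int
    age: str

def make_variant(species_base_height: float, seed: int) -> PlantVariant:
    """Derive a reproducible variant of a plant species from a seed.
    Illustrative rules only; the point is that changing nothing but the
    seed gives a new, repeatable individual."""
    rng = random.Random(seed)
    scale = rng.uniform(0.6, 1.4)        # younger or older plant
    return PlantVariant(
        height=species_base_height * scale,
        branch_count=rng.randint(3, 12),
        age="young" if scale < 0.9 else "mature",
    )

# Same species, three different seeds -> three different plants.
for seed in (1, 2, 3):
    print(make_variant(4.0, seed))
```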

In addition, those responsible for the digital gardens also grew plants with Massive. “We gave Massive a terrain with large plants and Massive created a forest from it. A forest with realistic growth rules where, for example, plants fought for light and space to grow,” explains Williams. Rigs inside almost all the plants allowed branches and leaves to move when characters jumped on them or brushed against them as they walked past.
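As a rough stand-in for such growth rules (purely illustrative, not Massive’s actual rule set), one can imagine something like this: a candidate plant only survives if no taller neighbour is already shading its spot.

```python
import random

def grow_forest(width, depth, attempts, seed=0):
    """Toy stand-in for competition-based growth rules: a new plant only
    survives if no taller plant already shades its position. Purely
    illustrative; not the actual Massive rule set."""
    rng = random.Random(seed)
    plants = []  # (x, y, height, shade_radius)
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, depth)
        height = rng.uniform(1.0, 10.0)
        radius = height * 0.5            # taller plants shade a wider area
        shaded = any(
            (x - px) ** 2 + (y - py) ** 2 < pr ** 2 and ph > height
            for px, py, ph, pr in plants
        )
        if not shaded:
            plants.append((x, y, height, radius))
    return plants

print(len(grow_forest(100, 100, attempts=500)))
```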

All the plants are bioluminescent: when the characters walk through the jungle in the dark, the plants light up. Some glow from the inside out; others have a kind of glowing moss growing on them. During the day, the plants are lighter in colour on the upper side of the leaves than on the underside. Initially, the plants were just as blue as the Na’vi, but blue sky combined with blue plants and blue figures proved too monotonous.

Instead, the jungle was given an exotic colour scheme with plants in brilliant shades of red, orange, yellow and green, which reflected off the skin of the figures, making them appear even more complex and real.

For the plants and characters, the rendering team used absorption-based subsurface scattering, which accurately calculates how the individual colour frequencies of the light are absorbed. “Even though the Na’vi have a slight hint of cyan and nuances of red are visible on their ears and pores, this prevented the colour from slipping into purple. We also put a lot of work into the displacement maps,” says Williams. “The aliens didn’t look right without the correct pore structure. Obvious details like pits and minor blemishes brought the close-ups to life.”
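The idea behind absorption-based scattering can be hedged as a per-channel Beer-Lambert falloff: light travelling through the skin is attenuated differently in red, green and blue, which is what keeps a cyan tint from drifting into purple. A minimal sketch with made-up coefficients:

```python
import numpy as np

def absorbed_colour(light_rgb, absorption_rgb, distance):
    """Beer-Lambert attenuation per colour channel:
    transmitted = incoming * exp(-sigma_a * distance).
    The coefficients below are illustrative, not production values."""
    return np.asarray(light_rgb) * np.exp(-np.asarray(absorption_rgb) * distance)

white_light = [1.0, 1.0, 1.0]
navi_skin_absorption = [0.9, 0.3, 0.2]   # red absorbed most -> cyan-ish result

for mm in (0.5, 1.0, 2.0):
    print(mm, absorbed_colour(white_light, navi_skin_absorption, mm))
```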

To light the complex scenes, the team used spherical harmonics, a technique often employed in video games. “It’s a really clever rendering scheme for rendering something very detailed with limited memory,” adds Williams.

Each plant in the jungle absorbs light independently and stores all the lighting information in its geometry. This allowed the lighting team to simply add light to a scene and still get correct results from every plant. James Cameron, for example, built a jungle set and lit it; Weta then took HDRI images of the set to capture the lighting Cameron had envisioned and applied it to the digital set. To complement that existing lighting, in some cases only a key light and a couple of rim lights had to be added. “Then we could put the characters in the scene and everything was fine,” says Letteri.
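Very roughly, spherical harmonics compress the incoming light at a point into a handful of numbers instead of a full environment map. A minimal second-order (nine-coefficient) sketch, which assumes nothing about Weta’s actual renderer:

```python
import numpy as np

def sh_basis(direction):
    """Real spherical-harmonic basis up to order 2 (9 terms) for a unit vector."""
    x, y, z = direction / np.linalg.norm(direction)
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

# Project a few directional light samples into 9 RGB coefficients.
# Sketch only: a real pipeline would integrate over an HDRI, not two samples.
samples = [
    (np.array([0.0, 1.0, 0.2]), np.array([1.0, 0.9, 0.8])),   # warm key from above
    (np.array([-1.0, 0.2, 0.0]), np.array([0.2, 0.3, 0.5])),  # cool fill
]
coeffs = np.zeros((9, 3))
for direction, colour in samples:
    coeffs += np.outer(sh_basis(direction), colour) / len(samples)

def shade(normal):
    """Approximate incoming light at a surface point facing `normal`."""
    return sh_basis(normal) @ coeffs

print(shade(np.array([0.0, 1.0, 0.0])))   # e.g. a leaf facing up
```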

Stereo 3D

The effects artists were able to view the rendered scenes at each of the roughly 20 “view stations” as anaglyph images in Maya, through red/green glasses. “It was easier to just rotate the camera,” says Williams. “It gave us a good intuitive feel for the space in Maya.”
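Such an anaglyph preview is simple in principle: one colour channel comes from the left-eye render, the others from the right-eye render. A generic red/green mix (not the in-house view station):

```python
import numpy as np

def anaglyph_red_green(left_rgb, right_rgb):
    """Build a simple red/green anaglyph: the left eye keeps the red
    channel, the right eye supplies the green channel. Generic sketch,
    not Weta's actual Maya viewer."""
    out = np.zeros_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red from the left eye
    out[..., 1] = right_rgb[..., 1]   # green from the right eye
    return out

left = np.random.rand(4, 4, 3)    # stand-ins for the two rendered eyes
right = np.random.rand(4, 4, 3)
print(anaglyph_red_green(left, right).shape)
```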

One of the challenges of working in stereo 3D was that the effects themselves had to be three-dimensional. In non-stereo films, effects artists put elements such as fire, smoke and clouds on 2D cards that they place in 3D space in front of the camera. In stereo, this results in an effect that looks flat and lacks depth.

The effects artists therefore created a library of 3D elements for water, clouds and other effects, all of them rendered in stereo 3D. Apart from the fact that everything had to be built in true 3D (cheating was not allowed), working in stereo turned out to be much less of a factor than expected. Weta created the film in 2D and knew that everything would hold up in 3D, because the artists could rotate the scenes in Maya.

They only rendered the second eye once Cameron had approved the 2D version. Cameron supplied data such as the convergence plane for the stereo effect. “Jim [Cameron] has done projects in 3D in the past,” says Saindon. “He’s not a guy who relies on 3D effects for their own sake.”
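The effect of the convergence plane can be summarised with the usual stereo rule of thumb: points at the convergence distance have zero screen parallax, nearer points come out of the screen, farther points sink behind it. A back-of-the-envelope sketch using a simplified camera model and illustrative values:

```python
def screen_parallax(interaxial, convergence_distance, object_distance):
    """Approximate horizontal parallax for a point at object_distance when
    the stereo pair converges at convergence_distance. Simplified model:
    zero on the convergence plane, negative (out of screen) in front of it,
    positive (behind the screen) beyond it."""
    return interaxial * (1.0 - convergence_distance / object_distance)

i, c = 0.065, 5.0          # 65 mm interaxial, converge at 5 m (illustrative values)
for z in (2.0, 5.0, 20.0):
    print(f"object at {z} m -> parallax {screen_parallax(i, c, z):+.3f} m")
```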

It’s also not Cameron’s first film to lean heavily on visual effects. The films he is responsible for read like an honour roll of great, Oscar-worthy effects work: “True Lies” (Oscar nomination), “Terminator 2: Judgment Day” (Oscar), “The Abyss” (Oscar), “Aliens” (Oscar). The same applies to films from Weta: the studio, too, delivers award-winning effects, as in the “Lord of the Rings” trilogy and “King Kong”. Many people will describe the work on the Na’vi and the creatures as a new breakthrough, especially when it comes to facial animation. Others will highlight the photorealistic three-dimensional landscapes, realised in a quality that has never been seen before, that could hardly have been imagined, and that is only possible with computer graphics and digital artistry.