The CG apes are loose

Review: In DP 06:2014, Weta Digital set to work on the second instalment of the Planet of the Apes saga. A conversation with VFX supervisor Dan Lemmon about primates, performance capture and premium VFX.

Even more CG apes, plus performance capture out in the open – including rain and mud: how did Weta Digital master the challenges of the second part of the ape epic “Planet of the Apes – Revolution”? We spoke to VFX supervisor Dan Lemmon about it.

After a laboratory-bred virus in the first part, “Planet of the Apes – Prevolution”, made the apes as smart as humans and able to talk, and they subsequently built a new home in the forest, the second film begins ten years later: the virus has since wiped out the majority of humanity, but the apes are immune and live peacefully in a tree-city colony of 2,000. When humans and apes clash unintentionally, war breaks out between the primates and the survivors, who have settled near the Golden Gate Bridge.

To tell this story, Weta created huge CG ape crowds, sometimes with up to 1,200 CG apes in a single scene, riding and fighting on horseback, among other things. The CG apes are the main protagonists across the film’s 127-minute running time, which for Weta meant 3,000 VFX shots. The film was released in German cinemas on 7 August and was directed by Matt Reeves, who also made the monster film “Cloverfield”.

The outdoor scenes were shot in the forests of Vancouver and near New Orleans. Being able to move the production to real locations enhanced the realism – also for the performance-capture actors, who no longer had to work in isolation from the rest of the cast (bit.ly/VSQiSf). At times, up to 50 ape actors in performance capture suits walked through the forest on set. To capture their performances, Weta used 35 people per team and up to 50 motion capture cameras per scene, mounted throughout the forest.

VFX supervisor Dan Lemmon

Daniel “Dan” Lemmon and his team were nominated for an Oscar in the “Best Visual Effects” category for their work on the first part, “Planet of the Apes – Prevolution”, but lost to “Hugo Cabret” in 2012. Lemmon can already look back on an eventful film career: he worked as a digital artist on “The Fifth Element” back in 1997 and later on “Fight Club”, and one of his first projects as a supervisor for Weta was “I, Robot”. Other projects in his portfolio include “Jumper”, “Avatar” and “Man of Steel”.

Interview with Dan Lemmon about “Planet of the Apes – Prevolution”: bit.ly/VT9vmM.

The film was shot in native stereo 3D, and the team used a small rail system to transport all the technical equipment to the location in the middle of the forest. Making the technology weatherproof for the wet shooting conditions in rainy British Columbia was a major challenge.

DP: How did the performance capture shoot go outside in the forest – in the rain?

Dan Lemmon: On the shoot for the first part, we were able to move the equipment outdoors for the first time and get it working in sunlight – but at the time the system was still very fragile. For the second part, our most important task was to get the technical equipment to a level where it was equally prepared for heat, cold and humidity. The camera cables are a weak point, so we switched to wireless cameras and packed them in waterproof bags. We also replaced the markers on the actors’ motion capture suits. The previous ones were very sensitive and broke during stunt scenes. Now we have rubberised markers that are waterproof and can withstand collisions.

DP: What tools did you use for “Planet of the Apes – Revolution”?

Dan Lemmon: We wrote most of the tools in-house, including one for fur simulation and fur modelling, fluid simulation software and special software to transfer the performance capture data to the 3D ape models. We also use Maya as our main animation package, which we supplement with numerous plug-ins, as well as software from Red Giant. We use Nuke for compositing and Houdini for particle effects.

DP: For which effects did you have to upgrade the pipeline?

Dan Lemmon: The rain effect on the apes was the biggest challenge this time. The advantage of shooting in British Columbia was that it rained almost constantly. When the rain stopped or eased off, we started artificial rain on set and added the same kind of rain to the digital characters. We gathered a lot of reference footage of water hitting fur. We put a lot of work into this process, but our water simulation worked very well. We realised the CG rain with Nuke, Houdini and Naiad.
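
Naiad and Houdini carried the heavy simulation work; as a rough illustration of the idea underneath such a setup – no more than a toy sketch with invented numbers, not Weta’s pipeline – a rain pass boils down to integrating drops under gravity and recording where they land on the character, so splash and wet-fur effects can be triggered at those points:

```python
import random

GRAVITY = -9.81   # m/s^2
DT = 1 / 24       # one film frame at 24 fps
HIT_HEIGHT = 1.5  # top of a stand-in character bound, in metres (made up)

def spawn_drop():
    """Start a drop a few metres above the character."""
    return {"y": 5.0 + random.random() * 3.0, "vy": -2.0}

def step(drop):
    """One explicit Euler step; returns True when the drop lands."""
    drop["vy"] += GRAVITY * DT
    drop["y"] += drop["vy"] * DT
    return drop["y"] <= HIT_HEIGHT

drops = [spawn_drop() for _ in range(1000)]
hits = 0
for _ in range(48):  # simulate two seconds of rain
    for d in drops:
        if step(d):
            hits += 1
            d.update(spawn_drop())  # recycle the drop, log one hit event
print(f"{hits} hit events to drive splash and wet-fur passes")
```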

DP: How did the active markers work outside in daylight?

Dan Lemmon: The markers’ LEDs work in the infrared spectrum, so the signals are invisible to the human eye. They flash extremely brightly for a few milliseconds. We synchronised the markers with the motion capture cameras so that the camera shutter opened exactly when the markers sent their signal. Outdoors, we had to make the flash pulses bright enough to outshine the sunlight and any other light sources in the area. This allowed the cameras to see the markers as crisp dots rather than washed-out fields.
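
As a back-of-the-envelope illustration of why this synchronisation matters – the figures below are invented for the example, not Weta’s actual specs – the camera only integrates light while the shutter is open, so locking a short exposure to the bright pulse greatly improves the marker’s contrast against continuous sunlight:

```python
# Toy model: constant ambient IR vs. a short, bright LED pulse.
SUN_IR_POWER = 1.0   # relative ambient IR irradiance (always present)
PULSE_POWER = 50.0   # relative IR irradiance of the marker pulse
PULSE_LENGTH = 0.002 # pulse duration in seconds ("a few milliseconds")

def marker_contrast(shutter_open: float) -> float:
    """Ratio of marker energy to ambient energy over one exposure,
    assuming the shutter window is centred on the pulse."""
    pulse_time = min(shutter_open, PULSE_LENGTH)
    marker_energy = PULSE_POWER * pulse_time
    ambient_energy = SUN_IR_POWER * shutter_open
    return marker_energy / ambient_energy

# A long exposure lets ambient light wash the marker out...
print(f"1/50 s exposure: contrast {marker_contrast(1 / 50):.0f}:1")
# ...while a short exposure synced to the pulse keeps it a crisp dot.
print(f"2 ms synced exposure: contrast {marker_contrast(0.002):.0f}:1")
```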

DP: Were you able to fully utilise all the performance capture data for the animation?

Dan Lemmon: Wide shots where you have to cover a lot of territory are something the motion capture cameras don’t handle well, and the data gets noisy. It’s still the best choice for such shots, though, because we can capture more actors at once. We brought the performance capture data into the post-production workflow as AMC files, a standard format for motion capture data.
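
The AMC format Lemmon mentions (the Acclaim motion capture format, usually paired with an ASF skeleton file) is plain text: after a short header, each frame starts with a bare frame number, followed by one line per bone with its channel values. A minimal reader might look like this sketch – Weta’s actual ingest tools are, of course, in-house:

```python
def read_amc(path):
    """Parse an AMC file into {frame number: {bone: [channel values]}}."""
    frames = {}
    current = None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", ":")):
                continue              # skip comments and header keywords
            if line.isdigit():        # a bare integer starts a new frame
                current = frames.setdefault(int(line), {})
            elif current is not None:
                bone, *values = line.split()
                current[bone] = [float(v) for v in values]
    return frames

# e.g. read_amc("take01.amc")[1]["root"] -> the root bone's six
# channels (translation + rotation) for the first frame
```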

DP: Didn’t the large amount of performance capture data from interacting actors pose a problem?

Dan Lemmon: Of course, the more actors you have in a scene, the more difficult it is to generate good data. In some scenes we used only 8 cameras for data capture, in others up to 40 or 50, depending on how many cameras we could hide or place outside the frame.

DP: With performance capture data from around 50 actors, plus a native stereo 3D shoot, a huge amount of data must have accumulated. How was this organised?

Dan Lemmon: Yes, it was a huge amount of data, an incredible amount of equipment and all the fun that comes with a stereo 3D shoot. But we have a robust pipeline for ingesting that volume of data, so it wasn’t too bad – and that had more to do with good management than with specialised software.

DP: Wouldn’t it have been easier for the Weta team to realise the project without performance capture at all?

Dan Lemmon: The good thing about performance capture data is that you get a realistic recording of the actors’ actions. You get small, natural movements and emotions that you wouldn’t have achieved with keyframe animation. If you want to recreate reality as accurately as possible in a project and want the audience to believe that the digital characters are real gorillas and chimpanzees, stylisations that work well in a Pixar animated film, for example, will not work. The right timing and sense of weight are crucial for a realistic impression.

DP: What other advantages did it have?

Dan Lemmon: Performance capture gives a CG character the individual expression of a personality; an actor defines the character’s behaviour and decisions. That creates continuity, which is difficult to maintain in a keyframed production when different people work on the same character over the course of the project. In a keyframe project, you can often tell exactly which animator worked on which shot, because everyone has their own style. We also shot with performance capture because we wanted the shoot to be traditional: the director wanted to work with the actors, and all the actors should be able to interact with each other.

DP: How accurately was the animation team able to transfer Andy Serkis’ acting to the CG ape Caesar?

Dan Lemmon: Transferring Andy’s performance to Caesar as realistically as possible was a lot of work: one of the main difficulties was that Andy’s face doesn’t resemble a chimpanzee’s at all. Mapping his facial expressions onto the nose and mouth shape and the facial muscles of an ape was therefore complicated, and many adjustments were necessary.
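
Conceptually, this retargeting means human expression channels drive ape-specific blendshapes rather than being copied vertex-for-vertex. The sketch below is a deliberately simplified, hypothetical illustration of that idea – all names and weights are invented, and Weta’s facial solver is far more sophisticated:

```python
# Tracked activation of human expression channels, 0..1 (made-up values).
human_pose = {"smile": 0.8, "brow_raise": 0.3, "jaw_open": 0.1}

# Hand-tuned mapping: each human channel drives several ape shapes,
# because chimp mouth and muzzle anatomy doesn't line up one-to-one.
RETARGET = {
    "smile":      {"ape_lip_corner": 0.6, "ape_muzzle_flex": 0.5},
    "brow_raise": {"ape_brow": 1.0},
    "jaw_open":   {"ape_jaw": 0.9, "ape_muzzle_flex": 0.2},
}

def retarget(pose):
    """Accumulate weighted contributions onto the ape's blendshapes."""
    ape_pose = {}
    for channel, weight in pose.items():
        for shape, gain in RETARGET.get(channel, {}).items():
            ape_pose[shape] = ape_pose.get(shape, 0.0) + weight * gain
    # clamp so stacked contributions stay in the valid blendshape range
    return {s: min(w, 1.0) for s, w in ape_pose.items()}

print(retarget(human_pose))
# -> e.g. ape_lip_corner 0.48, ape_muzzle_flex 0.42, ape_brow 0.3, ape_jaw 0.09
```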

DP: The statement “Artists apply digital make-up to actors” by Andy Serkis in the FMX lecture made waves online (bit.ly/1g3SqQt). What is your opinion on this?

Dan Lemmon: His statement should be seen as a metaphor; we’ve used the expression a lot before. I know the article you’re referring to. It took Andy’s statement out of context – the author twisted his words and gave the impression that he had said: “I do everything and the animators do nothing.” But he never literally said that. Andy trusts the artists to transfer his performance to the digital character in the best possible way. The actors know that we make changes in post-production and don’t copy every blink exactly. We also speed up or slow down certain aspects, or play facial expressions differently. We do this with real actors all the time – it’s the bulk of our work, and it was nothing special on this project.