This article by Rayk Schroeder originally appeared in DP 02:2013. See also Mirja Fürst's interview on Cloud Atlas in the same issue.
With production costs of around 100 million dollars, “Cloud Atlas” by Tom Tykwer and the Wachowski siblings is currently one of the most expensive independent films ever produced. The elaborate visual effects of the almost three-hour work, in which six different time periods had to be realised, are impressively successful. Rise | Visual Effects Studios worked on over 100 shots for the film, transforming present-day Glasgow into the San Francisco of the 1970s, among other things.
David Mitchell wrote the novel “Cloud Atlas” in 2004. It tells six interconnected stories spanning 500 years, starting on a sailing ship in 1849 and moving on to Cambridge in the 1930s. It continues in San Francisco in 1973, London in 2012 and Neo-Seoul in 2144, and ends in a post-apocalyptic world in 2346. The book was considered unfilmable because of its intricately interwoven stories, but the directors Tom Tykwer (Perfume, Run Lola Run) and Lana and Andy Wachowski (the Matrix trilogy) wrote a screenplay and dared to adapt the novel for the screen. The result was the most expensive German cinema film to date, in which the same actors embody different ages, genders and races in each era. The film boasts a host of stars: Tom Hanks, Halle Berry, Hugo Weaving, Ben Whishaw, Susan Sarandon, Hugh Grant and many more. The visual effects shots were as varied as the individual stories. Rise | Visual Effects Studios worked on a total of over 100 shots from almost all of the episodes. The following article describes how present-day Glasgow was transformed into the San Francisco of the 1970s, and how the LIDAR scans from the set helped with far more than just the matchmoving.
The task
The Luisa Rey episode takes place in San Francisco in the 1970s. Luisa Rey, played by Halle Berry, is a journalist who gets to the bottom of a nuclear power scandal and is then hunted by a contract killer. The story, however, was not filmed in San Francisco but, among other places, in Glasgow. The streets were largely redesigned for this purpose, with typical San Francisco letterboxes, rubbish bins, street signs and other details. Because Glasgow has left-hand traffic, unlike right-hand-traffic San Francisco, all the road markings had to be changed. To reduce the retouching effort, they were painted over with an asphalt-coloured coating; as a result, they were no longer visible in many shots and did not have to be retouched away.
To transform Glasgow into San Francisco, the streets were to be extended with typical wooden houses, while existing buildings were to be replaced or remodelled. The typical telegraph poles also had to be integrated; for this purpose, matching pole stumps (redecorated lampposts or newly erected poles) stood on the set. In addition to these complex set extensions, comparatively simple work also had to be done, such as removing road markings that were still visible, painting out surveillance cameras, and adding muzzle flashes with the corresponding light interaction and matching bullet holes.
LIDAR scanning on set
Pointcloud9, a subsidiary of Rise | Visual Effects Studios, scanned all sets and vehicles with a Faro LIDAR scanner. The point clouds were cleaned, converted into meshes and made available to the VFX houses involved as required.
First, a brief explanation of the LIDAR scanning process: LIDAR stands for Light Detection And Ranging. A LIDAR scanner works with a laser beam, usually infrared, which is directed in various directions by a rotating mirror and used to measure the distance to the objects and surfaces it hits. The scanning can be carried out at different quality levels and resolutions, so that, depending on the time available or the quality required, details down to a fraction of a millimetre can be resolved. The measured points form a point cloud that represents the environment and can be processed further. Such a scan takes only a few minutes and is also quickly set up and prepared. LIDAR scanners have been used for some time in geographical surveying and in construction, and in recent years they have become increasingly common in visual effects.

The San Francisco scene presented here mainly takes place along a complete street, which was scanned in its entirety for later processing. A total of four scans were needed to capture all the houses along the long street. So that these scans could later be linked together, special marker spheres were set up as reference points for the scanner. The scanner also recorded the colour values of the measured points, which simplified the subsequent texturing of objects. The LIDAR scans thus provided a dimensionally accurate, millimetre-precise image of the complete set.
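To illustrate the measuring principle, here is a minimal Python sketch rather than the Faro software's actual processing, with a simplified angle convention: each laser pulse returns a pan angle, a tilt angle and a distance, which together give one Cartesian point of the cloud.

import numpy as np

def polar_to_points(pan, tilt, dist):
    # pan/tilt in radians, dist in metres; all arrays of equal length
    x = dist * np.cos(tilt) * np.cos(pan)
    y = dist * np.cos(tilt) * np.sin(pan)
    z = dist * np.sin(tilt)
    return np.stack([x, y, z], axis=-1)  # (N, 3) points in scanner space

# three example pulses fired at different mirror positions
pan = np.radians([0.0, 90.0, 180.0])
tilt = np.radians([10.0, 0.0, -5.0])
dist = np.array([12.4, 8.7, 25.1])  # distances measured per pulse
print(polar_to_points(pan, tilt, dist))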
Further processing of the LIDAR scans
In the Berlin office, the scans had to be processed further using various proprietary tools. First, the artists registered the scans belonging to a set: using the marker spheres, they were linked together into one huge point cloud, similar to a stitched panoramic photo, but in 3D. In the end, this point cloud consisted of over 200 million points. Areas that one scan could not capture were covered by the others, so even partially occluded objects could be represented in the point cloud. Of course, areas that were not needed for the actual set were also recorded during scanning. To keep only the necessary information, the point clouds were first cleaned and all superfluous points were deleted. Highly reflective surfaces such as car paintwork in particular made it difficult for the LIDAR scanner to measure distances accurately; instead of smooth surfaces, the point cloud contained scattered, mismatched points, which had to be smoothed and filtered manually with the appropriate tools. The meshes could then be created much more easily and accurately and exported at different levels of detail for the various areas of application.
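The individual steps can be sketched with the open-source Open3D library as a stand-in for the proprietary tools; the file names and parameter values are purely illustrative, and a plain ICP alignment here stands in for the marker-sphere registration used on set.

import numpy as np
import open3d as o3d

# load the four scans of the street (illustrative file names)
scans = [o3d.io.read_point_cloud("street_scan_%d.ply" % i) for i in range(4)]

# register scans 2-4 onto the first one
merged = scans[0]
for scan in scans[1:]:
    result = o3d.pipelines.registration.registration_icp(
        scan, merged, max_correspondence_distance=0.05, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    scan.transform(result.transformation)
    merged += scan

# clean up: thin out the cloud and drop stray points such as the mismeasured
# returns from reflective car paint
merged = merged.voxel_down_sample(voxel_size=0.01)
merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# mesh the cloud and export it for use in Maya and Nuke
merged.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=10)
o3d.io.write_triangle_mesh("street_lidar_mesh.obj", mesh)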
Matchmoving with the help of LIDAR scans
One area of application for the LIDAR scans was matchmoving with PFTrack and Syntheyes. The exact geometry was particularly helpful in street canyons for tracking difficult camera movements and solving them at the correct scale. Because the spatial information was already present in the geometry, distinctive points only had to be assigned to the corresponding locations in the shots, which made the matchmoving much easier to handle.
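The underlying idea can be illustrated with OpenCV's solvePnP, which survey-based camera solving in trackers like PFTrack and Syntheyes builds on in a far more sophisticated form; all coordinates and the intrinsics below are invented.

import numpy as np
import cv2

# 3D positions of distinctive set features, read off the LIDAR mesh (metres, invented)
object_points = np.array([[0.0, 0.0, 0.0], [4.2, 0.1, 0.0], [4.3, 3.0, 0.2],
                          [0.1, 3.1, 0.1], [2.0, 1.5, 5.0], [6.0, 0.5, 8.0]])

# their tracked 2D positions in one frame of the plate (pixels, invented)
image_points = np.array([[512.0, 384.0], [880.0, 402.0], [869.0, 120.0],
                         [518.0, 110.0], [700.0, 260.0], [1210.0, 330.0]])

# rough intrinsics for a 2K plate; the focal length in pixels is an assumption
K = np.array([[2200.0, 0.0, 1024.0], [0.0, 2200.0, 576.0], [0.0, 0.0, 1.0]])

# solve the camera pose for this frame directly from the 3D-2D correspondences
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())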
San Francisco from Maya
To make the streets look like San Francisco, 3D artist David Salamon modelled various typical wooden houses with lots of details such as hanging cables, wooden joints, curtains in the windows and fire escapes, and he also built telegraph poles with tangled cables and fuse boxes. With the help of the LIDAR scans, the entire street was dressed with these houses and telegraph poles. The production company provided Rise | Visual Effects Studios with HDR images taken from various positions in the streets. These made it possible to simulate reflections in the windows of the houses by projecting the HDR images onto rough geometry of the surrounding buildings derived from the LIDAR scans, and they were also important for lighting the scene correctly. The respective shot cameras were positioned in the street at the locations matching the set. The Maya render submitter for the render manager RenderPal had already been adapted on earlier projects so that several cameras could be sent to render at the same time; for the San Francisco shots, all shot cameras could therefore be submitted at once, with the submitter creating a separate render job for each camera. In the end, there were several EXR sequences for each shot containing all the important passes as well as ID passes for the various building and telegraph pole parts, so that their colours could be adjusted separately.
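The per-camera submission can be roughly illustrated in Maya Python; the actual RenderPal submitter at Rise is in-house code, and submit_render_job() as well as the "shotCam" naming convention are hypothetical stand-ins.

import maya.cmds as cmds

def submit_render_job(camera_shape, job_name):
    # hypothetical stand-in for the in-house RenderPal submission
    print("submitting job", job_name, "for", camera_shape)

all_cameras = cmds.ls(type="camera", long=True)
shot_cameras = [c for c in all_cameras if "shotCam" in c]  # assumed naming convention

for cam_shape in shot_cameras:
    # make exactly this camera renderable and switch all others off ...
    for other in all_cameras:
        cmds.setAttr(other + ".renderable", other == cam_shape)
    # ... then hand over one render job per camera, as the adapted submitter did
    transform = cmds.listRelatives(cam_shape, parent=True)[0]
    submit_render_job(cam_shape, "sanfrancisco_" + transform)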
Compositing in Nuke
In Nuke, the various passes (Diffuse, Specular, Reflection, Refraction, Ambient Occlusion et cetera) were combined and the houses were colour-matched to the shot plates using the ID passes. In some shots, modern surveillance cameras could be seen, which were concealed by skilfully positioning the telegraph poles and therefore no longer had to be retouched out.
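A minimal Nuke Python sketch of such a pass rebuild; the layer names and the building ID channel (house_03_id) are assumptions, not Rise's actual comp setup.

import nuke

# register the assumed AOV layers (in a real comp they come from the EXR itself)
passes = ["diffuse", "specular", "reflection", "refraction"]
for layer in passes + ["occlusion", "house_03_id"]:
    nuke.Layer(layer, [layer + ".red", layer + ".green", layer + ".blue"])

exr = nuke.nodes.Read(file="sanfrancisco_houses.####.exr")

# pull the lighting passes out of the multichannel EXR and sum them back up
shuffles = [nuke.nodes.Shuffle(inputs=[exr], **{"in": p}) for p in passes]
beauty = shuffles[0]
for shuf in shuffles[1:]:
    beauty = nuke.nodes.Merge2(inputs=[beauty, shuf], operation="plus")

# multiply the ambient occlusion pass on top
ao = nuke.nodes.Shuffle(inputs=[exr], **{"in": "occlusion"})
beauty = nuke.nodes.Merge2(inputs=[beauty, ao], operation="multiply")

# grade one house separately via its ID pass on the Grade's mask input
# (the mask channel is then set to that matte's alpha in the Grade properties)
house_id = nuke.nodes.Shuffle(inputs=[exr], **{"in": "house_03_id"})
grade = nuke.nodes.Grade(inputs=[beauty, house_id])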
Where a surveillance camera could not be concealed by a telegraph pole, it was retouched out. The retouching of the road markings was also simplified by the LIDAR scans: in the Maya scene, rough geometry for the road was created from the LIDAR scan and exported as an OBJ together with the associated camera. This made it possible either to project clean patches onto the road or to render the road UV-unwrapped in Nuke, paint out the markings on the UV map and project the retouched texture back through the shot camera.
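An illustrative Nuke Python layout of that UV roundtrip; the node classes are those of the Nuke versions of that era, all file names are placeholders, and the final wiring of the ScanlineRender is left as a comment.

import nuke

plate = nuke.nodes.Read(file="luisa_street.####.dpx")
road = nuke.nodes.ReadGeo2(file="road_from_lidar.obj")  # rough road mesh from the LIDAR scan
cam = nuke.nodes.Camera2()  # in practice the matchmoved shot camera exported from Maya

# project the plate onto the road geometry through the shot camera
proj = nuke.nodes.Project3D(inputs=[plate, cam])
textured_road = nuke.nodes.ApplyMaterial(inputs=[road, proj])

# a ScanlineRender set to UV projection bakes the road into texture space, where
# the markings can be painted out once; rendering the patched UV map back through
# the same camera returns the retouch to every frame of the shot
uv_bake = nuke.nodes.ScanlineRender(projection_mode="uv")
# connect textured_road to the ScanlineRender's obj/scn input and cam to its cam input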
An OBJ of the complete street with all buildings and traffic signs was also exported for Nuke, which made it easier to position roto shapes for street signs or buildings as well as the retouches for the surveillance cameras.

Among other things, the scene shows a car driving into a fire hydrant, from which a fountain of water then shoots up. Because the 3D houses or a matte painting had to sit behind this fountain in some shots, the water had to be placed over them again and integrated correctly into the shot. Nuke's particle system was used for this: dots were emitted and adapted to the original motion-blurred water droplets with the appropriate motion blur settings and colour corrections (the principle is sketched below).

Since Halle Berry appears in the foreground of several shots, her hair often covered the areas where the 3D houses were supposed to be, while the plate behind it often showed only a burnt-out bright sky. Her hair therefore had to be extracted from the plate as far as possible and colour-matched to the darker background houses or, where necessary, corresponding hair patches had to be tracked onto her head.

In addition to the tasks already described, the team had to retime some shots by a “crooked” factor. PFTrack often delivered the best results here, especially for fine details, so the retouching of retiming artefacts could be reduced to a minimum, though not eliminated completely.

The challenge in the sequence presented here was to achieve a uniform look for the set extensions across all shots and to integrate them correctly as a rather inconspicuous background element. Since everything was based on a single Maya scene that had been set up once, the 3D department had created an excellent foundation, so the main compositing effort consisted of colour-matching the renderings to the shot plates and rotoscoping any foreground elements.
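For the fountain mentioned above, the underlying idea can be sketched outside Nuke in a few lines of Python; the numbers are invented and the real setup used Nuke's particle nodes. The key point is that each dot's motion blur streak comes from its velocity and the shutter time, which is what lets CG dots match the blurred plate droplets.

import numpy as np

rng = np.random.default_rng(7)
n = 500
gravity = np.array([0.0, -9.81])
dt = 1.0 / 24.0        # one frame at 24 fps
shutter = 0.5          # 180 degree shutter

pos = np.zeros((n, 2))                                  # all droplets start at the hydrant
vel = rng.normal([0.0, 8.0], [1.5, 2.0], size=(n, 2))   # mostly upwards, with some spread

for frame in range(1, 25):                              # simulate one second
    vel += gravity * dt
    pos += vel * dt
    # each dot becomes a streak between its position at shutter open and shutter close
    streak_start = pos
    streak_end = pos + vel * dt * shutter
    if frame == 12:
        print("frame 12, first droplet streak:", streak_start[0], "->", streak_end[0])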
The entire film was shot on analogue 35 mm film. In some shots with a static camera, this led to the following problem: the image was not stable within itself, and individual areas of the frame weaved in different ways. That made integrating the CG elements extremely difficult, because simple (planar) tracking was not sufficient. One option would have been to link individual trackers to individual points of a GridWarp and adapt the 3D telegraph poles to the unstable image that way, but this proved too time-consuming and unwieldy. The artists Gene Hammond-Lewis and Gonzalo Moyano Fernandez therefore wrote a triangulation script. It works like a cross between a CornerPin and a GridWarp: it automatically connects all trackers so that a network of triangles spans between them, which makes it possible to track many different points and automatically generate an STMap from these trackers. This STMap distorted the telegraph poles to match the plate (a sketch of the principle follows below).

The Luisa Rey sequence was handled almost entirely at Rise; only some minor retouching and muzzle flashes were done directly at the production company by its in-house artists. The sequence also included shots outside the street scene described here: after Luisa Rey visits a nuclear power plant on an island, a hitman forces her car off the bridge as she leaves, and she sinks with it. For several shots, a matte painting of the nuclear power station had to be created for both day and night. The original bridge was in Scotland, and of course Luisa Rey and her car could not actually be pushed off it into the water. Part of the bridge was therefore scanned and its road surface recreated on the tarmac of Tempelhof Airport in Berlin. Elements such as the bridge railings, lanterns and surrounding water were all added later, and the car was partially replaced by a 3D model.
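The triangulation warp described above can be sketched in numpy and scipy; this is not the artists' actual Nuke tool, and the tracker positions are invented. Trackers are triangulated, each pixel gets barycentric weights inside its triangle, and those weights blend the per-tracker positions into a dense STMap.

import numpy as np
from scipy.spatial import Delaunay

width, height = 1920, 1080

# tracker positions in the reference frame (where the CG lines up) and in the
# current, weave-distorted frame (purely illustrative values)
ref = np.array([[200.0, 200.0], [1700.0, 210.0], [960.0, 900.0], [300.0, 950.0]])
cur = ref + np.array([[1.5, -0.8], [-2.0, 0.4], [0.7, 1.9], [-1.1, 0.6]])

tri = Delaunay(cur)  # the network of triangles spanned between the trackers

# for every output pixel, find its triangle and its barycentric weights ...
ys, xs = np.mgrid[0:height, 0:width]
pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
simplex = tri.find_simplex(pix)
inside = simplex >= 0

T = tri.transform[simplex[inside]]
b = np.einsum("nij,nj->ni", T[:, :2], pix[inside] - T[:, 2])
bary = np.c_[b, 1.0 - b.sum(axis=1)]

# ... and blend the reference-frame tracker positions with those weights; per pixel
# this gives the position at which to sample the CG render, i.e. an STMap once
# normalised to the 0-1 range
src = np.einsum("ni,nij->nj", bary, ref[tri.simplices[simplex[inside]]])
stmap = np.zeros((height * width, 2))
stmap[inside] = src / [width, height]
stmap = stmap.reshape(height, width, 2)  # red = x, green = y, as Nuke's STMap node expects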
For the shots of the sinking car, the artists Simon Ohler and Andreas Giesen created realistic air bubbles in Houdini, which were combined with rotoscoped elements. The water pressure caused the car's windows to crack over time; these cracks were created along splines, also in Houdini, and their growth was animated manually with keyframes.
Summary
In total, Rise was involved in five of the six episodes. Because the tasks were so diverse, the work on the more than 100 shots was extremely varied and offered a wide range of challenges. In addition to the VFX shots described above, an aeroplane was blown up and further CG environments, such as a satellite station on a mountain, were created. After completing “Cloud Atlas”, Rise further optimised its internal LIDAR/Maya workflow so that a 3D scene can easily be lit using HDR images projected onto scanned geometry. Once a scanned location has been surveyed and the HDR images processed, creatures, characters, vehicles or anything else that has to move through a set can be lit automatically in next to no time.
With the help of captured mirror balls, it is also possible to change the lighting situation from shot to shot, for example if the 3D object needs to be in the same location once by day and once by night. I will report on this procedure in more detail in the next issue of DP.