In autumn 2021, six of us, the second-year VFX students at the University of Television and Film Munich, began gathering ideas for our first film exercise, meeting weekly under the direction of Prof. Jürgen Schopper. Planet B began with the story of an old man who breeds biotopes and discovers a small human in one of them. Once the teams for the realisation of the films had been formed, however, this idea soon developed into a futuristic sci-fi story that comments on the man-made decay of the environment and conveys the urgency of finding a solution.
by Alexander Hupp, Franziska Bayer, Ines Timmich
After further development, the old man who created a biotope with a human became a young woman who, together with other scientists, wants to create an entire planet in order to save humanity.

The rough story was finalised at the beginning of 2022, after which the implementation began. Consequently, the next step was to find the resolution of our scenes in the various environments, for which storyboards, animatics and blocking were created in a fluid process over several weeks. Our pipeline TD Jonas Kluger worked with us in parallel to create an efficient working environment and pipeline. In addition, a lot of concept art was created, as finding our style, which is explained in more detail below, was also a challenge.




Storyboard
In the storyboard phase, we worked on the visualisation of our script with great support from Prof. Michael Coldewey. During this week, we worked intensively on how we wanted to tell our story visually. It was clear to us early on that our camera language should be static and calm, which also benefited our 2D look.

The storyboard was revised several times, so that in the end we only moved on to the pre-visualisation phase with the fourth version. There, together with Dr Rodolfo Silveira, we converted the finished storyboard into a rough pre-vis. This consisted of 2D animations that helped us to define the mood and timing of the shots and their sequence more precisely.

Concept/design
We were certain from the outset that our film should combine 2D drawn elements with 3D animation. Our style references included the games Valorant and Borderlands, the League of Legends series “Arcane” and various episodes of the series “Love, Death & Robots”: “The Witness” and “Jibaro”, both directed by Alberto Mielgo. The aesthetic of our film went through many stages: initially we wanted to make everything steampunk-style, in the meantime we focused on a retro-futuristic look, but in the end it became a mixture of grunge and pre-apocalyptic aesthetics.

To “nail” this look stylistically, we designed many different concepts for the two planets, the environments and our protagonist – who still bears the mysterious title “Creator”. Not only the look, but also the practicality of the various assets and environments was important. For example, the Creator’s robotic arm had to be anatomically similar to a real human hand so that it could be rigged realistically. For reasons of effort and time, we opted for a slicked-back short haircut that moved little or not at all, and a breathing mask that covered most of the lower half of her face. For the two planets, we concentrated primarily on colour concepts. The old Earth, destroyed by climate change, was to be bathed in dry, desert-like ochre and toxic sulphur yellow, while the new Planet B shone in rich turquoise, teal and blue tones.

We based the design of the giant satellite dish on a real giant dish, the Arecibo Observatory in Puerto Rico, which was the largest single-aperture telescope in the world until 2016, but collapsed in 2020 (similar to our dish). To get a feel for the framing and colour mood of our film, we created key concepts for the individual key scenes.
Modelling, texturing & rigging
We built the 3D models with great help from our external lecturer Helmut Stark. He showed us how to model the biotope or the bunker corridor, for example, in a detailed and topologically sensible way. Initially, we had a complete model of the bunker including a detailed interior, but this was later replaced in many scenes by matte paintings, which better matched the style of the 2D look. To give our shots the drawn look, we often rendered a frame from the camera’s point of view, painted over the existing texture of the model in Krita and projected this drawing back onto the models. By focussing on this texturing method, we ended up only needing simple models that had the right silhouette and dimensions. But not everything was textured in this way.
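The core of this camera-projection workflow is mapping each point on the model back to a position in the painted frame. A minimal sketch of that mapping, assuming an idealised pinhole camera looking down its negative Z axis (function name and values are illustrative, not from the production pipeline):

```python
# Minimal sketch of camera projection: mapping a 3D point on a model
# to normalised image (UV) coordinates, as used when a painted-over
# frame is projected back onto the geometry from the camera's view.
# All names and numbers are illustrative.

def project_to_uv(point, cam_pos, focal=1.0):
    """Project a 3D point (camera at cam_pos, looking down -Z) to UV in [0, 1]."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide, then shift from [-0.5, 0.5] into [0, 1]
    u = focal * x / -z + 0.5
    v = focal * y / -z + 0.5
    return (u, v)

# A point straight ahead of the camera lands in the image centre:
print(project_to_uv((0.0, 0.0, -2.0), (0.0, 0.0, 0.0)))  # → (0.5, 0.5)
```

Because the painted texture is only valid from this one viewpoint, the method works best for static cameras, which fits the calm camera language described above.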

Some models, such as the Creator’s work interface and the space capsule in which our protagonist later flies into space, were painted by hand in Adobe Substance Painter. This allowed us to show the models from several perspectives without having to project something new onto them each time. The Creator herself was also painted by hand in Substance Painter. We added several lines and hard shadows, especially around her eyes and ears, to enhance the comic/2D look. The rigging was done under the very helpful supervision of our external lecturer Benc Orpak, who showed us how to rig our protagonist realistically. For her facial expressions, we used shape keys to animate her eyes and eyebrows in particular.

Animation, motion capture & simulation
Overall, it would have been far too time-consuming to animate the entire character “by hand”, so we decided to record the body movements using motion capture, for which we used the Xsens system available at the university. We also had to capture the interactions that would later be seen in the film, so we built a similar environment on set for our motion-capture actress to interact with. We made sure that the scaling was correct so that we could use the data without making too many changes.
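The point about correct scaling can be illustrated with a toy retargeting calculation: root translations recorded on the performer are scaled by the ratio of character height to performer height, so matching heights means the data can be used almost unchanged. The heights and positions below are placeholders, not measurements from the shoot:

```python
# Toy sketch of mocap retargeting scale: translations recorded on the
# actor are scaled by character height / actor height. If the scales
# match, the factor is 1.0 and the data needs no correction.
# All values here are hypothetical.

def retarget_translation(actor_pos, actor_height, char_height):
    s = char_height / actor_height
    return tuple(round(c * s, 3) for c in actor_pos)

# Identical heights → data usable unchanged:
print(retarget_translation((0.4, 0.0, 1.2), actor_height=1.7, char_height=1.7))
# → (0.4, 0.0, 1.2)
```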

However, there were two components of the character that we could not record with the motion capture system: the hand and finger movements, and the facial expressions. With the support of Prof. Melanie Beisswenger, these were created in Blender using keyframe animation. For the facial features we used shape keys, and for the eyeballs a controller created with the Auto-Rig Pro add-on. As the character’s face below the eyes is covered by a static mask, her expressions could only be conveyed through the eyes and eyebrows, which placed high demands on the animation.
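Shape-key animation of this kind boils down to blending per-vertex offsets into the basis mesh, scaled by an animated weight. A minimal sketch of that blend (the “brow up” key and its offsets are hypothetical examples, not the film’s rig data):

```python
# Minimal sketch of shape-key blending: each key stores per-vertex
# offsets from the basis mesh, and the animated weight scales that
# offset. Mesh data and key names here are illustrative.

def apply_shape_keys(basis, keys):
    """basis: list of (x, y, z) vertices; keys: list of (weight, offsets)."""
    result = []
    for i, (x, y, z) in enumerate(basis):
        for weight, offsets in keys:
            dx, dy, dz = offsets[i]
            x += weight * dx
            y += weight * dy
            z += weight * dz
        result.append((x, y, z))
    return result

basis = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
brow_raise = [(0.0, 0.0, 0.2), (0.0, 0.0, 0.1)]   # hypothetical "brow up" key
print(apply_shape_keys(basis, [(0.5, brow_raise)]))
# → [(0.0, 0.0, 0.1), (1.0, 0.0, 0.05)]
```

Keyframing only the weights, rather than the vertices themselves, is what keeps this workflow manageable for subtle eye and eyebrow acting.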
To make the collapse and the behaviour in the storm look as realistic as possible, the hanging elements of the satellite dish were implemented as a simulation. We took the dimensions from the Arecibo Observatory and used these parameters to calculate the mass of the respective fragments. Using simple wind force fields, we were able to simulate the interaction between the pillars and the storm. The pillars were fractured, i.e. broken up into small individual pieces in advance, which allowed us to better control their collapse.
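Deriving fragment masses from real-world dimensions is a simple density-times-volume calculation. A sketch under assumed values (the pillar dimensions, fragment count and concrete density below are placeholders, not the production’s actual parameters):

```python
import math

# Illustrative sketch of deriving rigid-body fragment masses from
# real-world dimensions: a cylindrical concrete pillar is pre-fractured
# into n pieces and each gets mass = density * volume / n.
# Dimensions and density are placeholder assumptions.

def fragment_mass(radius_m, height_m, n_fragments, density_kg_m3=2400.0):
    volume = math.pi * radius_m ** 2 * height_m
    return density_kg_m3 * volume / n_fragments

# e.g. a pillar of 1 m radius and 20 m height broken into 50 pieces:
mass = fragment_mass(1.0, 20.0, 50)
print(round(mass, 1))  # → 3015.9 (kg per fragment)
```

Plausible masses matter here because the wind force fields act against inertia: too-light fragments would flutter unrealistically in the storm.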

To save time, we also simulated the land masses flying away from planet B, whose surface consists of hexagons. To do this, we detached certain hexagons from the planet. By deactivating the gravity, we were able to cause the parts to detach using a single force field in the centre of the planet. We then converted the simulation into keyframes in order to make individual changes.
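The idea behind the fly-away effect can be sketched in a few lines: with gravity off, a single outward push from the planet’s centre accelerates each detached piece, and the per-frame positions are recorded (“baked”) so they can later be edited as keyframes. The force strength, step size and frame count below are illustrative:

```python
# Minimal sketch of the fly-away setup: gravity disabled, one force
# field at the planet's centre pushes detached pieces radially outward,
# and positions are sampled per frame ("baked") for keyframe editing.
# All values are illustrative, not the film's simulation settings.

def bake_outward(position, strength=2.0, frames=3, dt=1.0):
    """1D radial distance from the centre; returns one sample per frame."""
    velocity = 0.0
    baked = []
    for _ in range(frames):
        velocity += strength * dt        # constant outward push
        position += velocity * dt
        baked.append(round(position, 2))
    return baked

print(bake_outward(10.0))  # → [12.0, 16.0, 22.0]
```

Once the motion is baked to keyframes, individual hexagons can be retimed or nudged without re-running the whole simulation, which is exactly the time-saving the paragraph describes.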
Shading and rendering
By projecting the textures, the shader was easy to adjust, but only for objects that were not changed in perspective. Our character was therefore the most difficult part of the shading. It had to be able to move and the light had to behave accordingly. We set the light itself in three-dimensional space instead of painting it directly onto the textures in order to achieve realistic behaviour. CGI artist Kathrin Hawelka and cinematographer Moritz Rautenberg were particularly helpful in this process.
Overall, our desired look depends heavily on the shading, which has to combine the 2D (drawn) elements well with the 3D (animation/shading). We decided in favour of the LightningBoyShader, a layer-based shader. This has the advantage over conventional cel shaders that we can more easily control the influence of the light sources. The look is also defined by broken, irregular edges: on the objects themselves we could draw these broken edges directly in 2D, whereas the light edges had to be broken up in the shader. The shader also allows us to use light sources selectively, i.e. we can separate the surroundings from the character and light them individually. We received great support from our 3D mentor Berter Orpak with the complex shader setups. It was also important that our 3D objects matched the background, as the lighting moods had already been defined in the 2D paintings.
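The hard light/shadow edges of a drawn look come from quantising continuous shading into a few flat bands; a layer-based shader then lets each band (layer) be coloured and weighted per light. A minimal sketch of the banding step only, with illustrative band counts (this is the general cel-shading principle, not the LightningBoyShader’s internal setup):

```python
# Sketch of the cel-shading principle behind a toon look: continuous
# Lambert intensity is quantised into a few flat bands, producing hard
# light/shadow edges. Band count and vectors are illustrative.

def lambert(normal, light_dir):
    """Diffuse intensity for unit vectors: clamped dot product."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def cel_band(intensity, bands=3):
    """Quantise [0, 1] intensity into `bands` flat steps."""
    step = min(int(intensity * bands), bands - 1)
    return step / (bands - 1)

# Light from straight above a surface facing up → brightest band:
print(cel_band(lambert((0, 0, 1), (0, 0, 1))))  # → 1.0
# Grazing light → darkest band:
print(cel_band(lambert((0, 0, 1), (1, 0, 0))))  # → 0.0
```

Controlling each band separately, rather than the raw dot product, is what makes it easy to match the 3D lighting to moods already fixed in the 2D paintings.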
Another advantage of the LightningBoyShader is that it works with the Eevee real-time render engine integrated in Blender. Being able to see the “finished” image in real time while working offers a number of advantages. Above all, this has greatly accelerated the lighting and shading. Thanks to Eevee, we were also able to render intermediate states in full quality without any problems. We appreciated the fast rendering times more and more towards the end, as the complexity of the shader meant that “real time” sometimes turned into a good 15 seconds per frame. We rendered with a colour depth of 16 bits, which we had to specify early on, as all drawn textures had to be created with the same colour depth.
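To put that “15 seconds per frame” in perspective, a back-of-the-envelope calculation shows why it still felt fast; the film runtime used below is a hypothetical figure, not the film’s actual length:

```python
# Back-of-the-envelope render budget: even the quoted worst case of
# 15 s/frame in Eevee stays manageable. The film length is a
# placeholder assumption, not the actual runtime.

FPS = 24
SECONDS_PER_FRAME = 15          # worst case quoted for the heavy shader
film_minutes = 5                # hypothetical runtime

frames = film_minutes * 60 * FPS
hours = frames * SECONDS_PER_FRAME / 3600
print(frames, round(hours, 1))  # → 7200 30.0
```

At path-tracing speeds of minutes per frame, the same calculation would run into weeks of render time, which is why the real-time engine was such a comfortable fit.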

Compositing
The compositing part of our film was relatively small, as the shader imposed some restrictions, and due to time constraints we decided to do most of the compositing in Blender. The challenge lay mainly in the capsule scenes, where the beam is visible in the reflection of the capsule’s window. The problem: our real-time render engine could not render this reflection, so we had to add it afterwards. The final touches are often added to a film during compositing, but we had already been able to define our look through our drawings and did not have to make many adjustments afterwards.
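Adding a separately rendered reflection pass afterwards comes down to alpha-compositing it “over” the base image per pixel. A single-pixel sketch of the standard over operator for premultiplied RGBA (the pixel values are made up for illustration):

```python
# Minimal sketch of layering a reflection pass in compositing: the
# separately rendered reflection is alpha-composited "over" the base
# render per pixel. Single-pixel premultiplied RGBA values are
# illustrative only.

def alpha_over(fg, bg):
    """Standard 'over' operator for premultiplied RGBA pixels."""
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    inv = 1.0 - fa
    return (fr + br * inv, fg_ + bg_ * inv, fb + bb * inv, fa + ba * inv)

reflection = (0.2, 0.2, 0.25, 0.5)   # faint beam reflection, half opaque
window = (0.0, 0.1, 0.1, 1.0)        # base render of the capsule glass
result = tuple(round(c, 2) for c in alpha_over(reflection, window))
print(result)  # → (0.2, 0.25, 0.3, 1.0)
```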
Sound design and language
In our first animatics, we put together a provisional mix of sound effects and atmos, but it wasn’t until May 2022 that we really got to grips with the sound design and, above all, the film music. It was clear that we needed something epic, preferably in the “Hans Zimmer style”. Our producer, Felix Mann, contacted the film composer Victor Ardelean. After a meeting with Victor, where we explained our ideas for the mood of the music, he created the perfect composition for our film within just a few weeks, based on our three-minute blocking.
Foley sounds were created in a recording studio at the HFF. For example, we created footsteps on a concrete floor and the sound of the robot hand touching the glass biotope using jewellery rings and a beer mug. The voice recordings were also made in the studio by Moritz Segura Kanngießer and Ines Timmich. Sound mixer Stefan Möhl then combined the film music, Foley and voices into the 5.1 mix for the DCP, which was created by Martin Förster.

Conclusion
We worked on the films for two semesters and half a summer with a lot of support and were able to show our finished results at the VFX Reel 2022 at the HFF on 24 November 2022. We owe the planning of this event and other events within the VFX department to our team assistant Petra Hereth. After this very successful evening, we are looking forward to the next opportunities and festivals where we can present our film. We hope that all viewers will enjoy it, but also that the film will inspire them to rethink their own actions and how they treat the environment. Ultimately, we are still faced with the fact that there is no Planet B.
















