The year is 2029, and we live in an ageing society. What will the world look like when younger people can no longer pay for the pensions of older people? Will they still have to work? Who will look after them? These are the kinds of questions we asked ourselves when creating the story. In essence, it’s about an old, frail lady who earns her living by controlling a robot that collects rubbish. A bit like modern bottle collecting.
This woman, or rather her robot, then finds a tape recorder, which rekindles her love of music and dance. Instead of doing the boring, repetitive work, the robot she controls dances. She uses her working hours, somewhat subversively, to experience again what her situation now prohibits. To arrive at this idea, we first collected a large pool of ideas and small stories, sketched them out roughly and expanded on them.

Realisability was not yet a decisive factor at this stage; it was more about generating lots of ideas quickly. Thanks to this pool of ideas, we had a decisive advantage in the next phase: we could part with stories that didn’t work so well without much pain, while some discarded ideas still flowed into further development in the form of details. Over several rounds, the stories were enriched with character sketches and smaller scenes. In the end, the basis for the film was a human-controlled robot that collects rubbish at night. The original idea came from the carelessly discarded masks lying around on the streets during the coronavirus winter of 2021. The dystopian tale of poverty in old age and loneliness is given a positive twist, celebrating the power of memories, human resilience, music, dance and fun.
Concept
This development should also be reflected in the look. The aim was to create visually exciting images from the story. What should the film look like? What does the robot look like? What kind of city is it set in? What time of day is it? How does the positive twist manifest itself in the environment? During this phase, we mainly used the drawing programme Procreate to create sketches and images for inspiration. There was also a Miro board on which we could save and organise our ideas. Basically, we had a Pixar-like look in mind, and we set the bar high.

The robot went through many iterations. It had to collect rubbish as believably as possible while also being able to dance freely in a human way; the arc demanded that a cold machine develop human traits. We also decided on a design that was as humanoid as possible so that we could use motion capture in the animation later on. The robots from Boston Dynamics served as a great inspiration. New York, with its tall buildings and small neighbourhood parks, was our template for the urban design. The environment contrasts gloomy streets at night with a park at dawn, and an interior space also had to be created for the limited world of the old working woman.

Storyboard
Once the story and look for the film had been found, the next step was to combine them into a convincing storyboard. Together with Prof Michael Coldewey, we looked for key frames such as the encounter between the rubbish collector and the tape recorder, the old woman in the wheelchair or the dancing robot in front of the sunrise. Ideas were discarded, developed further and sometimes reintroduced.

We tried to break the story down to the most important plot beats without losing any of the substance. The key frames helped us to find effective shots, which we first sketched roughly and then drew in more detail in Procreate.

Animatic & pre-visualisation
Now it was time to convert the storyboard into a moving film. To begin with, we took the previously created shots and added slight movements and a rough sound concept using After Effects and Premiere Pro. The addition of the sound in particular helped us to get a better feel for the timing.
It also allowed us to decide what should be shown visually and what would be better told on the audio level. With this knowledge, we moved into three dimensions in the next step. For the previs, the sets were roughly blocked out in Blender, cameras were placed, animations were keyed by hand or taken from Mixamo, and the timings were carried over from the animatic.

Eevee was then used for rendering, and the result was cut in Premiere Pro. This allowed us to see which shots worked well in three dimensions and which needed to be framed differently or fundamentally changed. From there, we gradually cleaned up the previs, adjusted the camera movements, increased the level of detail in the environments and smoothed the animations until we could be sure that the film would come across to the viewer the way we wanted to tell it.
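As a hypothetical illustration (not our actual project files), switching a shot to Eevee and rendering it straight to a movie file for the previs cut takes only a few lines of Blender Python; paths and frame ranges below are placeholders:

```python
# Minimal sketch: render a previs shot with Eevee, straight to a movie file.
import bpy

scene = bpy.context.scene

# Eevee renders near real time, which is what made it suitable for previs.
scene.render.engine = 'BLENDER_EEVEE'
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 50  # half resolution is enough for timing checks

scene.frame_start = 1001                 # placeholder frame range
scene.frame_end = 1120
scene.render.filepath = "//renders/previs/shot_010"  # placeholder output path
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'

bpy.ops.render.render(animation=True)
```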

Pipeline & Workflow
Our great pipeline TD Jonas Kluger set up a ShotGrid-based pipeline for Blender for us, which made it possible not to get bogged down in chaos even on a project of this size. With almost 250 assets in total, a clearly organised structure was essential. From the assets we created three environments: grandma’s home, the city and the park. We could then load these into the respective shots again and again and be sure that continuity was maintained. Nevertheless, it was still possible to adjust the size and position of individual assets in certain shots.

We were also able to use the pipeline to change and publish assets and then update them in the environment or in the shot using the ShotGrid add-on in Blender. This made the division of labour within the team much easier and eliminated misunderstandings about versions or file names from the outset.
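Our ShotGrid add-on resolved versions for us, so the following is only a hypothetical sketch of the Blender mechanism underneath such a pipeline: every shot links (rather than appends) a published environment collection, so updating a publish updates all shots. Paths and collection names are made up:

```python
# Sketch: link a published environment collection into the current shot file.
import bpy

PUBLISH = "//publish/env_city/env_city_v012.blend"  # placeholder publish path

with bpy.data.libraries.load(PUBLISH, link=True) as (data_from, data_to):
    data_to.collections = [c for c in data_from.collections if c == "ENV_city"]

# Instance the linked collection into the shot; the instancing empty can still
# be moved or scaled per shot without touching the publish itself.
for coll in data_to.collections:
    inst = bpy.data.objects.new(coll.name, None)
    inst.instance_type = 'COLLECTION'
    inst.instance_collection = coll
    bpy.context.scene.collection.objects.link(inst)
```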
Modelling & Shopping
It was clear to us that the amount of work involved in modelling all the assets or an entire city would far exceed our capacities. That’s why we decided early on to only create the really important models ourselves and to buy in all the others.

As we didn’t want a photorealistic style, we had to search a little longer to find the right models. We finally decided on a package that already included an entire city. We built our hero models ourselves, namely the robot, the “router” and the station from which the robot emerges, with valuable tips from Helmut Stark, and we created the grandma with the help of Reallusion’s Character Creator. It was important to us that the robot look as if it had really been built to collect rubbish, while still having enough freedom of movement to dance. For the grandma, we changed the proportions of head and body to avoid ending up in the uncanny valley.
Texturing
Once the models were finished, it was time for some colour: we used Adobe Substance Painter to give the robot, station and router the right finish. Using textures with less detail brought us closer to the cartoon-like style we were aiming for.

A mix of procedurally generated and hand-drawn masks for wear and dirt emphasises the history of the models. It becomes very clear, for example, that the robot has been keeping the city streets clean for a long time and has picked up a few scratches along the way. We then imported all the textures created in Substance Painter back into Blender, and the models were ready for the next step.
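For illustration, hooking a typical Substance Painter export up to a Principled BSDF in Blender might look like the sketch below; the file names follow a common export preset and are assumptions, not our actual asset paths:

```python
# Sketch: wire Substance Painter PBR maps into a Blender material.
import bpy

mat = bpy.data.materials.new("MAT_robot")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def image_node(path, non_color=False):
    node = nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:  # data maps must not be colour-managed
        node.image.colorspace_settings.name = "Non-Color"
    return node

base  = image_node("//textures/robot_BaseColor.png")
rough = image_node("//textures/robot_Roughness.png", non_color=True)
metal = image_node("//textures/robot_Metallic.png", non_color=True)
nrm   = image_node("//textures/robot_Normal.png", non_color=True)

links.new(base.outputs["Color"], bsdf.inputs["Base Color"])
links.new(rough.outputs["Color"], bsdf.inputs["Roughness"])
links.new(metal.outputs["Color"], bsdf.inputs["Metallic"])

normal_map = nodes.new("ShaderNodeNormalMap")
links.new(nrm.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])
```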

Rigging
Our two characters, the grandma and the robot, finally wanted to move. For the grandma, we used the Blender add-on Auto-Rig Pro, and with the help of our lecturer Benc Orpak we generated a working rig quite quickly. Fortunately, this spared us the manual rigging process and therefore saved a lot of time. Things were a little different with our robot: although it has human-like proportions, some of its joints work a little differently, so we had to rig it manually. The aperture and the cylinders on the legs were particularly interesting, as they visualise the robot’s mechanical character well. For remapping the motion-capture data onto the respective rig, we were able to use Auto-Rig Pro for both models.

Once the mapping, i.e. the definition of which bone receives which keyframes, had been set, it could be reused again and again. In the end, a single click was enough to transfer all the mocap data to the target rig. Seeing the robot move almost by itself for the first time was a truly magical moment.
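This is not Auto-Rig Pro’s actual remap code, but the idea can be sketched in plain Blender Python: a saved bone mapping constrains the target rig to the mocap skeleton, and a bake turns the result into keyframes. All names and frame ranges here are placeholders:

```python
# Sketch: drive a target rig from a mocap skeleton via a saved bone mapping.
import bpy

BONE_MAP = {  # mocap skeleton -> robot rig (placeholder names)
    "Hips": "root",
    "Spine": "spine_01",
    "LeftUpLeg": "thigh.L",
    "LeftLeg": "shin.L",
    "RightUpLeg": "thigh.R",
    "RightLeg": "shin.R",
}

src = bpy.data.objects["RIG_mocap"]
dst = bpy.data.objects["RIG_robot"]

# Constrain each mapped target bone to its mocap counterpart...
for mocap_bone, rig_bone in BONE_MAP.items():
    con = dst.pose.bones[rig_bone].constraints.new('COPY_ROTATION')
    con.target = src
    con.subtarget = mocap_bone

# ...then bake the constrained motion to keyframes and drop the constraints.
bpy.context.view_layer.objects.active = dst
bpy.ops.nla.bake(frame_start=1001, frame_end=1240, only_selected=False,
                 visual_keying=True, clear_constraints=True,
                 bake_types={'POSE'})
```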
Motion capture shoot
When we were developing our story, it was already clear that we didn’t want to animate the dancing movements of the robot by hand, but to record them using motion capture. Katharina Hein, our producer, discovered Nicole, a professional rollerblader who showcases her roller dance skills on Instagram (@rollin_me_softly) and has even founded a Munich roller dance group (@munich_rollerdance_squad). Nicole was quickly enthusiastic about our film idea and was keen to lend her movements to our robot as a motion-capture actress.

A few days before the actual shoot, we made some preparations so that the motion capture could run smoothly. For example, we measured our digital sets so that we could recreate them in rudimentary form in the studio, and we developed a detailed shooting schedule for the three days of filming. For the capture itself, we used a suit from Xsens and the associated software MVN Animate. On the first day of filming, we mainly recorded Nicole’s dance movements on the roller skates. As Xsens is not an optical motion-capture system but one that works with acceleration sensors, we ran into the following difficulty: Xsens calculates the position of the suit in space from ground contact and the distance the performer covers while walking. Because the rather high roller skates removed that ground contact and the movement “floats” above the ground, we received no position data from the performer, only her body’s own movement data. This wasn’t a problem, however, as we were planning to manipulate the recorded routes to our liking during the animation phase anyway. In addition, we always filmed the entire set from two different perspectives with witness cams so that we could better reconstruct the movement in space later.
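A hypothetical sketch of what “manipulating the recorded routes” can mean in practice: since the suit delivers no position data, the travel of the skating robot can be keyed by hand on the armature object while the mocap keeps driving the limbs. The names and values below are placeholders:

```python
# Sketch: hand-keyed root motion on top of baked mocap.
import bpy

rig = bpy.data.objects["RIG_robot"]  # placeholder name

# A simple straight glide across the set; real paths were shaped per shot.
for frame, x in [(1001, 0.0), (1060, 4.5), (1120, 9.0)]:
    rig.location.x = x
    rig.keyframe_insert(data_path="location", frame=frame)
```

For curved skating paths, a Follow Path constraint on the armature object would do the same job.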

Animation
With the motion-capture data recorded, we were able to continue with the animation. First, all the selected takes were exported from MVN Animate as .fbx files. Retargeting in Blender was relatively simple using the Auto-Rig Pro add-on. Of course, the raw motion-capture animation was not precise enough, and we also wanted to change certain movements afterwards. For this we used the Blender add-on Animation Layers, which enabled us to do just that: the original mocap data sat on one layer, and we had the option of manipulating the movements on a second layer. This was particularly important in moments when the characters interacted with objects. We were accompanied by Prof Melanie Beisswenger, who not only provided practical support but also explained the basic theoretical principles of animation.
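We can’t reproduce the Animation Layers add-on here, but Blender’s built-in NLA system sketches the same layering idea: the mocap action sits on a base track, and a second strip set to additive blending carries the manual corrections. Rig and action names are placeholders, and the rig is assumed to already have animation data:

```python
# Sketch: additive animation layers via Blender's NLA tracks.
import bpy

rig = bpy.data.objects["RIG_robot"]
ad = rig.animation_data

base = ad.nla_tracks.new()
base.name = "mocap_base"
base.strips.new("mocap", 1001, bpy.data.actions["mocap_dance"])

fixes = ad.nla_tracks.new()
fixes.name = "hand_fixes"
strip = fixes.strips.new("fixes", 1001, bpy.data.actions["grab_offsets"])
strip.blend_type = 'ADD'  # offsets the mocap instead of replacing it
```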

Lighting
Before we started lighting ourselves, we first looked for mood images with colours and lighting that matched our ideas for Clean Aid. For example, we analysed still frames from Pixar films for their lighting design. From these images, we developed a lookbook and colour palettes for each of the three environments.

With this inspiration, we went about lighting our shots. We used various HDRIs, on the one hand as a “natural” light source for the scenes, and on the other to tell the story of the transition from the night sky to the morning mood.

In one shot, in which we show a time-lapse, we even used an HDRI sequence so that the different stages of the sky blend smoothly into one another. Of course, we didn’t stop at HDRIs, but set lights individually for each shot to achieve our desired look. With backlighting, for example, we were able to separate the characters from the background. These lights were rendered in Blender on separate layers so that we could adjust them again in compositing. Volumes in the shots made the robot’s light cone visible. Creative director and CGI artist Kathrin Hawelka actively supported us throughout.
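The shot-by-shot lighting itself doesn’t condense well into a snippet, but the sky transition does. A sketch with made-up file names: two HDRIs mixed in the world shader, with the mix factor keyframed so the environment moves smoothly from night to dawn:

```python
# Sketch: blend two HDRIs in the world shader, animated over the shot.
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

night = nodes.new("ShaderNodeTexEnvironment")
night.image = bpy.data.images.load("//hdri/night_sky.exr")
dawn = nodes.new("ShaderNodeTexEnvironment")
dawn.image = bpy.data.images.load("//hdri/dawn_sky.exr")

mix = nodes.new("ShaderNodeMixRGB")
links.new(night.outputs["Color"], mix.inputs["Color1"])
links.new(dawn.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], nodes["Background"].inputs["Color"])

# Full night at the start of the shot, full dawn at the end.
fac = mix.inputs["Fac"]
fac.default_value = 0.0
fac.keyframe_insert("default_value", frame=1001)
fac.default_value = 1.0
fac.keyframe_insert("default_value", frame=1240)
```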
Rendering
As we had two internal render engines at our disposal in Blender, we used both. We rendered with Eevee during the entire previs phase, which let us render almost in real time and accelerated our creative process enormously. However, as realistic interaction with light was still important to us, especially because of the robot’s light beam, we decided on Cycles as the render engine for the final images. To be as flexible as possible when compositing in Nuke, we used Blender’s own compositor to export a multilayer EXR sequence in which all render passes and masks were saved. For hero objects, such as the robot, the grandma and the boombox, we created one mask per shot. We realised that the denoiser works much better if we apply it not to the whole image but to each render pass individually. This allowed us to denoise the volume, for example, and reduce artefacts to a minimum.
At the same time, this procedure enabled us to save all passes in the EXR without noise. However, we also ran into the denoiser’s limits: asphalt, for example, was difficult to denoise. Because the fine reflections coming off its surface structure look very similar to digital noise, this desired structure was often turned into “mud”. Although we had to do without the denoiser in such cases, we were still able to reduce the render time significantly, ending up at an average of approx. 45 minutes per frame.
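A sketch of this per-pass setup, assuming Cycles and Blender’s built-in compositor; pass and socket names follow Blender’s defaults, and the layer selection is simplified:

```python
# Sketch: enable extra render passes and denoise one pass in isolation.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
vl = scene.view_layers["ViewLayer"]

# Every enabled pass becomes a layer in the multilayer EXR.
vl.use_pass_diffuse_direct = True
vl.use_pass_glossy_direct = True
vl.use_pass_emit = True
vl.cycles.denoising_store_passes = True  # albedo/normal guides for denoising

scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links
rl = nodes.new("CompositorNodeRLayers")

# Denoise a single pass rather than the finished beauty image.
dn = nodes.new("CompositorNodeDenoise")
links.new(rl.outputs["DiffDir"], dn.inputs["Image"])
links.new(rl.outputs["Denoising Normal"], dn.inputs["Normal"])
links.new(rl.outputs["Denoising Albedo"], dn.inputs["Albedo"])
```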
Compositing
With the exported EXR sequences including all layers, it was time for compositing. In Nuke, we used a “back-to-beauty” workflow to merge the layers back into the overall image. This gave us the opportunity to adjust each of the passes individually and manipulate them to our liking.
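This is not our actual comp script, but the core of a back-to-beauty rebuild is compact enough to sketch in Nuke Python: each light AOV is shuffled out of the multilayer EXR and the branches are summed, and grading a branch before the sum is what makes per-pass adjustments possible. The path and layer names are placeholders:

```python
# Sketch: rebuild the beauty from light AOVs in Nuke.
import nuke

read = nuke.nodes.Read(file="renders/shot_010/shot_010_####.exr")

AOVS = ["DiffDir", "DiffInd", "GlossDir", "GlossInd", "Emit"]

branches = []
for aov in AOVS:
    shuffle = nuke.nodes.Shuffle(label=aov)
    shuffle["in"].setValue(aov)  # pull one render layer into rgba
    shuffle.setInput(0, read)
    branches.append(shuffle)

# Sum the AOV branches back into the beauty image.
beauty = branches[0]
for branch in branches[1:]:
    plus = nuke.nodes.Merge2(operation="plus")
    plus.setInput(0, beauty)
    plus.setInput(1, branch)
    beauty = plus
```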

Depending on the environment, we set up a comp template that allowed us to push each shot even further towards our lookbook. To achieve the most authentic camera look possible, we used chromatic aberration effects in compositing, for example, and to further emphasise the dreamy overall mood of the dance sequence at sunrise, we worked with various kinds of glow effects. Compositing also gave us the opportunity to retouch render errors and other minor details. Senior compositing artist Shayan Sharegh was at our side with lots of helpful tips and tricks.
Grading
Then it was off to grading with the composited shots. Together with colourist Claudia Fuchs, we fine-tuned a few small details, completed our look and checked the settings for colour accuracy. A carefully chosen grain rounded off our sequences.
Sound design & music
No film without sound, and this is all the more true for our animated film. Very early on in the development process, we went looking for the right piece of music for our robot to dance its way through the streets to. As the film is set in 2029, the grandma in the wheelchair is 80 years old and her roller dance career peaked in her young adulthood, it was clear to us that the song should fit the 70s and the disco genre.
After extensive research, we came across “Dance Like Crazy” by the artist “Ikoliks”. Mykola Odnoroh (alias Ikoliks) was enthusiastic about the idea of his piece of music making our robot dance, and that’s how the collaboration came about.
It is not only the music that became an elementary component of our animated film, but also the sound design. Sound engineer Andreas Goldbrunner and artistic collaborator Rodolfo Anes Silveira took the rough sound design we had created for the previs as a basis and used it to conjure up an auditory backdrop that brought our environments and the movements of our rusty robot to life.
Retrospective
And voilà, about a year later it was finished, our first animated short film. Who would have thought that it could take so long to develop a coherent story, to go through all the production steps of the animation pipeline until you end up with a finished film of three minutes and fifty-five seconds? We certainly didn’t – but it was worth it. It’s been a very instructive year with many hurdles and challenges overcome, so we can now look back on Clean Aid with pride and look forward to sharing it with as many people as possible.

Producers’ Comment by Katharina Hein and Felix Mann
After the exciting and inspiring lecture on VFX by Michael Coldewey in the first semester, we were gripped by enthusiasm and curiosity for VFX producing. The offer to support our year’s animated films gave us the opportunity to get to know a completely new way of producing. As students in our first semester, we had no prior experience in the production of animated films, so we began a period of constant learning about the workflow, the software required and how long render times can really take. One of the highlights of the production for both of us was the motion capture shoot, where we could make the best use of our experience from previous shoots. After all, a motion capture shoot is essentially only slightly different from a live-action film shoot. As producers, we tried to take on all organisational tasks and be available at all times to answer questions. After the shoot is before the shoot, and we are very much looking forward to producing the VFX students’ first live-action film together with WennDann Film next year. We would like to thank the entire VFX department and especially Ines, Franzi, Alex, Valentin, Hannes and Felix for this trust.
Team
- Directors: Valentin Dittlmann, Hannes Werner, Felix Zachau
- Producer: Katharina Hein
- Motion Capture Actors: Nicole Adamczyk, Katharina Hein, Hannes Werner
- Music: Mykola Odnoroh (Ikoliks)
- Project Supervision: Prof. Jürgen Schopper
- Project Consultants: Benc Orpak, Rodolfo Anes Silveira
- VFX Pipeline TD: Jonas Kluger
- Line Producer: Ina Mikkat
- Team Assistant to Line Producer: Jenny Freiburger
- Team Assistant: Petra Hereth
- Scheduling: Beate Bialas, Sabina Kannewischer
- Technical Support: Benedikt Geß, Florian Schneeweiß
- Studio Management: Peter Gottschall, Andreas Beckert
- Conforming: Martin Foerster
- Colour grading: Claudia Fuchs
- Sound Design & Re-Recording: Andreas Goldbrunner
- Lecturers: Prof. Melanie Beisswenger (Animation), Prof. Michael Coldewey (Storyboarding), Kathrin Hawelka (Lighting), Benc Orpak (Rigging), Moritz Rautenberger (Camera work), Shayan Sharegh (Compositing), Helmut Stark (Modelling & Texturing)
- Thanks to: Jonas Bartels, Franziska Bayer, Christian Gessner, Alexander Hupp, Christoph Kühn, Malte Pell, Jonas Potthoff, Nicolas Schwarz, Tobias Sodeikat, Ines Timmich