The project “Ballerina” is a 30-second CG animation, Kay John Yim’s first personal project featuring an animated, photorealistic CG character, staged in a large baroque round hall. The ballerina was created mainly in Character Creator, animated in iClone, and rendered with Redshift in Cinema4D.

Yim’s growing passion for designing architecture with technology gradually led him to become a CGI artist, developing visuals that serve not only as client presentations but also as a communication tool for the design and construction team. Since the COVID lockdown in 2021, he has taken CG courses and has gone on to win more than a dozen CG competitions.
This making-of tutorial article is an abridged version of “Ballerina: A CGI Fantasy”, written by Kay John Yim. The full version can be found at the Fox Renderfarm News Centre.
The making of Ballerina
The animation is a representation of the inner conflicts I have with all artistic endeavours, both metaphorical and literal. Ballet, an art form that is known to have strict beauty standards and is highly susceptible to public criticism and self-criticism, is the metaphor for my daily professional and artistic practice. As an architect, I work on architectural visualisations during the day, where every detail is scrutinised by my colleagues, senior architects and clients. As an artist, I work on my own CG projects at night, doing hundreds to thousands of iterations to find the perfect composition and colour scheme. No matter how much I grow in my professional and artistic skills, the inner struggle never goes away.

The guide is divided into 4 main parts:
● The architecture
● The character
● The animation
● The rendering
I used the following software, among others:
● Rhino
● Moment of Inspiration 4 (MOI)
● Cinema4D (C4D)
● Redshift (RS)
● Character Creator (CC)
● iClone
● ZBrush & ZWrap
● XNormal
● Marvelous Designer 11 (MD)
● Houdini
1. The architecture
My main software for architectural modelling is Rhino. There are many ways to approach architectural modelling, but the main advantage of Rhino over other popular DCCs such as Cinema4D (C4D) or Houdini is its ability to edit very detailed curves in large quantities.
As an architect, every model I built started with a curve, usually a wall, cornice or plinth section drawn along another curve taken from a plan. Rhino’s command list may seem overwhelming at first glance, but I used almost exclusively the following dozen or so commands to turn curves into 3D geometry:
● Rebuild
● Trim
● Blend
● Sweep
● Extrude
● Sweep 2 Rails
● Flow Along Surface
● Surface from Network of Curves
The key to architectural modelling is to use references wherever possible. I always have PureRef open in the bottom right-hand corner of my screen to make sure I am modelling at the correct proportions and scale; my references usually include actual photos and architectural drawings.

For this particular project, I used the hunting room of the Amalienburg in Munich as the primary reference for the architecture.

Although the architecture consisted of three parts – the rotunda, the corridor and the end wall – all three were based on the same module. I therefore first modelled a wall module consisting of a mirror and a window, then duplicated it and bent it along a circle to obtain the walls of the rotunda.

The module was reused for both the corridor and the end wall to save time and (rendering) memory. As I had built up a library of architectural profiles and ornaments over the last year, I was able to reuse and recycle profiles and ornaments for modelling the architecture.
Modelling ornaments can be a difficult task, but with a few ornaments modelled, I simply duplicated and geometrically rearranged them to create unique shapes.
All objects within Rhino were then assigned to different layers based on the material; this made material assignment much easier later in C4D.
Note: The best way to familiarise yourself with Rhino navigation is to model small objects; Simply Rhino offers a great beginner’s series on modelling a teapot in Rhino. For those who get stuck, pre-made ornaments can be purchased from 3D model shops such as Textures.com, and some ornament makers offer free models for download on Sketchfab and 3dsky.
Exporting from Rhino to C4D
Rhino is primarily a NURBS (Non-Uniform Rational B-Splines) package; although NURBS models represent curve and surface data very accurately, most render engines and DCCs do not support NURBS.
For this reason, I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh.
MOI provides the best conversion of NURBS to quad mesh (compared to Rhino or other DCCs) – it always provides a flawless mesh that can then be easily edited or UV mapped for rendering.

Import into C4D
Importing the FBX file into C4D was relatively easy, but there were a few things I paid attention to, specifically the import settings, model orientation and file units, listed below in order of execution (a scripted sketch of these steps follows the list):
1. Open a new project in C4D (project unit in cm);
2. Merge the FBX file into the project;
3. Select “Geometry” and “Material” in the merge panel;
4. Change the orientation of the imported geometry (P) by -90 degrees in the Y-axis;
5. Use the “AT Group All Materials” script to automatically organise Rhino materials into different groups.
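For repeated imports, these steps can be scripted. Below is a minimal sketch using C4D’s Python API; the file path is hypothetical, the merged object is assumed to land at the top of the hierarchy, and the -90 degree pitch rotation mirrors step 4 (see also the note on axis orientation further down):

import c4d
from c4d import documents, utils

def merge_rhino_fbx(doc, path):
    """Merge an FBX exported from Rhino and fix its orientation.

    Sketch only: 'path' is a hypothetical location, and the -90 degree
    pitch (P) compensates for Rhino's Z-up vs C4D's Y-up axes.
    """
    # Merge geometry and materials into the active document
    documents.MergeDocument(
        doc, path,
        c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)

    # Rotate the merged top-level object by -90 degrees in pitch;
    # rotation vectors store (H, P, B) as (x, y, z), in radians
    obj = doc.GetFirstObject()
    if obj is not None:
        rot = obj.GetRelRot()
        rot.y = utils.DegToRad(-90)
        obj.SetRelRot(rot)
    c4d.EventAdd()

merge_rhino_fbx(documents.GetActiveDocument(), '/path/to/architecture.fbx')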


I modelled half of the architecture in Rhino and then mirrored it as an instance in C4D, as everything is symmetrical.

The floor (Versailles parquet tiles) was modelled using the photo-texturing method popularised by CG artist Ian Hubert. I applied a photo of Versailles parquet tiles as a texture to a plane, then sliced the plane with the “Knife” tool to get variations in reflective roughness along the tile joints. This also allowed me to add subtle colour and dirt variations using the Curvature node in Redshift.
The floor tile was then placed under a cloner to duplicate it and stretch it across the entire floor.
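Scripted, that duplication step might look like the sketch below, which creates a MoGraph Cloner in grid mode via C4D’s Python API; the tile object name is a hypothetical, and the MoGraph symbol names are assumed from the Cloner’s object description:

import c4d

CLONER_ID = 1018544  # MoGraph Cloner object ID

def clone_floor_tile(doc, tile_name='parquet_tile'):
    # Find the modelled tile ('parquet_tile' is a hypothetical name)
    tile = doc.SearchObject(tile_name)
    if tile is None:
        return

    # Create a Cloner in grid mode and re-parent the tile under it;
    # the Cloner then duplicates its child across the grid
    cloner = c4d.BaseObject(CLONER_ID)
    cloner[c4d.ID_MG_MOTIONGENERATOR_MODE] = c4d.ID_MG_MOTIONGENERATOR_MODE_GRID
    doc.InsertObject(cloner)
    tile.Remove()
    tile.InsertUnder(cloner)
    c4d.EventAdd()

clone_floor_tile(c4d.documents.GetActiveDocument())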

Note: C4D and Rhino use different Y and Z orientations, so the FBX exported directly from Rhino must be rotated in C4D.
Architectural shading (Cinema4D Redshift)
Since I had grouped all the meshes by material in advance, assigning materials was as easy as dragging and dropping them onto the material groups with cubic or tri-planar mapping. I used Textures.com, the EMC material pack from Greyscalegorilla and Quixel Megascans as base materials for all my shaders.
For ACES to work correctly in Redshift, each texture must be manually assigned to the correct colour space in the RS Texture Node; in general, diffuse/albedo maps belong to “sRGB” and the rest (roughness, displacement, normal maps) to “Raw”. My architectural shaders were mostly a 50/50 mix of photo texture and “dirt” texture to add an extra touch of realism.
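As a quick illustration of that convention, here is a hedged Python helper (the keyword list is my own, not part of Redshift) that maps a texture’s filename to the colour space it should be given in the RS Texture node:

# Illustrative only: the sRGB-vs-Raw convention described above.
# Extend the keyword list to match your own naming scheme.
COLOR_SPACE_BY_MAP = {
    'diffuse': 'sRGB', 'albedo': 'sRGB', 'basecolor': 'sRGB',
    'roughness': 'Raw', 'displacement': 'Raw', 'normal': 'Raw',
}

def rs_color_space(filename: str) -> str:
    """Return the colour space an RS Texture node should use for a map."""
    name = filename.lower()
    for keyword, space in COLOR_SPACE_BY_MAP.items():
        if keyword in name:
            return space
    return 'Raw'  # data maps must stay linear, so Raw is the safe default

assert rs_color_space('ballerina_skin_albedo.tif') == 'sRGB'
assert rs_color_space('parquet_roughness.exr') == 'Raw'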
2. The character
The underlying character was created in Character Creator 3 (CC3) using the Ultimate Morphs and SkinGen plugins, both of which are very user-friendly and offer self-explanatory parameters.
Ultimate Morphs offered precise sliders for each bone and muscle size of the character, while SkinGen offered a wide range of presets for skin colour, skin texture and make-up. I also used CC3’s Hair Builder to create a game-ready hair mesh for my character.

Facial texturing
The face was one of the most important areas of the CG character and required special attention. The best workflow I found for adding photorealistic detail was the “killer workflow” using a VFace model from Texturing.XYZ together with ZWrap.
VFace is a collection of state-of-the-art photogrammetric human head models produced by Texturing.XYZ; each VFace comes with 16K photoscanned skin textures plus displacement and utility maps. ZWrap is a ZBrush plugin that automatically fits an existing topology to a custom model.
Using the killer workflow, I could transfer all the VFace details to the CC3 head model once the two mesh shapes were aligned.
My customisation of the “killer workflow” can be described as follows:
1. Export the T-pose character from CC3 to C4D;
2. Delete all polygons of the CC3 character except the head;
3. Export the CC3 head model and the VFace model to ZBrush;
4. Use the Move/Smooth brushes to fit the VFace model to the CC3 head model as closely as possible;
5. Start ZWrap, then place and adjust as many matching points as possible, especially around the nose, eyes, mouth and ears;
6. Let ZWrap process the matching points;
7. ZWrap should then output a VFace model that perfectly matches the CC3 head model;
8. Load both models into XNormal and bake the VFace textures onto the CC3 head model.

Note: The full “killer workflow” tutorial is on the official Texturing.XYZ YouTube channel: VFace – Getting Started with Amy Ash. I recommend saving the matching points in ZWrap before processing. I also recommend baking the VFace maps individually in XNormal, as they are very high resolution and could cause XNormal to crash if baked in a batch.
Skin shading (Cinema4D Redshift)
Once I had finished the XYZ texture maps, I exported the rest of the character’s texture maps from CC3. I then imported the character into C4D and converted all the materials into Redshift materials.
Unfortunately, at the time of writing, Redshift did not yet support Randomwalk SSS (a very realistic and physically accurate model for subsurface scattering that can be found in other renderers such as Arnold), so I had to do a lot more tweaking when rendering the skin.
The three layers of subsurface scattering were driven by a single diffuse texture with different “Color Correct” adjustments. The head shader was a mix of the CC3 textures and the VFace textures; the VFace multi-channel displacement map was blended with the CC3 “microskin” displacement map.

A “Redshift Object” tag was assigned to the character to enable displacement; only then did the VFace displacement become visible in renders.

Hair shading
After experimenting with C4D Ornatrix, Maya XGen and Houdini, I decided that using the baked hair mesh from CC3 was the most efficient option overall for the “Ballerina” project. I used a Redshift “glass” material with the CC3 hair texture maps fed into the “Reflection” and “Refraction” colour slots, since real hair reacts to light like tiny glass tubes.
Note: For anyone interested in making CC3 hair even more realistic, CGcircuit has a great tutorial on creating and simulating hair.

3. The animation
Character animation: iClone
I then exported the CC character to iClone for animation. I considered several options for achieving realistic character animation:
1. Using mocap data from a catalogue (Mixamo, Reallusion ActorCore);
2. Hiring a mocap studio to create a customised mocap animation;
3. Using a mocap suit (e.g. Rokoko or Xsens) for customised mocap animation;
4. Conventional keyframing.
After experimenting with various off-the-shelf mocap clips, I realised that Mixamo’s mocaps were far too generic and mostly looked very robotic; Reallusion ActorCore had some very realistic movements, but nothing that exactly fitted the project. Since I had no budget and very specific requirements for the character’s movements, options 2 and 3 were out of the question. That left classic keyframing.
First, I recorded videos of ballet performances and laid them out frame by frame in PureRef. Then I overlaid the PureRef reference (at half opacity) over iClone and matched each joint of the character to the reference using “Edit Motion Layer”.


The animated characters were then exported to Alembic files.

Note: While the final piece shows the ballerina in slow motion, my original idea was to keyframe a 20-second ballet dance at full speed, which I quickly realised was a bad idea for several reasons:
1. Slow motion lets many frames be interpolated, whereas real-time movement needs many individually posed frames and therefore far more fine-tuning;
2. More frames meant more rendering problems (flickering, tessellation issues, etc.).
As this was my first character-animation project, I decided to create a slow-motion sequence instead: two unique poses with 160 frames of motion each.
Clothing simulation
The clothing simulation was by far the most difficult part of the project. The two main cloth sims/solvers I considered were Marvelous Designer (MD) and Houdini Vellum.
While Houdini Vellum was much more flexible and reliable than Marvelous Designer, I personally found it far too slow and therefore impractical without a farm: one frame of fabric simulation could take up to 3 minutes in Houdini Vellum versus 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB RAM.
While fabric simulation in MD is generally much faster to set up than in Houdini Vellum, it was not as easy as I had imagined: simulated garments in MD always had some kind of error, such as jittering fabric, fabric clipping through the character, or outright distortions. Below are some of the settings I tweaked to minimise these problems:
1. Using “Tack” to attach parts of the garment to the character;
2. Increasing the “Density” and “Air Damping” of the fabric to stop the garment moving too quickly and drifting out of position;
3. Simulating parts of the garment in isolation; this is not physically accurate, but it let me iterate and debug much faster.
In addition to the above, I reduced the “Gravity” to achieve a slow-motion look.

Note: The official Marvelous Designer YouTube channel has many livestreams on clothing modelling, which I found very helpful for learning MD. Alternatively, many 3D garments are available online (especially on the official Marvelous Designer website and the ArtStation Marketplace); I have used these as the basis for many of my projects.
MD is extremely crash-prone. Also, there is a bug in both MD10 and MD11 that prevents simulated garments from being saved 90% of the time, so you should always export simulated garments as Alembic files and not rely on MD to save the simulation.
Cleaning up the simulation
After dozens of simulations, I imported the Alembic files exported from MD into Houdini, where I did a lot of manual cleanup (a scripted sketch of the node chain follows the list), including:
1. Manually fixing intersections between fabric and character with “Soft Transform”;
2. Reducing simulation noise with “Attribute Blur”;
3. Blending the best simulations from different Alembic files together with “Time Blend”.
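For reference, the same cleanup chain can be rebuilt in script form. Below is a minimal sketch via Houdini’s hou module, assuming stock SOP node names; the Alembic path and node names are hypothetical, and the actual Soft Transform fixes would still be made interactively:

import hou

# Build a geometry container and import the MD Alembic cache
geo = hou.node('/obj').createNode('geo', 'md_cleanup')
abc = geo.createNode('alembic', 'md_import')
abc.parm('fileName').set('$HIP/sim/tutu_sim.abc')  # hypothetical path

# 1. Soft Transform for manual fixes on intersecting regions
soft = geo.createNode('softxform', 'fix_intersections')
soft.setInput(0, abc)

# 2. Attribute Blur on point positions to reduce simulation noise
blur = geo.createNode('attribblur', 'reduce_noise')
blur.setInput(0, soft)
blur.parm('attributes').set('P')

# 3. Time Blend to blend between cached takes
blend = geo.createNode('timeblend', 'blend_takes')
blend.setInput(0, blur)
blend.setDisplayFlag(True)
blend.setRenderFlag(True)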
Alternative to clothing simulation
For those frustrated by Houdini Vellum’s impractical simulation times and MD’s errors, an alternative is to attach the garment directly to the character’s skin in CC3, a technique most commonly used in game production.

Note: Reallusion’s official guide to creating game-ready clothing can be found here.
Baking and shading garments
Once I was done with the fabric simulation in MD and the cleanup in Houdini, I imported the Alembic file into C4D. MD Alembic files always appear in C4D as a single Alembic object without selection sets, which makes material assignment impossible.
This is where C4D baking came into play, a process that converts the Alembic file into a C4D object with PLA (Point Level Animation):
1. Drag the Alembic object into the C4D timeline;
2. Switch to the “Functions” menu;
3. Select “Bake Objects”;
4. Select “PLA”;
5. Bake.
These steps gave me a baked C4D object from which I could easily select polygons and assign multiple materials using selection sets. I then exported an OBJ file with materials from MD, imported it into C4D and dragged its selection sets directly onto the baked garment object; this eliminated the need to reassign the materials manually in C4D.
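In script form, assigning a material to one of those selection sets amounts to creating a texture tag restricted to a polygon selection tag. Here is a minimal sketch using C4D’s Python API; the object, material and selection names are all hypothetical:

import c4d

def assign_to_selection(obj, mat, selection_name):
    # Create a texture tag and restrict it to a named polygon selection,
    # mirroring the drag-and-drop assignment described above
    tag = obj.MakeTag(c4d.Ttexture)
    tag[c4d.TEXTURETAG_MATERIAL] = mat
    tag[c4d.TEXTURETAG_RESTRICTION] = selection_name
    c4d.EventAdd()

doc = c4d.documents.GetActiveDocument()
garment = doc.SearchObject('tutu_baked')        # hypothetical object name
sequins = doc.SearchMaterial('sequin_fabric')   # hypothetical material name
if garment and sequins:
    assign_to_selection(garment, sequins, 'sequins_sel')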
I used a mixture of linen texture maps (from Quixel Megascans Bridge) and Redshift Car Shader to simulate the sequin fabric found on many professional ballet tutu dresses.

WARNING: Do not use AO or Curvature nodes in materials for simulated clothing (or other deforming objects), as they can cause flickering artefacts in the final render.
4. Rendering
Lighting and environment
Although I tried to keep the lighting as minimal as possible, the night-time setting of “Ballerina” required a lot of tinkering.
The night-time HDRI didn’t provide enough ambient light for the interior, and the chandelier bulbs were far too dim to act as the main light source. In the end, I placed an invisible spotlight under the central chandelier, plus an area light restricted to the architectural ornaments. Together they provided an extra level of indirect light, giving just the right amount of illumination without destroying the moody atmosphere.
I also added a “Redshift Environment” with “Maxon Noise” multiplied along the Z-axis to add depth to the scene. Outside, I scattered two variations of dogwood trees using a C4D “Matrix” object and lit them from the ground to create additional depth. The scene lighting comprised:
1. Dome light (night HDRI) x 1
2. Chandelier mesh lights x 3
3. Spotlight (centre) x 1
4. Exterior area lights x 4
5. Area light under the chandelier (include list: architectural ornaments only)

Note: The trees were generated with SpeedTree. Perfecting lighting takes a lot of practice; in addition to my daily CG practice, I spent a lot of time watching film clips. I was very inspired by Roger Deakins’s lighting and cinematography, for example, as well as Wes Anderson’s composition and colour combinations.
Camera movements
All my camera movements were very subtle. This included tracking shots, camera pans and rotations, all controlled with Greyscalegorilla’s C4D plugin Signal. I personally prefer using Signal because of its non-destructive nature, but old-school key-framing would also work very well for similar camera movements.
Draft renderings
Once the character animation, cloth simulation and camera movements were finalised, I started with low-resolution test renders to make sure there would be no surprises in the final renders:
1. Flipbook (OpenGL) renders to check that the timing of the animations was right;
2. Rendering a complete sequence at low resolution and low samples to make sure there were no glitches;
3. Rendering high-resolution (2K), high-sample still images with AOVs (diffuse, reflection, refraction, volume) to check what contributed most to any remaining noise;
4. Submitting test renders to Fox Renderfarm to make sure the final renders would match my local renders.
This process took over 2 months of repeatedly creating renders and making corrections.



Final renderings & denoising
For the final renders, I used relatively high render settings, as interior scenes in Redshift are generally prone to noise.

I also had motion blur and bokeh turned on for the final renders; in general, motion blur and bokeh look better (more physically accurate) rendered in-engine than added in compositing.
Half of the final 2K sequence was rendered on a local workstation and the rest on Fox Renderfarm, totalling about 6,840 hours of render time across two RTX 3090 machines. I used Neat Video to denoise the final shot, while the close-ups were cleaned up with Altus Single (in Redshift).
Note: Always switch off “Random Noise Pattern” under Redshift “Unified Sampling” when using “Altus Single” for denoising.
Redshift GI trick
Redshift’s GI irradiance cache calculation can be quite costly; my final renders averaged 5 minutes of irradiance caching time per frame.
V-Ray’s IR/LC settings have an option called “Use camera path”, designed specifically for shots where the camera moves through a still scene: with it enabled, V-Ray calculates only one frame of GI cache for an entire sequence. Mimicking V-Ray, I used the following motion blur settings to calculate the first frame of the irradiance cache:

That single irradiance cache was then used to render the entire sequence. Two shots of the project were rendered with one GI cache each, making the overall render time about 10% faster.
Note: The GI trick only suits shots with very little movement. For example, when I applied it to the two close-up shots of the “Ballerina” project, I got slight blotches and ghosting on the character’s skin.
Conclusion
Working on the project for months gave me a new appreciation for traditional character animators – I never realised how much effort goes into creating character animations and how many subtle details are required to bring convincing CG characters to life.
Although I wouldn’t describe myself as a character artist, I believe that character animation is very effective in bringing CG environments to life and will therefore continue to be an essential part of my personal CG work in the future.
More information:
– Kay John Yim’s personal website https://johnyim.com/
– Kay John Yim’s ArtStation https://www.artstation.com/johnyim
– Character Creator https://www.reallusion.com/de/character-creator/download.html
– iClone https://www.reallusion.com/de/iclone/testversion.html
– Reallusion https://www.reallusion.com/de