Alan Wake’s cut scenes feature highly detailed facial expressions. Philip Weiss, Managing Director at metricminds, explains how his motion capture studio in Frankfurt gave the hero his expressive face.
DP: Mr Weiss, what was special about this production?
Philip Weiss: For Alan Wake, we took our Embody Technology to Arizona and worked directly with Remedy for five days straight. We captured over 40 minutes of animation – in effect, we created the facial animation for all of the cut scenes in one go.
DP: What did the workflow look like?
Philip Weiss: First, of course, we set our markers. That takes time, because there are always between 1,200 and 2,000 markers on the face, covering the entire geometry to be transferred. This means we have to know in advance what the character looks like so that we can make use of all the relevant polygons. Although an average of 1,600 markers may seem like a lot, markerless techniques do not yet give us the precision we want. And as long as the actors don’t complain, we’d rather go for quality.
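The relationship Weiss describes – tracked markers covering the facial geometry so they can drive the relevant polygons of the target mesh – can be sketched as a simple nearest-vertex assignment. This is purely our own illustration of the principle, not metricminds’ actual pipeline:

```python
def assign_markers_to_vertices(markers, vertices):
    """Map each tracked marker to its nearest mesh vertex.

    markers, vertices: lists of (x, y, z) tuples in the same space.
    Returns, for each marker, the index of the closest vertex,
    so marker motion can later be transferred to that vertex.
    """
    def dist2(a, b):
        # Squared Euclidean distance (no sqrt needed for comparison).
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return [min(range(len(vertices)), key=lambda i: dist2(m, vertices[i]))
            for m in markers]

# Toy example: three face vertices, two markers near two of them.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
markers = [(0.9, 0.1, 0.0), (0.1, 0.8, 0.0)]
print(assign_markers_to_vertices(markers, vertices))  # → [1, 2]
```

In a real system each of the 1,200–2,000 markers would drive a region of vertices with blended weights rather than a single point, which is why the character’s geometry must be known before the markers are placed.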
DP: How long does it take to apply the markers?
Philip Weiss: Not long compared to “classic” special effects with latex and masks. We allow around 45 minutes per face in order to capture the entire geometry optimally. However, this can vary greatly depending on the character. Alan Wake, the main character, for example, is captured much more precisely than most of the enemies. Although, with around 50 bones, their facial rigs are far more detailed than those of the usual game zombies, and they have a wide range of facial expressions.
DP: What do you think of other techniques that do without markers and sometimes even without multiple cameras? After all, there are software developers who more or less extract capture data from ordinary 2D footage.
Philip Weiss: That is also a very interesting development. I can well imagine that a wealth of opportunities is already opening up, particularly for analysis and scientific use. But for games, advertising and film, the technology is not yet mature. We can actually do that already – it’s just that a person has to adjust each point’s depth very precisely by hand, and that’s where it becomes uneconomical and time-consuming again. But I suspect that in less than ten years we will have software that can do this automatically, correctly assigning and projecting facial expressions as well as movement and depth.
DP: metricminds develops most of its own tools. Where do you place the emphasis?
Philip Weiss: Definitely on the workflow and the software – the hardware as such is essentially finished and meets current requirements. The computers are fast enough, the cameras offer sufficient resolution and the studios are well equipped.
But there is still a lot of room for improvement in the pipeline and in skeleton solving. Although we can point 32 cameras at the actor, our pipeline and software are not yet able to process everything in one go – which would, of course, open up incredible possibilities for animated film in general. Just think of recording classically trained actors, with facial expressions and posture! We have made great progress in skeleton solving, but there is still a lot to do. Although we capture and calculate the data very well, retargeting – reworking the skeleton and the markers in the peripheral areas by hand – still takes too much time. If we want to move motion capture forward, we have to make it faster, first and foremost.
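The retargeting bottleneck Weiss mentions comes from replaying captured joint rotations on a skeleton whose proportions differ from the actor’s. A toy planar forward-kinematics sketch shows why naive rotation transfer needs manual cleanup – the joints end up in different world positions on the new skeleton. This is our own simplified illustration, not metricminds’ solver:

```python
import math

def fk_positions(bone_lengths, joint_angles):
    """Forward kinematics for a planar joint chain.

    bone_lengths: length of each bone in the chain.
    joint_angles: each joint's local rotation (radians) relative to its parent.
    Returns the world-space (x, y) positions of all joints, root first.
    """
    x = y = 0.0
    angle = 0.0
    positions = [(x, y)]
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta              # accumulate parent rotations down the chain
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Captured pose: shoulder rotated 90 degrees up, elbow bent 90 degrees back.
angles = [math.pi / 2, -math.pi / 2]

# Replaying the same local rotations on a target skeleton with different
# bone lengths moves every joint to a different world position -
# which is exactly what retargeting then has to clean up.
print(fk_positions([1.0, 1.0], angles))   # source skeleton
print(fk_positions([1.5, 0.8], angles))   # target skeleton, same rotations
```

Production retargeting works in 3D with full rotation hierarchies and constraint solving, but the core issue is the same: identical local rotations on different proportions do not reproduce the original motion exactly.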
DP: Where does metricminds stand today?
Philip Weiss: At the beginning – every day. We have continuously expanded our capabilities over the past few years. When we started in 2001, the focus was exclusively on motion capture. But given the number of projects we had in this area, we gradually added data post-processing and motion editing. Today, we have the entire 3D workflow in-house and can produce intros, trailers and cut scenes from start to finish based on the CryEngine and the Unreal Engine.
DP: What is metricminds currently working on?
Philip Weiss: Our latest project is “Bulletstorm”, in collaboration with the games developer “People Can Fly”, which realised “Painkiller” and “Gears of War”. We want to make the cut scenes for this fun first-person shooter as beautiful as possible, with great action sequences. The first trailer can already be seen at http://www.bulletstorm.com.
DP: If someone wanted to familiarise themselves with the field of motion capturing, where would you send them?
Philip Weiss: The motion capture scene is actually quite well organised. One of the biggest forums I know of is www.motioncapturesociety.com – and even that forum is not particularly busy. If you want direct contact with people in the field, I recommend the social networks. For example, there is a “Motion Builder” group on LinkedIn.