<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="https://digitalproduction.com/wp-content/plugins/xslt/public/template.xsl"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:rssFeedStyles="http://www.wordpress.org/ns/xslt#"
>

<channel>
	<title>character rigging - DIGITAL PRODUCTION</title>
	<atom:link href="https://digitalproduction.com/tag/character-rigging/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitalproduction.com</link>
	<description>Magazine for Digital Media Production</description>
	<lastBuildDate>Fri, 20 Feb 2026 13:34:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">236729828</site>	<item>
		<title>Cascadeur on physics, AI and control</title>
		<link>https://digitalproduction.com/2026/02/18/cascadeur-on-physics-ai-and-control/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Wed, 18 Feb 2026 09:00:38 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[AI animation]]></category>
		<category><![CDATA[Cascadeur]]></category>
		<category><![CDATA[character rigging]]></category>
		<category><![CDATA[FBX]]></category>
		<category><![CDATA[Filament]]></category>
		<category><![CDATA[FK]]></category>
		<category><![CDATA[IK]]></category>
		<category><![CDATA[inbetweening]]></category>
		<category><![CDATA[interpolation]]></category>
		<category><![CDATA[machinelearning]]></category>
		<category><![CDATA[Nekki]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[physics animation]]></category>
		<category><![CDATA[ragdoll]]></category>
		<category><![CDATA[retargeting]]></category>
		<category><![CDATA[Shadow Fight]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Unity]]></category>
		<category><![CDATA[Unreal Engine]]></category>
		<category><![CDATA[USD]]></category>
		<category><![CDATA[Xsens]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=253398</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01.png?fit=1200%2C657&quality=72&ssl=1" width="1200" height="657" title="" alt="A 3D animation software interface displaying a sequence of seven humanoid figures in varying poses, with one figure standing in a center position, showcasing character animation progress. The background is black, emphasizing the models." /></div><div><p>Cascadeur explains how physics solvers and local AI shape modern keyframe animation. Physics-assisted keyframing and AI-generated Inbetweening sound like shorthand for automation. In practice, they describe a layered system that revolves around explicit poses, timing and animator intent.</p>
<p>The post <a href="https://digitalproduction.com/2026/02/18/cascadeur-on-physics-ai-and-control/">Cascadeur on physics, AI and control</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01.png?fit=1200%2C657&quality=72&ssl=1" width="1200" height="657" title="" alt="A 3D animation software interface displaying a sequence of seven humanoid figures in varying poses, with one figure standing in a center position, showcasing character animation progress. The background is black, emphasizing the models." /></div><div><p class="wp-block-paragraph"><em>For those who don’t know the tool: <a href="https://cascadeur.com/" title="">Cascadeur</a> by <a href="https://nekki.com/" title="">Nekki</a> is a standalone 3D character animation DCC focused on physics-assisted keyframe animation, AutoPosing and AI Inbetweening. It supports FBX and USD pipelines into engines such as <a href="https://digitalproduction.com/tag/unity/" title="Unity">Unity</a> and <a href="https://digitalproduction.com/tag/unreal/" title="Unreal">Unreal Engine</a>, and began life as an internal game tool.</em></p>

<div class="wp-block-image">
<figure class="alignleft size-full is-resized"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-30.png?quality=72&ssl=1"><img data-recalc-dims="1"  fetchpriority="high"  decoding="async"  width="1200"  height="1371"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-30.png?resize=1200%2C1371&quality=72&ssl=1"  alt="A young man smiling while leaning on a table, surrounded by lush green foliage. The background features a blue wall and a bright blue sky, creating a vibrant, cheerful atmosphere."  class="wp-image-253418"  style="width:280px;height:auto" ></a></figure>
</div>


<p class="wp-block-paragraph"><strong><a href="https://www.linkedin.com/in/alexander-grishanin-aba690168" title="">Alexander Grishanin</a></strong> is a software engineer and CTO of Cascadeur. With a background in applied mathematics and real-time systems, he focuses on combining classical physics simulation, machine learning, and intuitive animation workflows into a cohesive production tool. </p>



<p class="wp-block-paragraph">He joined Nekki in 2014 as a game developer, contributing to character animation systems for titles such as the <em><a href="https://shadowfight2.com/" title="">Shadow Fight</a></em> series. Today, together with Cascadeur’s original creator, Eugene Dyabin, he shapes the software’s technical vision, focusing on lowering the barrier to entry for 3D animation while preserving precise artistic control.</p>



<h2 id="cascadeur-today" class="wp-block-heading">Cascadeur Today </h2>



<p class="wp-block-paragraph"><strong>DP: What was the very first problem you wanted Cascadeur to solve?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: To put this into context, it helps to know that Nekki originally started more than 20 years ago as a game development studio. Animation was always a core production topic for us, not a theoretical exercise. We were building real games under real-life conditions.</p>



<p class="wp-block-paragraph">The original idea for Cascadeur came from one of Nekki’s co-founders, <a href="https://www.linkedin.com/in/eugene-dyabin-534919240/" title="">Eugene Dyabin</a>. Eugene has always been interested in animation, but approached it from a very technical perspective. What bothered him early on was that established tools like Maya, while extremely sophisticated, were almost completely disconnected from physics. For someone with a technical background, that felt like a fundamental mismatch, because character animation, beyond its artistic dimension, is strongly influenced by physical principles such as balance, weight and momentum.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/3.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="642"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/3.png?resize=1200%2C642&quality=72&ssl=1"  alt="A computer screen displaying an animated character in a martial arts pose, wielding a sword. The interface shows animation tools and a timeline at the bottom, with a shadow of the character behind. The logo &#039;Cascadeur&#039; is visible in the upper left."  class="wp-image-253427" ></a></figure>



<p class="wp-block-paragraph">At that time, Nekki was producing fighting and parkour games such as <em>Shadow Fight</em> and <em>Vector</em>. To generate large volumes of believable motion for those games, Cascadeur began as an internal tool. Initially, it was a relatively simple animation editor that relied on physics-based calculations to help animators manage balance, weight, and momentum more intuitively, rather than constantly tweaking animation curves.</p>



<p class="wp-block-paragraph">What began as a small internal project gradually evolved as we added the tools we needed in production. For a long time, the idea was simply to use Cascadeur inside our own studio. But as the tool matured and proved itself in real projects, it became clear that it could be useful far beyond Nekki, which eventually led to the decision to turn Cascadeur into a standalone product in 2019.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-2-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-2-1.png?resize=1200%2C657&quality=72&ssl=1"  alt="A 3D animation software interface displaying two character models in a jumping pose. One model is green and the other is gray, with colorful trajectories represented by lines. The settings panel on the right side shows physics configuration options."  class="wp-image-254065" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: And, when we look at the current version of Cascadeur, has that problem been solved? </strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Largely, yes. But it was definitely not a single step; it was a long evolution. In its early days, Cascadeur was far from being a full-featured 3D animation tool. It used a very specific internal format and was tightly coupled to our own production setup. </p>



<p class="wp-block-paragraph">Over time, especially with the rise of Unity and Unreal Engine, it became clear that Cascadeur had to integrate seamlessly into standard pipelines. A key part of “solving the problem” was turning it into a proper 3D editor: supporting skeletal animation as used in game engines and enabling reliable import and export via common formats such as FBX and USD. From a pipeline perspective, that part is largely solved today.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/10-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/10-1.png?resize=1200%2C675&quality=72&ssl=1"  alt="A digitally-rendered image of a person&#039;s hands playing a guitar, focusing on the fingers positioned on the fretboard and strings. The scene includes 3D wireframe elements and markers indicating joint movements."  class="wp-image-253413" ></a></figure>



<p class="wp-block-paragraph">On the physics side, the journey was just as long. Initially, we worked with very explicit physics concepts: centre of mass, force vectors, and angular momentum visualisations. We even built detailed visual tools showing forces and reaction forces. Technically, that worked well, but we quickly realised that this approach had a very high entry barrier. While powerful, it required animators to think in highly abstract physical terms, which was intimidating for many artists.</p>



<p class="wp-block-paragraph">That insight led to a fundamental shift in approach. Instead of asking animators to <em>understand</em> physics, we focused on letting the software <em>demonstrate</em> it. This resulted in what we now call the Physical Assistant (also known as the “green ghost”): essentially a physically accurate version of the character that moves alongside the animator’s character and shows how the motion would look if it fully respected physical laws. It’s generated automatically and acts as a continuous reference rather than a diagnostic tool.</p>



<p class="wp-block-paragraph">At this point, we’re quite happy with where we landed. There’s still room for refinement and more automation, but in terms of making physics-based reasoning practical and usable in everyday animation work, the original problem is largely solved.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/5.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="636"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/5.png?resize=1200%2C636&quality=72&ssl=1"  alt="A digital animation interface showing a character in mid-jump wielding a spear. The background features abstract shapes in pink and blue hues. Below, a timeline displays various animation keys and settings for the character."  class="wp-image-253426" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: One of Cascadeur’s party tricks is simplifying any biped rig and making it usable for animation – how does it decide which parts of an imported rig should be simplified?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: To be honest, there is no fully automatic “magic” that analyses an imported rig and decides which bones are essential and which are not. At least not at the moment. Our approach is actually quite pragmatic.</p>



<p class="wp-block-paragraph">When you import a character into Cascadeur, it can indeed come with hundreds of bones: helper bones, controllers, deformation chains, design-driven additions – all the things riggers love. But for physically plausible biped animation, only a well-defined subset of those bones is actually relevant. Beyond areas like the spine, which often needs more detail, the structure required for body mechanics is fairly consistent.</p>



<figure class="wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler"><div class="wp-block-embed__wrapper">
<div style="width: 640px;" class="wp-video"><video class="wp-video-shortcode" id="video-253398-1" width="640" height="360" preload="metadata" controls="controls"><source type="video/mp4" src="https://cascadeur.com/images/original/sections/feature/1/1.mp4?_=1" /><a href="https://cascadeur.com/images/original/sections/feature/1/1.mp4">https://cascadeur.com/images/original/sections/feature/1/1.mp4</a></video></div>
</div></figure>



<p class="wp-block-paragraph">So what Cascadeur does is this: the user selects the bones that the internal Cascadeur rig should work with. This can be done either via presets, which we created for common rig types, or manually in the rigging tool. The generated Cascadeur rig then controls only this selected subset, while the rest of the bones follow through skinning or constraints.</p>



<p class="wp-block-paragraph">This is very similar to how quick rigging or humanoid setup works in game engines. Once the system knows which bone represents the pelvis, the feet, the hands, and so on, a lot of things become much easier, including physics-based tools and retargeting between characters. So rather than automatically “simplifying” rigs in a black-box way, Cascadeur relies on a clear, explicit mapping. That gives us predictability and compatibility with existing production pipelines.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/fingers-1fix-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/fingers-1fix-1.png?resize=1200%2C675&quality=72&ssl=1"  alt="Close-up view of two sculpted human hands positioned over the keys of a piano, showcasing detailed fingers and knuckles, with digital markers visible on the hands. The piano keys are mostly black and white, creating a striking contrast."  class="wp-image-253412" ></a></figure>



<h2 id="when-is-a-joint-really-a-joint" class="wp-block-heading">When is a joint really a joint? </h2>



<p class="wp-block-paragraph"><strong>DP: How does the system identify what counts as a joint?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: This is handled in essentially the same way as rig simplification. If the system recognises a known rig, a preset can be loaded that defines the joints. Otherwise, joint identification is done manually during the rig setup.</p>



<p class="wp-block-paragraph">Currently, Cascadeur does not automatically analyse an arbitrary hierarchy to determine what counts as a joint. We rely on explicit user input rather than automated heuristics. More automated approaches may be possible in the future, but currently the process is intentionally straightforward and transparent.</p>



<p class="wp-block-paragraph"><strong>DP: How does Cascadeur separate intentional mocap motion from noise?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: At the moment, Cascadeur does not have a specialized system that automatically distinguishes intentional mocap performance from noise. Mocap cleanup is not the core focus of the software. Cascadeur is primarily designed for creating animation from scratch, but we do provide tools that make working with mocap data practical.</p>



<p class="wp-block-paragraph">Our main approach in this regard is the so-called Unbaking workflow. You can import a baked mocap animation and unbake it, which converts the motion into a reduced set of keyframes and interpolations while preserving the original movement as much as possible. This makes the animation much easier to edit and clean manually. Smaller jitters can be reduced through Unbaking precision settings, but larger artifacts will still be preserved and need manual correction.</p>



<p class="wp-block-paragraph">In practice, this works well because most modern mocap systems already perform significant jitter reduction in their own software before the data reaches Cascadeur. On our side, we focus on making the resulting animation editable, readable, and controllable. Additional tools, such as automatic foot contact handling and upcoming collision-penetration fixes, help improve the overall cleanup process, but we don’t currently attempt to automatically distinguish artistic intent from noise.</p>



<h2 id="version-2025-3" class="wp-block-heading"><strong>Version 2025.3</strong></h2>



<figure class="wp-block-embed alignfull is-type-wp-embed is-provider-digital-production wp-block-embed-digital-production"><div class="wp-block-embed__wrapper">
<span class="fqtvlSnVsy49ZCRdbL2APh5Uuo6pWFJwGkBD"><blockquote class="wp-embedded-content" data-secret="q3JjuU7s86"><a href="https://digitalproduction.com/2025/11/28/cascadeur-2025-3/">Cascadeur 2025.3: Inbetweening De Luxe</a></blockquote><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="“Cascadeur 2025.3: Inbetweening De Luxe” — DIGITAL PRODUCTION" src="https://digitalproduction.com/2025/11/28/cascadeur-2025-3/embed/#?secret=DUaQLdTWiJ#?secret=q3JjuU7s86" data-secret="q3JjuU7s86" width="600" height="338" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></span>
</div></figure>



<p class="wp-block-paragraph"><strong>DP: Having that in mind: How challenging was the transition from bipeds to quadrupeds? And does AutoPosing behave differently for quadrupeds?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: The challenging part was not so much the core technology, but standardization. The underlying rigging system in Cascadeur (the part that generates IK/FK rigs) is actually not tied to bipeds at all. You can rig pretty much anything with it, and our users have been doing that for years, including complex robots or non-humanoid characters.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/4.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="642"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/4.jpg?resize=1200%2C642&quality=80&ssl=1"  alt="A 3D animation software interface displaying a character performing a mid-air stunt. The figure is shown in an arched trajectory with motion curves highlighted. The workspace features toolbars and timelines, emphasizing animation controls."  class="wp-image-253425" ></a></figure>



<p class="wp-block-paragraph">What <em>was</em> new with quadrupeds in version 2025.3 was turning this flexibility into a standardized, production-ready workflow. That meant defining a clear IK/FK rig structure for quadrupeds and extending the rigging and “cooking” tools so users don’t have to build everything manually anymore.</p>



<p class="wp-block-paragraph">AutoPosing was the second major piece, and here the behavior is indeed different. Quadrupeds have fundamentally different anatomy: more complex spines, different roles for front and hind legs, and different balance logic. So AutoPosing for quadrupeds is not a simple extension of the biped system – it’s a separate solution built on top of the same foundations, but with different assumptions.</p>



<p class="wp-block-paragraph">From a system perspective, nothing really broke, but we had to add new logic and support layers to make quadrupeds feel natural to work with. We’ve already improved this significantly in recent updates, and there’s more coming in the next major release, 2026.1. Retargeting for quadrupeds is now supported as well.</p>



<p class="wp-block-paragraph">One thing we don’t yet have for quadrupeds is in-between pose generation at the same level as for bipeds. That requires a much larger and more specific dataset, and we’re currently exploring ways to obtain that. The transition is well underway but not yet complete.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/autoposing-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/autoposing-1.png?resize=1200%2C675&quality=72&ssl=1"  alt="A digital art scene displaying four stylized humanoid figures in motion, posed dynamically around images of a woman dancing in a gray outfit. The figures are in varying stances, showcasing movement, with a dark background and grid floor."  class="wp-image-253402" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: How does AutoPosing decide which constraints to prioritise between keyframes?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: There’s a small clarification needed here: AutoPosing itself does not operate between keyframes. It is a tool used <em>on</em> a keyframe to generate or adjust a pose.</p>



<p class="wp-block-paragraph">Our Inbetweening system is a separate mechanism. It generates motion between keyframes based on the keyframe poses, their timing, and optionally the tangents defined at the start and end keys. At the moment, it does not evaluate or prioritise constraints in the traditional sense. The keyframes themselves, and their timing, are the constraints.</p>



<p class="wp-block-paragraph">When connecting two animation segments, the system interpolates between the defined poses, respecting timing and tangents rather than resolving competing constraints. More advanced constraint-aware Inbetweening is something we may explore in the future, but it’s not how the system works today.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/retarget-4-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="718"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/retarget-4-1.png?resize=1200%2C718&quality=72&ssl=1"  alt="A 3D animation software interface displaying a running human figure in grayscale, transitioning to a blue humanoid character. An arrow indicates movement direction, with a timeline and editing tools visible in the background."  class="wp-image-253403" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: How does retargeting interact with locked joints or pinned positions?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Retargeting in Cascadeur does not explicitly take joint locks or pinned positions into account. What we focus on instead are fulcrum points – points of contact that should retain their global position, such as feet on the ground. These are detected and preserved during retargeting.</p>
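<p class="wp-block-paragraph">The fulcrum-point idea described above can be sketched in a few lines: treat a joint as a contact when it barely moves between frames and stays near the ground plane. The function and thresholds below are illustrative assumptions, not Cascadeur’s actual detection logic.</p>

```python
import numpy as np

def detect_fulcrum_frames(joint_positions, velocity_eps=0.002, height_eps=0.05):
    """Flag frames where a joint is a likely fulcrum (contact) point.

    joint_positions: (num_frames, 3) world-space positions of one joint.
    A frame counts as a contact when the joint barely moves to the next
    frame AND sits close to the ground plane (small y). Both thresholds
    are made up for illustration.
    """
    pos = np.asarray(joint_positions, dtype=float)
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    speed = np.append(speed, speed[-1])      # pad so the array matches num_frames
    low_speed = speed < velocity_eps
    near_ground = pos[:, 1] < height_eps
    return low_speed & near_ground
```

<p class="wp-block-paragraph">A retargeter could then pin the flagged frames so each detected contact keeps its global position on the new skeleton.</p>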



<p class="wp-block-paragraph">Technically, retargeting operates at the joint or IK-rig level, but it does not account for existing constraints or joint locks. Constraints exist in Cascadeur, but the retargeting system itself does not account for them.</p>



<p class="wp-block-paragraph">In practice, this keeps the behaviour predictable. For example, when weapons or props are involved, they are usually switched to FK and remain in place, rather than being dynamically resolved during retargeting. More advanced constraint-aware retargeting would require significantly more context and assumptions, and at the moment, we don’t try to make it work “magically.”</p>



<p class="wp-block-paragraph"><strong>DP: Cascadeur now understands complex collisions and “stuff in the scene”. How far can Cascadeur go with understanding objects and full environments?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: At the moment, Cascadeur handles environments in a fairly pragmatic way. The system automatically recognises the ground plane, and any additional objects in the scene can be treated as environment elements – as long as they have colliders assigned to them. Once that is set up, characters can interact with those objects, for example, by walking on them or making physical contact.</p>



<figure class="wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler"><div class="wp-block-embed__wrapper">
<div style="width: 640px;" class="wp-video"><video class="wp-video-shortcode" id="video-253398-2" width="640" height="360" preload="metadata" controls="controls"><source type="video/mp4" src="https://cascadeur.com/images/original/sections/feature/5/5.mp4?_=2" /><a href="https://cascadeur.com/images/original/sections/feature/5/5.mp4">https://cascadeur.com/images/original/sections/feature/5/5.mp4</a></video></div>
</div></figure>



<p class="wp-block-paragraph">Most of our physics-based tools already account for these environment colliders. We support different types of collision behaviour, such as pinned or surface-based collisions, which allow for reasonably complex interactions with scene geometry.</p>



<p class="wp-block-paragraph">The current limitations lie in higher-level automation. AutoPosing and Inbetweening do not yet consider environment colliders when generating poses or motion. We plan to improve this in the future, but it’s a complex problem, and we don’t want to overpromise on timelines.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/rigging-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/rigging-1.png?resize=1200%2C675&quality=72&ssl=1"  alt="A 3D modeling software interface displaying a stylized humanoid model in orange and cyan. The right panel features a &#039;Quick Rigging Tool&#039; with options for body parts, including a list of rig elements and buttons for adjustments."  class="wp-image-253405" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: And within a scene, the movement is either interpolated from keyframe to keyframe, or in-betweened along a vector, curve or trajectory?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: In Cascadeur, animation is fundamentally built around keyframes. Between those keyframes, we support several types of interpolation. Most of them are fairly standard, such as linear and Bézier-based interpolation, quite similar to what you find in other 3D animation tools.</p>
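<p class="wp-block-paragraph">For readers who want the mechanics: a single float channel between two keys can be evaluated roughly like this. This is a generic sketch of linear and cubic Bézier interpolation, not Cascadeur’s implementation.</p>

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: constant speed between two key values."""
    return a + (b - a) * t

def cubic_bezier(p0: float, p1: float, p2: float, p3: float, t: float) -> float:
    """Cubic Bezier: p0/p3 are the key values, p1/p2 behave like the
    tangent handles an animator drags in a curve editor."""
    u = 1.0 - t
    return (u ** 3) * p0 + 3 * (u ** 2) * t * p1 + 3 * u * (t ** 2) * p2 + (t ** 3) * p3
```

<p class="wp-block-paragraph">With p1 = p0 and p2 = p3, the Bézier curve becomes an ease-in/ease-out between the same two values the linear version traverses at constant speed.</p>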



<p class="wp-block-paragraph">One notable difference is on the rig side: we rely heavily on spherical interpolation. This gives us greater control over rotation continuity and tangents while avoiding issues such as gimbal lock. Compared to quaternion interpolation, it offers more predictable behaviour when animators want to art-direct motion using curves and timing.</p>
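<p class="wp-block-paragraph">Spherical interpolation has a compact textbook form. The sketch below slerps between two unit quaternions stored as (w, x, y, z) tuples; it is the standard formulation, not Cascadeur’s internal code.</p>

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z).

    Moves at constant angular velocity along the shortest arc, which is
    what gives rotation curves their predictable, gimbal-free behaviour.
    """
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                        # flip one input to take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:                     # nearly parallel: lerp and renormalise
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        norm = math.sqrt(sum(c * c for c in out))
        return tuple(c / norm for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

<p class="wp-block-paragraph">Halfway between the identity and a 90° rotation, slerp returns exactly the 45° rotation, which is what makes its timing behaviour easy to art-direct.</p>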



<p class="wp-block-paragraph">Inbetweening is a separate mechanism. It uses AI to generate motion <em>between</em> keyframes, but it does not currently follow curves, trajectories, or environmental context. Its constraints are the poses, timing, and optionally the tangents defined at the surrounding keyframes. Artistic control still comes primarily from how those keyframes are set up.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/2D0myjbrWVo?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<p class="wp-block-paragraph">We are exploring more advanced systems in the future. For example, root-motion-based generation that could follow a trajectory or directional intent. But that would not be traditional interpolation. It would be more of a generation step, where you define intent and let the system propose motion.</p>



<p class="wp-block-paragraph">So even as locomotion and motion flows become more complex, the core idea remains the same: keyframes define intent, interpolation and Inbetweening assist – but art direction always stays with the animator.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01-1.png?resize=1200%2C657&quality=72&ssl=1"  alt="A digital animation workspace showcasing a variety of stylized character models in motion. One central character with an animal head is performing a dance pose, surrounded by outlines of other characters in different poses. The background is dark, highlighting the animation timeline and editing tools at the bottom."  class="wp-image-253406" ></a><figcaption class="wp-element-caption">Inbetweening</figcaption></figure>



<p class="wp-block-paragraph"><strong>DP: And how does Cascadeur switch between interpolation and Inbetweening? </strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: It’s entirely a choice made by the animator. There is no automatic switching. In earlier Cascadeur versions, Inbetweening was a separate operation: you selected a time interval, triggered the tool, and it generated a baked animation for that section.</p>



<p class="wp-block-paragraph">In version 2025.3, this changed. Inbetweening is now implemented as a type of interpolation. That means the animator explicitly chooses, per segment, whether motion between keyframes should use a traditional interpolation method or Inbetweening. From a workflow perspective, this makes the system much more coherent and predictable.</p>



<p class="wp-block-paragraph">One of the main challenges with this change was ensuring smooth transitions between different interpolation types. We added mechanisms to blend and smooth those transitions, so switching between standard interpolation and Inbetweening doesn’t introduce visual discontinuities. So the logic is simple: the animator decides what works best for a given situation, and the system focuses on making that choice behave consistently and smoothly.</p>
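<p class="wp-block-paragraph">The blending mechanism can be pictured as a short cross-fade around the segment boundary. The window size and function names below are hypothetical; they only illustrate the idea of smoothing the hand-off between two interpolation types.</p>

```python
def smoothstep(x: float) -> float:
    """Clamp to [0, 1] and ease with the classic 3x^2 - 2x^3 curve."""
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)

def blend_at_boundary(value_left: float, value_right: float,
                      frame: float, boundary: float,
                      blend_frames: float = 5.0) -> float:
    """Cross-fade the outputs of two interpolation methods around a
    segment boundary so switching methods does not produce a visible pop."""
    w = smoothstep((frame - (boundary - blend_frames)) / (2.0 * blend_frames))
    return (1.0 - w) * value_left + w * value_right
```

<p class="wp-block-paragraph">Well before the boundary only the left method contributes, well after it only the right one; in between, the eased weight removes the velocity discontinuity a hard switch would cause.</p>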



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_02.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="655"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_02.png?resize=1200%2C655&quality=72&ssl=1"  alt="A digital animation workspace displaying a series of character models in different walking poses. A central character is in full color, while adjacent figures are in various shades of gray, illustrating motion. The interface shows tools and timelines."  class="wp-image-253407" ></a></figure>



<h2 id="how-about-ragdolls" class="wp-block-heading"><strong>How about Ragdolls?</strong></h2>



<p class="wp-block-paragraph">Alexander Grishanin: Yes. But it’s important to clarify how physics and ragdoll simulation work in Cascadeur. Ragdoll is a type of physical simulation that operates on the <em>resulting animation</em>, not on the process that created it.</p>



<p class="wp-block-paragraph">From the physics system’s point of view, it doesn’t matter whether an animation was created using standard interpolation, Inbetweening, imported mocap, or even a fully baked clip from a library. All interpolation and Inbetweening are evaluated first. Physics, including ragdoll, then uses the final animation frames as input and simulates from that point.</p>



<p class="wp-block-paragraph">Once you activate ragdoll, the simulation will typically override the animation quite significantly, for example when a character starts to fall. There is no tight coupling among interpolation, Inbetweening, and ragdoll logic. Physics simply “consumes” the finished motion and reacts to it.</p>



<p class="wp-block-paragraph">In that sense, both interpolation and Inbetweening are fully compatible with ragdoll features – because physics doesn’t care how the motion was authored, only what the motion is at the moment the simulation starts.<br /></p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/8.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="636"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/8.png?resize=1200%2C636&quality=72&ssl=1"  alt="A digital animation software interface showing a character in a dynamic pose, holding weapons, with visible motion paths and timeline controls at the bottom. The workspace includes a 3D model and property settings panel."  class="wp-image-253424" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: When we look at “AI-assisted Inbetweening”, wouldn’t the next step be to get additional data from a scene? </strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: That kind of scene-level, event-driven Inbetweening is far beyond what we are aiming for in the near future. Conceptually, it is interesting to use full-scene context, simulations, particles, or semantic events to influence animation behaviour, but this quickly becomes a very different class of system.</p>



<p class="wp-block-paragraph">Right now, our focus is much more grounded. We aim to improve the quality, robustness, and controllability of AI-assisted Inbetweening, based on animation data and clearly defined inputs such as poses, timing, and physical plausibility. Adding higher-level scene understanding would significantly increase complexity and assumptions.</p>



<p class="wp-block-paragraph">Ideas like text-driven reactions to explosions or environmental cues are intriguing, but they are not where Cascadeur is heading in the foreseeable future. Our priority is to make AI a reliable assistant within a traditional animation workflow, not a system that attempts to interpret entire scenes or narratives.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/turnik_0.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="656"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/turnik_0.png?resize=1200%2C656&quality=72&ssl=1"  alt="A 3D animation software interface displaying a green humanoid model performing a jumping action near a vertical pole. On the right, animation settings are visible, including options for position and frame rate, with a timeline at the bottom showing keyframes."  class="wp-image-253408" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: With more interpolation tools added, how do you keep the timeline clean?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: There isn’t a strict formal rule behind this. In practice, we test things ourselves and carefully decide what should appear directly on the timeline and what can be placed in menus or settings. The goal is to keep the timeline readable, even as more interpolation options are added.</p>



<p class="wp-block-paragraph">Unbaking is a good example of where this balance is not perfect. It can introduce quite a lot of keys and visual noise on the timeline. We’re aware of this, though in practice it hasn’t become a major usability issue yet.</p>



<p class="wp-block-paragraph">In the long term, we would like to hide more of this complexity. Ideally, animators wouldn’t have to think in terms of linear versus Bézier versus IK or Inbetweening at all. You would define intent, and the system would choose the appropriate interpolation under the hood. That said, this is more of a direction or a vision than a concrete feature on the roadmap. By the way, if you want to know what we are working on right now, our Roadmap is public, and you can have a look here: <a href="https://trello.com/b/oNlIizJh/cascadeur-roadmap">https://trello.com/b/oNlIizJh/cascadeur-roadmap</a></p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-1-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-1-1.png?resize=1200%2C657&quality=72&ssl=1"  alt="A 3D animation software interface displaying a humanoid character model with a cat-like head. The character is center-stage with a grid background, and two ghosted silhouettes of the same model in motion are visible to the right."  class="wp-image-254067" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: How big can an unbaked animation get before performance drops?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: In practice, extremely long unbaked animations are more of a theoretical stress test than a real production scenario. We have users working with animations of up to 10,000 frames, and that is definitely possible. In version 2025.3, we added another round of optimisations specifically to improve performance with long timelines.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/CUo1V6H9OkM?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<p class="wp-block-paragraph">That said, usability becomes the limiting factor much earlier than raw performance. Working meaningfully with 10,000 frames is already very difficult, simply because it becomes hard to understand what is happening where. Most animation work happens in much smaller ranges, typically a few hundred frames at a time.</p>



<p class="wp-block-paragraph">Cascadeur is optimised around that typical use case. When you focus on a limited time interval, for example, 300 to 500 frames, the system only evaluates and updates what is needed for that interval. The total length of the animation outside that range has very little impact on interactivity. That’s exactly what we improved further in 2025.3.</p>
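<p class="wp-block-paragraph">Conceptually, that windowed evaluation behaves like a per-frame cache that only fills the focused interval. This toy model (the names and structure are invented for illustration) shows why clip length outside the window barely affects interactivity.</p>

```python
def evaluate_focused(evaluate_frame, focus_start: int, focus_end: int, cache: dict) -> dict:
    """Evaluate only the frames inside the focused interval, reusing any
    cached results. Frames outside the window are never touched, so the
    total length of the animation has little impact on update cost."""
    for frame in range(focus_start, focus_end + 1):
        if frame not in cache:
            cache[frame] = evaluate_frame(frame)
    return cache
```

<p class="wp-block-paragraph">Editing within a 300-to-500-frame window then costs the same whether the whole clip is 1,000 or 10,000 frames long.</p>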



<p class="wp-block-paragraph">So yes, unbaked animations with 10,000 frames are feasible today. Beyond that, you’re likely to hit practical and conceptual limits before you hit hard technical ones. A 200,000-frame animation, for example, is far outside what we would consider a realistic working scenario.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/turnik_02-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/turnik_02-1.png?resize=1200%2C657&quality=72&ssl=1"  alt="A 3D model figure performing a pull-up on a green gym apparatus, with joint controls visible. The animation timeline is displayed below, indicating movement adjustments in a digital interface."  class="wp-image-253409" ></a></figure>



<h2 id="ai-in-animation" class="wp-block-heading"><strong>AI in Animation</strong></h2>



<p class="wp-block-paragraph"><strong>DP: Of course, in 2026 Cascadeur has AI-tools. What training data did the new Machine Learning tools rely on?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Our machine learning tools are primarily trained on data we own and understand very well. A large part of it comes from animation data created over many years for Nekki’s own games, especially the <em>Shadow Fight</em> action game series. That includes a substantial amount of hand-crafted animation and poses produced by Nekki’s animators.</p>



<p class="wp-block-paragraph">In addition, we generate our own motion-capture data. We use two Xsens motion-capture suits internally and record motion data exclusively for training and validation. This allows us to create clean, well-defined datasets that align with the motion Cascadeur is designed to handle. So the foundation is a combination of long-term in-house animation data and dedicated mocap recordings, rather than external datasets.</p>



<figure class="wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler"><div class="wp-block-embed__wrapper">
<div style="width: 640px;" class="wp-video"><video class="wp-video-shortcode" id="video-253398-3" width="640" height="360" preload="metadata" controls="controls"><source type="video/mp4" src="https://cascadeur.com/images/original/sections/feature/7/7.mp4?_=3" /><a href="https://cascadeur.com/images/original/sections/feature/7/7.mp4">https://cascadeur.com/images/original/sections/feature/7/7.mp4</a></video></div>
</div></figure>



<p class="wp-block-paragraph"><strong>DP: Do the new AI features change the minimum system requirements?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: No, the new AI features do not change the minimum system requirements. At the moment, our AI-related computations run entirely on the CPU and are comparable in complexity to existing systems such as physics tools or interpolation.</p>



<p class="wp-block-paragraph">We don’t rely on the GPU for AI processing, and all calculations are performed locally. Nothing is sent to the cloud. From the user’s perspective, AI in Cascadeur behaves like any other internal tool in terms of performance and system requirements.</p>



<p class="wp-block-paragraph"><strong>DP: Wait, you said that all computation is done locally. None of it relies on the cloud?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Yes, Cascadeur runs completely on the client system. None of the AI features rely on the cloud. The neural networks we use are intentionally very lean. AutoPosing is quite small, and Inbetweening is a bit larger, but still nowhere near the scale of large models that require massive amounts of memory or external infrastructure. Because of that, all computation can happen locally without any special hardware or online connection.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/razdelenie_01.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="656"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/razdelenie_01.png?resize=1200%2C656&quality=72&ssl=1"  alt="A digital workspace displaying a 3D animation software interface. On the left, a gray model is in a dynamic pose, while on the right, a green model mirrors the pose. The physics settings panel is visible on the right side of the screen."  class="wp-image-253411" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: How do you balance classical physics with machine learning inside Cascadeur?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: At the moment, classical physics and machine learning in Cascadeur are clearly separated and complement each other rather than competing. We don’t use AI inside the physics tools themselves. Physics is based on classical methods, mainly nonlinear equation optimisation, and operates on top of the animation that already exists in the scene.</p>
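<p class="wp-block-paragraph">As a toy, one-dimensional stand-in for that kind of optimisation: pulling a character’s centre of mass over its support point can be written as gradient descent on a quadratic cost. The real system solves full-body, nonlinear constraints; this sketch only shows the principle of optimising on top of existing animation.</p>

```python
def correct_balance(com_x: float, support_x: float,
                    step: float = 0.5, iters: int = 20) -> float:
    """Gradient descent on the cost 0.5 * (com_x - support_x)**2,
    nudging the centre of mass over the support point."""
    x = com_x
    for _ in range(iters):
        x -= step * (x - support_x)   # derivative of the quadratic cost
    return x
```

<p class="wp-block-paragraph">Each iteration halves the remaining offset, so the pose converges to a balanced one while staying close to what the animator authored.</p>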



<p class="wp-block-paragraph">Machine learning comes into play earlier. AI is used to help generate or refine animation, especially in cases where creating complex motion manually would be difficult or time-consuming. Physics tools then take that result and propose a physically correct version of the motion, for example, by improving balance or contact behaviour.</p>



<p class="wp-block-paragraph">Over time, we realised that physics alone, while extremely valuable, cannot solve everything. An animation can be physically correct and still look wrong from an animation or artistic perspective. Contact situations are a good example: even with correct fulcrum points, physics does not necessarily enforce all the constraints needed to make motion look convincing.</p>



<p class="wp-block-paragraph">This is where AI helps. It can generate complex, plausible motion that would be difficult to achieve solely through physics-based constraints. At the same time, AI is not particularly good at physics on its own. In practice, they work very well together: AI supports motion generation and plausibility, while physics supports correctness and grounding.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/correcting-am.gif?ssl=1"><img data-recalc-dims="1"  decoding="async"  width="800"  height="450"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/correcting-am.gif?resize=800%2C450&ssl=1"  alt=""  class="wp-image-253423" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: How did you train the Inbetweening model, and what can artists tweak?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Our Inbetweening model is trained on our own animation data, basically to learn how it should behave in real production scenarios. We’re not trying to teach it some abstract idea of motion, but very concrete situations: how to move from one pose to another in a way that feels plausible for character animation.</p>



<p class="wp-block-paragraph">For artists, the main controls are still very familiar ones. Poses and timing matter the most, by far. If you change the key poses or the spacing between them, the result changes immediately. That’s where most of the artistic control lies.</p>



<p class="wp-block-paragraph">In Cascadeur there is also the option to add simple labels, for example saying “this should be a walk” rather than a run. That helps in cases where the same poses can be interpreted differently. Right now, those labels have less influence than poses and timing, but we’re actively looking at making them more meaningful.</p>



<p class="wp-block-paragraph">Even though AI is involved, the idea is that animators remain in charge. You don’t tweak neural network parameters. You animate, and the AI adapts to what you’re telling it through poses and timing.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-31.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-31.png?resize=1200%2C657&quality=72&ssl=1"  alt="A 3D animation software interface displaying a humanoid model with a skeletal rig. The model is posed in a dynamic stance on a grid background. The right panel shows an outline of the model&#039;s structure and properties."  class="wp-image-254068" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: Considering the plethora of “Animation Libraries”, which, like the stock photo libraries of ancient times (or 2019), have a giant, very specific set of premade animations: is that still necessary when Inbetweening is available in Cascadeur?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Animation libraries are still very relevant. Inbetweening and Unbaking solve different problems, and they don’t make libraries obsolete. At least not today.</p>



<p class="wp-block-paragraph">Our Unbaking works very well with ready-made animations, including clips from animation libraries. You can import them, unbake them, and then edit or adapt them much more easily. Inbetweening, on the other hand, creates <em>new</em> motion between poses. It works best for relatively simple or well-defined movements. When animations become very complex, it’s often hard to describe them purely through poses, and in those cases libraries are still the more practical option.</p>



<p class="wp-block-paragraph">Right now, Inbetweening doesn’t replace animation libraries; it complements them. That said, it does point in an interesting direction. In the future, if you combine Inbetweening with tools such as text-to-motion or more advanced motion generation, libraries could become less central. At that point, searching a library by keywords might be replaced by simply describing the motion you want and letting the system generate it.</p>



<p class="wp-block-paragraph">But we’re not there yet. Today, animation libraries, Unbaking, and Inbetweening all have their place. Inbetweening is mainly about assembling and refining motion, not about replacing large collections of finished animations.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-3-1.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="656"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/image-3-1.png?resize=1200%2C656&quality=72&ssl=1"  alt="A 3D animation workspace showcasing a character model resembling a humanoid cat with joints and controls visible. The interface includes various settings, timelines, and 3D view options, displaying a grey background."  class="wp-image-254069" ></a></figure>



<h2 id="the-future" class="wp-block-heading"><strong>The future</strong></h2>



<p class="wp-block-paragraph"><strong>DP: With the improved viewport, do you see a future where artists animate directly in final-quality scenes?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Yes, definitely. That’s very much the direction we’re moving in. The improved viewport is an important step toward allowing animators to work much closer to final-quality visuals while they animate.</p>



<p class="wp-block-paragraph">That’s also why we invested in a more advanced renderer. With Cascadeur 2026.1 we will completely shift to Filament, a physically based rendering system originally developed by Google. The idea is not to replace final offline rendering, but to give animators a visual result that is much closer to what they’ll see at the end, while they’re still working on motion.</p>



<p class="wp-block-paragraph">Looking further ahead, there are a couple of possible paths. One is tighter integration with high-end rendering systems or real-time engines. Another is using AI-based upscaling or enhancement, where you work with a simpler representation and let the system automatically improve visual quality. It’s hard to predict exactly which direction will dominate.</p>



<p class="wp-block-paragraph">What we are already actively working on, though, is better live connectivity. For example, we’re improving live links between Cascadeur and other software, including Unreal Engine. That’s a concrete step toward workflows where animation occurs in Cascadeur, allowing artists to immediately see the result in a final or near-final environment. So yes, animating directly in scenes that feel close to final quality is absolutely something we believe in – whether that happens inside Cascadeur itself or through tight integration with other tools.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/interaction_01.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="657"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/interaction_01.png?resize=1200%2C657&quality=72&ssl=1"  alt="A 3D animation software interface displaying two humanoid figures in dynamic poses. One figure is in green, showcasing the physics settings, while the other is in gray, indicating a different stage. Tools and settings are visible in the right panel."  class="wp-image-253410" ></a></figure>



<p class="wp-block-paragraph"><strong>DP: What does an ideal animation workflow look like to you?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: Looking ahead, I think the biggest room for improvement comes from how input works. In an ideal world, an animator can use many different types of input: posing with controllers, of course, but also video references, maybe voice input, maybe text prompts. The key point is that all of this input produces a single, fully editable output. Nothing should become a black box.</p>



<p class="wp-block-paragraph">In my “dream pipeline,” you can bring in any reference you want, create motion quickly with minimal input, and then dive into any part of the animation and refine it manually if needed. Whether that input comes from controllers, video, or text is secondary. What matters is that everything remains controllable.</p>



<p class="wp-block-paragraph"><strong>DP: How do you imagine animation pipelines in 2030 or 2040? And what do you imagine Cascadeur will look like in the year 2050?</strong></p>



<p class="wp-block-paragraph">Alexander Grishanin: If we look ten, twenty or even twenty-five years ahead, I think the most honest answer is that a lot is still open. Not everything depends on animation tools like Cascadeur. We’re in a phase right now where content creation is changing very quickly, and it’s hard to draw clean lines that far into the future.</p>



<p class="wp-block-paragraph">What’s very clear already is that 2D image and video generation is moving extremely fast with AI. One reason is that it’s much easier to collect data on it. 3D is lagging behind at the moment, but it’s definitely catching up. We’re seeing more tools that can automatically generate meshes, skeletons, and even basic animation. But as soon as you want to actually use those assets in a meaningful way, you still need a proper animation tool to refine and control motion. That’s where Cascadeur fits in.</p>



<p class="wp-block-paragraph">On a more personal note, I believe 3D will become increasingly important over the long term. Even many 2D AI systems seem to rely on some internal 3D understanding of the world. Making that structure explicit, with meshes, joints and motion, is simply a more consistent way to build believable worlds. AI can help a lot here by lowering the barrier to entry.</p>



<p class="wp-block-paragraph">If we push this idea further, I hope we end up in a world where creating 3D worlds and animated stories becomes as easy as writing text or drawing images today. Right now, far more people can paint, write or make music than create something in 3D, simply because 3D is still very complex. If that complexity goes away, content creation could change dramatically.</p>



<p class="wp-block-paragraph">That would also affect who creates stories. Instead of a few large studios targeting large audiences, we might see many more creators building highly specific worlds for smaller communities. Animation and 3D would no longer be just about large productions, but also about personal expression, much like photography or music today.</p>



<p class="wp-block-paragraph">At the same time, I can imagine tools for generating worlds, environments, or even basic scene setups becoming more integrated into animation workflows. Whether those systems live inside Cascadeur or connect to it seamlessly is an open question. What matters is that, once a world or scene is generated, you can step in and precisely shape how characters move and behave inside it.</p>



<p class="wp-block-paragraph"></p><p>The post <a href="https://digitalproduction.com/2026/02/18/cascadeur-on-physics-ai-and-control/">Cascadeur on physics, AI and control</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		<enclosure url="https://cascadeur.com/images/original/sections/feature/1/1.mp4" length="981375" type="video/mp4" />
<enclosure url="https://cascadeur.com/images/original/sections/feature/5/5.mp4" length="2240974" type="video/mp4" />
<enclosure url="https://cascadeur.com/images/original/sections/feature/7/7.mp4" length="1374750" type="video/mp4" />

		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01.png?fit=1919%2C1050&#038;quality=72&#038;ssl=1" length="120650" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01.png?fit=1200%2C657&#038;quality=72&#038;ssl=1" width="1200" height="657" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A 3D animation software interface displaying a sequence of seven humanoid figures in varying poses, with one figure standing in a center position, showcasing character animation progress. The background is black, emphasizing the models.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/inbetweening_01.png?fit=1200%2C657&#038;quality=72&#038;ssl=1" width="1200" height="657" />
<post-id xmlns="com-wordpress:feed-additions:1">253398</post-id>	</item>
		<item>
		<title>Kangaroo Builder learns to move faces between meshes</title>
		<link>https://digitalproduction.com/2026/02/12/kangaroo-builder-learns-to-move-faces-between-meshes/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 06:00:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[Autodesk Maya]]></category>
		<category><![CDATA[character rigging]]></category>
		<category><![CDATA[Characters]]></category>
		<category><![CDATA[facial blendshapes]]></category>
		<category><![CDATA[Kangaroo Builder]]></category>
		<category><![CDATA[Landmark Warp]]></category>
		<category><![CDATA[Maya rigging]]></category>
		<category><![CDATA[rigging]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Thomas Bittner]]></category>
		<category><![CDATA[Topology]]></category>
		<category><![CDATA[topology transfer]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=251964</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/face_threeheads.jpg?fit=1200%2C637&quality=80&ssl=1" width="1200" height="637" title="" alt="A 3D model rendering of three dog heads, resembling a sculpted pit bull, positioned side by side. The model features a gray surface and is overlaid with multicolored wireframe diagrams, indicating animation rigging and structure." /></div><div><p>Kangaroo Builder for Maya adds Landmark Warp, a topology transfer tool aimed at moving blendshapes between meshes without matching topology.</p>
<p>The post <a href="https://digitalproduction.com/2026/02/12/kangaroo-builder-learns-to-move-faces-between-meshes/">Kangaroo Builder learns to move faces between meshes</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/face_threeheads.jpg?fit=1200%2C637&quality=80&ssl=1" width="1200" height="637" title="" alt="A 3D model rendering of three dog heads, resembling a sculpted pit bull, positioned side by side. The model features a gray surface and is overlaid with multicolored wireframe diagrams, indicating animation rigging and structure." /></div><div><p class="wp-block-paragraph"><em>For those who don’t know the tool: <a href="https://kangaroobuilder.com/?utm_source=digitalproduction.com&utm_medium=news" title="">Kangaroo Builder</a> is a character rigging toolkit for <a href="https://www.autodesk.com/products/maya/overview/?utm_source=digitalproduction.com&utm_medium=news" title="">Autodesk Maya </a>used to build body and facial rigs, manage skinning and author blendshapes. It sits squarely in character TD land and complements Maya’s rigging tools without pretending to replace them. It is developed by Thomas Bittner and integrates directly into Maya-based pipelines. </em></p>


<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="916"  height="1050"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/builder_buildall.gif?resize=916%2C1050&ssl=1"  alt="https://kangaroobuilder.com/images/builder_buildAll.gif"  class="wp-image-252100" ></figure>






<h3 id="a-familiar-problem-now-inside-the-rig" class="wp-block-heading">A familiar problem, now inside the rig</h3>



<p class="wp-block-paragraph">Transferring facial blendshapes between meshes with different topologies remains one of the more time-consuming and error-prone tasks in character production. It usually appears late in the process, just when schedules are already tight. Artists either rebuild shapes by hand, rely on wrap-based deformation tools, or accept compromises in deformation quality. Kangaroo Builder’s latest update introduces a feature explicitly designed to address this gap from inside the rigging toolset itself.</p>



<p class="wp-block-paragraph">Kangaroo Builder now includes a new topology transfer workflow called “Landmark Warp”. The tool is included in version 5.19 and later and is designed to warp one mesh onto another using user-defined landmarks, even when the two meshes have different vertex counts and edge flow. This is not a general retopology solution, nor is it presented as one.</p>



<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="1012"  height="1429"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/landmarkwarp_dogandhorse.jpg?resize=1012%2C1429&quality=80&ssl=1"  alt="https://kangaroobuilder.com/images/landmarkWarp_dogAndHorse.jpg"  class="wp-image-252097" ></figure>



<h3 id="what-landmark-warp-actually-does" class="wp-block-heading">What Landmark Warp actually does</h3>



<p class="wp-block-paragraph">Landmark Warp works by letting users place corresponding markers on a source mesh and a target mesh. These markers define spatial relationships rather than relying on shared topology. Once enough landmarks are placed, the system computes a deformation that warps the source mesh to match the target mesh’s overall shape. The warped result can then be used to transfer existing facial blendshapes via Kangaroo Builder’s Shape Editor.</p>



<p class="wp-block-paragraph">The key point is that this process operates on shape deformation rather than topology matching. The meshes do not need identical structure, only a reasonably comparable form. This makes the tool applicable to character variants, cleaned-up scan data, or iterative design changes where topology has drifted but proportions remain recognisable.</p>
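<p class="wp-block-paragraph">The exact solver is not public, but landmark-driven warps of this kind are typically built from radial basis functions: fit a smooth displacement field that moves each source landmark exactly onto its counterpart, then evaluate that field at every vertex. The following NumPy sketch is a Gaussian-kernel illustration of the idea, not Kangaroo Builder's actual implementation; the function name and parameters are our own:</p>

```python
import numpy as np

def landmark_warp(src_landmarks, dst_landmarks, vertices, eps=1.0):
    """Illustrative landmark-based warp using Gaussian radial basis
    functions: solve for RBF weights that reproduce the landmark
    displacements, then apply the displacement field to all vertices."""
    S = np.asarray(src_landmarks, float)   # (n, 3) markers on the source mesh
    D = np.asarray(dst_landmarks, float)   # (n, 3) matching markers on target
    V = np.asarray(vertices, float)        # (m, 3) source-mesh vertices

    def kernel(a, b):
        # Gaussian falloff on pairwise squared distances.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * eps ** 2))

    # Weights that reproduce the landmark displacements exactly; the tiny
    # regulariser keeps the system solvable when landmarks nearly coincide.
    K = kernel(S, S)
    w = np.linalg.solve(K + 1e-9 * np.eye(len(S)), D - S)

    # Evaluate the displacement field at every vertex and apply it.
    return V + kernel(V, S) @ w
```

<p class="wp-block-paragraph">In a formulation like this, the density and placement of the landmarks directly control warp quality, which matches the documentation's emphasis on careful marker placement.</p>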



<p class="wp-block-paragraph">The developer notes that Landmark Warp relies on SciPy, a Python scientific computing library, which must be installed and accessible in Maya’s Python environment. This dependency is documented but may be overlooked in locked-down studio setups, which is worth flagging early in any evaluation.</p>
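<p class="wp-block-paragraph">It is worth confirming the dependency from inside Maya itself before evaluation. A minimal check one might run in the Script Editor (a hypothetical helper, not part of the tool; the install route varies by studio, though running pip through <code>mayapy</code> is the commonly documented way to add packages to Maya's Python):</p>

```python
import importlib.util
import sys

def has_scipy():
    """Return True if SciPy is importable in the current interpreter
    (run inside Maya's Script Editor to test Maya's own Python)."""
    return importlib.util.find_spec("scipy") is not None

if not has_scipy():
    # In locked-down setups this usually needs IT involvement, e.g.:
    #   mayapy -m pip install scipy
    print("SciPy is not available in:", sys.executable)
```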



<h3 id="intended-use-and-what-it-is-not" class="wp-block-heading">Intended use, and what it is not</h3>



<p class="wp-block-paragraph">Nobody would describe <a href="https://kangaroobuilder.com/landmarkWarp/" title="">Landmark Warp</a> as a fully automatic solution. The quality of the result depends heavily on landmark placement and mesh preparation. Internal geometry such as teeth, tongues, or inner mouth surfaces can interfere with the warp unless managed carefully. This is stated explicitly in the documentation.</p>



<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="1129"  height="841"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/landmarkwarp_innermouthissue.jpg?resize=1129%2C841&quality=80&ssl=1"  alt="https://kangaroobuilder.com/images/landmarkWarp_innermouthissue.jpg"  class="wp-image-252099" ></figure>



<p class="wp-block-paragraph">It is also not intended to replace dedicated wrap deformers or offline retargeting tools. Instead, the value proposition is convenience and context. Landmark Warp lives inside the same environment used to build and edit the rig. For many teams, that may matter more than absolute automation.</p>



<h3 id="why-this-matters-in-production" class="wp-block-heading">Why this matters in production</h3>



<p class="wp-block-paragraph">Topology drift is a fact of life in character production. Directors ask for changes. Scans get cleaned. Game and film assets diverge. Facial rigs, however, tend to be built once and guarded carefully. Any tool that reduces the cost of reusing that work deserves attention.</p>



<p class="wp-block-paragraph">By embedding topology transfer directly into a rigging toolkit, Kangaroo Builder is addressing a real and persistent problem. The approach is conservative rather than flashy. It assumes skilled users, manual setup, and informed judgement. That will suit experienced character TDs more than newcomers, which aligns with the tool’s existing audience.</p>



<p class="wp-block-paragraph">It also reflects a broader trend of rigging tools absorbing tasks that used to live in separate utilities. Whether this is desirable depends on pipeline philosophy, but it does reduce context switching, which is often where errors creep in.</p>



<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="763"  height="757"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/landmarkwarp_selectinnermouth.jpg?resize=763%2C757&quality=80&ssl=1"  alt="https://kangaroobuilder.com/images/landmarkWarp_selectInnermouth.jpg"  class="wp-image-252098" ></figure>



<h3 id="compatibility-and-licensing" class="wp-block-heading">Compatibility and licensing</h3>



<p class="wp-block-paragraph">Kangaroo Builder runs on Autodesk Maya. At the time of writing, version 5.19 supports Maya 2023 and later on Windows and macOS, and Maya 2024 and later on Linux. Landmark Warp is included in the standard distribution and does not require a separate licence.</p>



<p class="wp-block-paragraph">Licensing remains unchanged. Kangaroo Builder is free for non-commercial use, including students and personal projects. Commercial licences are available for indie users and studios. Pricing is published on the official website and may change, so readers should <a href="https://kangaroobuilder.com/?utm_source=digitalproduction.com&utm_medium=news">check the current terms directly</a>.</p>



<p class="wp-block-paragraph">At the time of writing, the prices published on the site are:</p>



<p class="wp-block-paragraph"><strong>Indy Perpetual</strong> – 220 USD<br />For freelancers making less than 85k USD per year in revenue</p>



<p class="wp-block-paragraph"><strong>Single Perpetual</strong> – 400 USD<br />Recommended for small studios with only one rigger</p>



<p class="wp-block-paragraph"><strong>3 Seats Perpetual</strong> – 1000 USD<br />Recommended for small studios</p>



<p class="wp-block-paragraph"><strong>6 Seats Perpetual</strong> – 1800 USD</p>



<p class="wp-block-paragraph"><strong>Unlimited Seats Perpetual</strong> – 4000 USD</p>



<h3 id="what-is-still-unclear" class="wp-block-heading">What is still unclear</h3>



<p class="wp-block-paragraph">No public information is available on how Landmark Warp behaves on extreme topology differences or highly stylised characters. There is also no data on performance with dense meshes or very large blendshape libraries. These gaps do not invalidate the feature, but they do mean that due diligence is required before deployment.</p>



<p class="wp-block-paragraph">As with any rigging or deformation tool, results will depend on mesh quality, landmark placement, and user expertise. Early adopters should expect iteration, not miracles. Anyone hoping for a one-click solution will be disappointed. As always, new tools and workflow changes should be tested thoroughly on representative assets before being introduced into active production.</p>



<p class="wp-block-paragraph"></p><p>The post <a href="https://digitalproduction.com/2026/02/12/kangaroo-builder-learns-to-move-faces-between-meshes/">Kangaroo Builder learns to move faces between meshes</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/face_threeheads.jpg?fit=1393%2C739&#038;quality=80&#038;ssl=1" length="101263" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/face_threeheads.jpg?fit=1200%2C637&#038;quality=80&#038;ssl=1" width="1200" height="637" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A 3D model rendering of three dog heads, resembling a sculpted pit bull, positioned side by side. The model features a gray surface and is overlaid with multicolored wireframe diagrams, indicating animation rigging and structure.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/face_threeheads.jpg?fit=1200%2C637&#038;quality=80&#038;ssl=1" width="1200" height="637" />
<post-id xmlns="com-wordpress:feed-additions:1">251964</post-id>	</item>
		<item>
		<title>BlendShape Monitor puts Maya rigs under a heat lamp</title>
		<link>https://digitalproduction.com/2026/02/05/blendshape-monitor-puts-maya-rigs-under-a-heat-lamp/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Thu, 05 Feb 2026 06:00:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[Animation]]></category>
		<category><![CDATA[Autodesk]]></category>
		<category><![CDATA[blendshape debugging]]></category>
		<category><![CDATA[BlendShape Monitor]]></category>
		<category><![CDATA[character rigging]]></category>
		<category><![CDATA[deformation]]></category>
		<category><![CDATA[facial animation]]></category>
		<category><![CDATA[Johnson Lee]]></category>
		<category><![CDATA[Maya]]></category>
		<category><![CDATA[Maya plugin]]></category>
		<category><![CDATA[rigging]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=250415</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/kcotnbva_ju-00-00-10-bs-monitor-the-ultimate-data-visualizer-for-maya-blendshape.png?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="A digital workspace displaying a 3D model of a human figure highlighted with a colorful heatmap, illustrating areas of influence with gradients of blue, green, yellow, and red. A side panel lists various attributes with corresponding visual indicators." /></div><div><p>A new Maya plugin visualises blendshape influence in real time, targeting riggers dealing with dense and poorly documented rigs.</p>
<p>The post <a href="https://digitalproduction.com/2026/02/05/blendshape-monitor-puts-maya-rigs-under-a-heat-lamp/">BlendShape Monitor puts Maya rigs under a heat lamp</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/kcotnbva_ju-00-00-10-bs-monitor-the-ultimate-data-visualizer-for-maya-blendshape.png?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="A digital workspace displaying a 3D model of a human figure highlighted with a colorful heatmap, illustrating areas of influence with gradients of blue, green, yellow, and red. A side panel lists various attributes with corresponding visual indicators." /></div><div><p class="wp-block-paragraph"><em>For those who don’t know the tool: <a href="https://johnsonlee529.gumroad.com/l/bs-monitor-maya/?utm_source=digitalproduction.com&utm_medium=news" title="">BlendShape Monitor</a> is a lightweight diagnostic plugin for <a href="https://www.autodesk.com/products/maya/?utm_source=digitalproduction.com&utm_medium=news" title="">Autodesk Maya </a>that sits squarely in character rigging and facial setup, focusing only on inspection rather than deformation authoring, and does not overlap with Autodesk’s own rigging tools or third-party rig builders.</em><br /></p>


<h3 id="why-blendshapes-still-go-wrong" class="wp-block-heading">Why blendshapes still go wrong</h3>



<p class="wp-block-paragraph">Blendshapes remain one of the most common deformation methods for facial animation in Maya, particularly for FACS-based rigs and corrective shapes layered on top of joint systems. Despite their ubiquity, debugging blendshapes once a rig grows beyond a few dozen targets remains largely manual and error-prone. Maya’s native interface presents blendshape nodes as long, alphabetical lists of target names with numeric weights. That abstraction works when rigs are small and well documented. It breaks down quickly when shapes overlap, are reused across multiple regions, or are indirectly driven by other systems, such as RBF solvers.</p>



<p class="wp-block-paragraph">This problem is compounded in production environments where rigs are inherited, shared, or modified over time. In many cases, artists are asked to fix deformation issues without knowing which blendshape is responsible, or whether multiple shapes are contributing simultaneously. The result is a familiar cycle of muting targets, scrubbing weights, and guessing.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/kcoTnBVA_JU?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<h3 id="what-blendshape-monitor-actually-does" class="wp-block-heading">What BlendShape Monitor actually does</h3>



<p class="wp-block-paragraph">BlendShape Monitor is a diagnostic plugin for Autodesk Maya that attempts to replace that guesswork with direct visual feedback. Instead of relying on target names and numerical weights, the tool visualises the influence of individual blendshapes directly on the mesh using colour-coded vertex heatmaps. These heatmaps update in real time, including during animation playback, reflecting the current evaluated weight of each shape.</p>



<p class="wp-block-paragraph">The plugin reads the deformation data from existing blendShape nodes. It does not modify the rig, create new targets, or alter evaluation order. <a href="https://www.artstation.com/johnson-3d" title="">According to the developer,</a> its sole purpose is inspection. This distinction matters, as the tool is intended to be safe to use on production rigs without changing scene data.</p>



<p class="wp-block-paragraph">The visualisation highlights which vertices are affected by a given blendshape and to what extent. Areas with stronger deformation are shown in higher-intensity colours, making it immediately obvious whether a shape is localised, overlaps with others, or extends into unintended regions of the mesh.</p>
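<p class="wp-block-paragraph">The metric behind the colouring is not published, but the behaviour described above reduces to a per-vertex scalar field: each target contributes its weight-scaled displacement magnitude, and the summed result is normalised before being mapped to colours. A hedged NumPy sketch of that reading (names and normalisation are our assumptions, not the plugin's API):</p>

```python
import numpy as np

def influence_heatmap(base, targets, weights):
    """Per-vertex influence values in [0, 1]: the sum over all targets of
    |weight| times that target's per-vertex displacement magnitude,
    normalised by the peak value. Illustrative only."""
    base = np.asarray(base, float)             # (m, 3) neutral mesh positions
    heat = np.zeros(len(base))
    for tgt, w in zip(targets, weights):
        delta = np.asarray(tgt, float) - base           # per-vertex offsets
        heat += abs(w) * np.linalg.norm(delta, axis=1)  # scaled magnitudes
    peak = heat.max()
    return heat / peak if peak > 0 else heat
```

<p class="wp-block-paragraph">Recomputing such a field every frame with the currently evaluated weights would give exactly the kind of live, playback-synced feedback the plugin describes.</p>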



<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="650"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/r7iyfw7y86wrh2zr5dieq7wep3rw.png?resize=1200%2C650&quality=72&ssl=1"  alt="https://public-files.gumroad.com/r7iyfw7y86wrh2zr5dieq7wep3rw"  class="wp-image-250434" ></figure>



<h3 id="managing-clutter-in-dense-rigs" class="wp-block-heading">Managing clutter in dense rigs</h3>



<p class="wp-block-paragraph">One of the stated goals of BlendShape Monitor is to make large blendshape sets manageable. The plugin includes filtering options that hide inactive targets, allowing artists to focus only on shapes that currently contribute to the deformation. This is particularly relevant when blendshapes are driven indirectly by other systems, where weights may be non-zero even if no animator is directly adjusting them.</p>



<p class="wp-block-paragraph">The tool also allows individual targets to be soloed. When a shape is soloed, other blendshapes are temporarily hidden from the visualisation, making it easier to inspect its isolated effect. Global visibility toggles let you enable or disable the heatmap overlay without removing the plugin from the scene.</p>



<p class="wp-block-paragraph">Weight values are synced live with Maya’s evaluation during playback. This means the visualisation reflects the rig’s actual state at each frame, rather than a static snapshot. For troubleshooting animation issues that only appear in motion, this real-time aspect is central to the tool’s design.</p>



<figure class="wp-block-image"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="650"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/zs4eeoqmdtn2g10wrtgpw6urjuny.png?resize=1200%2C650&quality=72&ssl=1"  alt="https://public-files.gumroad.com/zs4eeoqmdtn2g10wrtgpw6urjuny"  class="wp-image-250435" ></figure>



<h3 id="origin-in-production-reality" class="wp-block-heading">Origin in production reality</h3>



<p class="wp-block-paragraph">BlendShape Monitor was developed by <a href="https://www.artstation.com/johnson-3d" title="">Johnson Lee</a>, an artist and tools developer whose work as Art Director at <a href="https://digitalproduction.com/tag/reallusion/" title="reallusion">Reallusion </a>directly informed the tool’s focus on large-scale facial rigs. According to the developer, the plugin emerged from internal production needs rather than as a speculative product idea.</p>



<p class="wp-block-paragraph">At Reallusion, facial rigs for characters such as those used in Character Creator workflows can exceed one hundred blendshapes. These include expression shapes based on FACS conventions, as well as numerous corrective shapes used to address secondary deformation. In such setups, identifying which shape is responsible for an artefact using Maya’s default UI is time-consuming and unreliable. The plugin reflects that context. It is narrowly scoped, avoids adding new rigging concepts, and addresses a specific bottleneck encountered when maintaining and debugging existing rigs rather than building new ones.</p>



<figure class="wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler"><div class="wp-block-embed__wrapper">
<div style="width: 640px;" class="wp-video"><video class="wp-video-shortcode" id="video-250415-4" width="640" height="360" preload="metadata" controls="controls"><source type="video/mp4" src="https://cdn.artstation.com/p/video_sources/003/057/514/luzhi-2026-01-20-20-08-05-972.mp4?_=4" /><a href="https://cdn.artstation.com/p/video_sources/003/057/514/luzhi-2026-01-20-20-08-05-972.mp4">https://cdn.artstation.com/p/video_sources/003/057/514/luzhi-2026-01-20-20-08-05-972.mp4</a></video></div>
</div></figure>



<h3 id="compatibility-and-scope" class="wp-block-heading">Compatibility and scope</h3>



<p class="wp-block-paragraph">BlendShape Monitor is compatible with Autodesk Maya 2022 and later. No support is claimed for earlier versions. The plugin runs inside Maya and does not require external dependencies according to the product listing. Installation and licensing are handled via Gumroad.</p>



<p class="wp-block-paragraph">The tool does not claim to support other DCC applications, nor does it attempt to abstract blendshape concepts across platforms. It is explicitly Maya-specific, relying on Maya’s native blendShape node behaviour. It is also not positioned as a teaching tool. Users are expected to understand blendshape workflows, vertex-level deformation, and Maya’s rig evaluation. The plugin provides visibility, not validation.</p>



<h3 id="pricing-and-licensing" class="wp-block-heading">Pricing and licensing</h3>



<p class="wp-block-paragraph">BlendShape Monitor is sold via Gumroad. At the time of writing, pricing is listed at approximately USD 20 for a freelance licence and USD 70 for a studio seat. The exact terms of these licences are defined on the Gumroad page and should be reviewed before purchase. No subscription model is indicated.</p>



<h3 id="what-it-does-not-solve" class="wp-block-heading">What it does not solve</h3>



<p class="wp-block-paragraph">While the plugin makes deformation issues easier to see, it does not resolve them automatically. Poor topology, conflicting targets, and incorrectly authored shapes still need to be fixed at the source by the artist. BlendShape Monitor does not rank shapes by quality, detect errors, or suggest corrections.</p>



<p class="wp-block-paragraph">It also does not address performance issues caused by excessive blendshape counts or inefficient evaluation. Its visual overlays are for inspection, not optimisation. Artists should be cautious when using any viewport overlay in heavy scenes and test performance impact in their own environments.</p>



<h3 id="production-considerations" class="wp-block-heading">Production considerations</h3>



<p class="wp-block-paragraph">As with any new tool, BlendShape Monitor should be evaluated under controlled conditions before being deployed to production. Its read-only approach reduces risk, but pipeline teams should still test compatibility with existing rigs, scripts, and viewport configurations. </p>



<p class="wp-block-paragraph">In the final analysis, BlendShape Monitor addresses a narrow but persistent pain point in Maya character rigging, offering visibility where the host application still relies heavily on lists and numbers. Whether it becomes a standard part of rig debugging workflows will depend on how well it holds up with the messier rigs found in long-running productions.</p>



<p class="wp-block-paragraph">// BlendShape Monitor Gumroad product page<br />// <a href="https://johnsonlee529.gumroad.com/l/bs-monitor-maya/">https://johnsonlee529.gumroad.com/l/bs-monitor-maya/</a></p>



<p class="wp-block-paragraph"></p><p>The post <a href="https://digitalproduction.com/2026/02/05/blendshape-monitor-puts-maya-rigs-under-a-heat-lamp/">BlendShape Monitor puts Maya rigs under a heat lamp</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		<enclosure url="https://cdn.artstation.com/p/video_sources/003/057/514/luzhi-2026-01-20-20-08-05-972.mp4" length="283006" type="video/mp4" />

		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/kcotnbva_ju-00-00-10-bs-monitor-the-ultimate-data-visualizer-for-maya-blendshape.png?fit=1920%2C1080&#038;quality=72&#038;ssl=1" length="185243" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/kcotnbva_ju-00-00-10-bs-monitor-the-ultimate-data-visualizer-for-maya-blendshape.png?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A digital workspace displaying a 3D model of a human figure highlighted with a colorful heatmap, illustrating areas of influence with gradients of blue, green, yellow, and red. A side panel lists various attributes with corresponding visual indicators.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2026/02/kcotnbva_ju-00-00-10-bs-monitor-the-ultimate-data-visualizer-for-maya-blendshape.png?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">250415</post-id>	</item>
		<item>
		<title>Shapekeys: The angle on your mesh</title>
		<link>https://digitalproduction.com/2025/11/28/shapekeys-the-angle-on-your-mesh/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Fri, 28 Nov 2025 05:00:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[3D modelling]]></category>
		<category><![CDATA[BeyondDev]]></category>
		<category><![CDATA[Blender]]></category>
		<category><![CDATA[Blender tools]]></category>
		<category><![CDATA[camera angle]]></category>
		<category><![CDATA[CamKeys]]></category>
		<category><![CDATA[CGI]]></category>
		<category><![CDATA[character rigging]]></category>
		<category><![CDATA[FOV]]></category>
		<category><![CDATA[realtime graphics]]></category>
		<category><![CDATA[ShapeKeys]]></category>
		<category><![CDATA[stylised animation]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=231708</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/kpd7gukg7lg-00-00-27-1-camkeys-for-blender-camera-angle-x-shapekeys.png?fit=1080%2C1080&quality=72&ssl=1" width="1080" height="1080" title="" alt="A 3D model of a blue hedgehog character, resembling Sonic, striking a peace sign gesture with his right hand. The background is a soft purple, and there are interface elements for a 3D modeling software displayed on the right side." /></div><div><p>CamKeys automates ShapeKeys by camera angle in Blender. Users say it mimics per-object FOV; the docs say it drives morphs.</p>
<p>The post <a href="https://digitalproduction.com/2025/11/28/shapekeys-the-angle-on-your-mesh/">Shapekeys: The angle on your mesh</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/kpd7gukg7lg-00-00-27-1-camkeys-for-blender-camera-angle-x-shapekeys.png?fit=1080%2C1080&quality=72&ssl=1" width="1080" height="1080" title="" alt="A 3D model of a blue hedgehog character, resembling Sonic, striking a peace sign gesture with his right hand. The background is a soft purple, and there are interface elements for a 3D modeling software displayed on the right side." /></div><div><p class="wp-block-paragraph">The <a href="https://www.blender.org">Blender</a> add-on <a href="https://beyonddev.gumroad.com/l/camkeys">CamKeys</a>, developed by <a href="https://beyonddev.gumroad.com">BeyondDev</a> (Tyler Walker), automates ShapeKeys according to camera or viewport angles. It enables an object to deform differently depending on how it is viewed, a method commonly used in stylised animation, anime-inspired facial rigs, or forced-perspective effects. Formerly known as CamShapeMatic, CamKeys 3.0 introduces a redesigned interface, multilingual documentation, and expanded animation baking tools.</p>


<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/KPd7gukG7lg?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<p class="wp-block-paragraph"></p>



<h3 id="shapekeys-with-eyes" class="wp-block-heading">ShapeKeys with eyes</h3>



<p class="wp-block-paragraph">CamKeys connects one or more ShapeKeys (<a href="https://digitalproduction.com/tag/blender/" title="Blender">Blender</a>’s morph targets) to the relative angle between a camera and an object. When the camera moves, the mesh interpolates between ShapeKeys based on the configured angular falloff. Each “CamKey” entry defines a camera angle, target ShapeKey, and blending range. Updates occur in real time during playback, scrubbing, or rendering. For rigged characters, the effect can also be driven by bone rotation instead of the entire object’s orientation, allowing deformation of specific body parts such as heads or torsos.</p>
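<p class="wp-block-paragraph">The angular blending can be pictured as a weight that peaks at the captured camera angle and fades to zero at the edge of the configured range. The linear falloff below is an assumption chosen for illustration; the add-on&#8217;s actual falloff curve is not documented here:</p>

```python
# Hypothetical sketch of one CamKey entry: weight 1.0 at the captured angle,
# fading linearly to 0.0 at the edge of the configured falloff range.
def camkey_weight(view_angle_deg, key_angle_deg, falloff_deg):
    # Shortest angular distance between the two angles, wrapped to [0, 180].
    delta = abs((view_angle_deg - key_angle_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - delta / falloff_deg)

print(camkey_weight(90.0, 90.0, 45.0))   # 1.0  (camera exactly at the key angle)
print(camkey_weight(112.5, 90.0, 45.0))  # 0.5  (halfway through the falloff)
print(camkey_weight(180.0, 90.0, 45.0))  # 0.0  (outside the blending range)
```

<p class="wp-block-paragraph">Wrapping the angular difference matters: a camera at 350&#176; is only 20&#176; away from a key captured at 10&#176;, not 340&#176;.</p>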



<figure class="wp-block-image"><img  decoding="async"  src="https://public-files.gumroad.com/9ze4d4pjg85ojsm8f728uj5rnefa"  alt="https://public-files.gumroad.com/9ze4d4pjg85ojsm8f728uj5rnefa" ></figure>



<figure class="wp-block-image"><img  decoding="async"  src="https://public-files.gumroad.com/mqps782dog361fj5kdvyda1bfag2"  alt="https://public-files.gumroad.com/mqps782dog361fj5kdvyda1bfag2" ></figure>



<h3 id="whats-new-in-version-3-0" class="wp-block-heading">What’s new in version 3.0</h3>



<p class="wp-block-paragraph">Version 3.0 adds refined bone tracking, improved multi-object handling, and support for linked assets. Users can employ CamKeys to imitate per-object field-of-view (FOV) shifts, producing perspective distortion variations between characters or props. However, the official <a>BeyondDev documentation</a> does not describe any FOV modification feature. Technically, CamKeys does not alter Blender&#8217;s projection system. It drives ShapeKeys by camera angle, which can visually <em>approximate</em> FOV changes but does not constitute a real optical override. Artists may creatively repurpose it for such effects, but this behaviour is not documented or officially supported.</p>



<h3 id="inside-the-interface" class="wp-block-heading">Inside the interface</h3>



<p class="wp-block-paragraph">The CamKeys panel is divided into three areas:</p>



<ul class="wp-block-list">
<li><strong>Select Camera</strong> – Defines the active camera or viewport. Baking requires an actual camera.</li>



<li><strong>Mesh Objects</strong> – Lists meshes affected by CamKeys, each optionally tied to a bone.</li>



<li><strong>Camera Angles & ShapeKeys</strong> – Captures current view angles, assigns ShapeKeys, sets blending width, enables or disables entries, and bakes driven animation curves into keyframes.</li>
</ul>



<p class="wp-block-paragraph">All parameters update live, and tooltips in multiple languages (English, Japanese, Spanish) are provided throughout the interface.</p>



<h3 id="pricing-and-licensing" class="wp-block-heading">Pricing and licensing</h3>



<p class="wp-block-paragraph">CamKeys is sold via <a href="https://beyonddev.gumroad.com/l/camkeys">Gumroad</a> with three pricing tiers: Indie (1 user) at 19.99 USD, Studio (3–5 users) at 69.99 USD, and Studio (6+ users) at 199.99 USD. Purchases include lifetime updates. Redistribution or modification of the add-on is prohibited, but commercial use is permitted with credit to the creator.</p>



<figure class="wp-block-image"><img  decoding="async"  src="https://public-files.gumroad.com/s0g3sqfas98o42k5j974po4h9z3x"  alt="https://public-files.gumroad.com/s0g3sqfas98o42k5j974po4h9z3x" ></figure>



<h3 id="typical-use-cases" class="wp-block-heading">Typical use cases</h3>



<p class="wp-block-paragraph">According to the official documentation, CamKeys is suited for:</p>



<ul class="wp-block-list">
<li>2D/3D hybrid animation and stylised facial deformation</li>



<li>Perspective “cheat” effects without rig scaling</li>



<li>Real-time mesh updates during playback</li>



<li>Automatic deformation across camera switches</li>



<li>Baking ShapeKey-driven animation for export</li>
</ul>



<p class="wp-block-paragraph">BeyondDev’s examples show anime-style characters that morph smoothly between front and side views, preserving proportions through angular interpolation.</p>



<h3 id="production-caveats" class="wp-block-heading">Production caveats</h3>



<p class="wp-block-paragraph">No independent tests have yet confirmed how CamKeys behaves with advanced setups such as depth-of-field, motion blur, or multi-camera rigs. Blender’s dependency graph can behave unpredictably when properties depend on camera-driven transformations. While users label CamKeys 3.0 as enabling “per-object FOV,” BeyondDev’s own documentation explicitly describes ShapeKey automation by camera angle only. This difference is important when planning for camera-matched compositing or depth-based rendering workflows. Artists should run small validation tests before deploying the add-on in production pipelines or automated rigging setups.</p>



<h3 id="final-frame" class="wp-block-heading">Final frame</h3>



<p class="wp-block-paragraph">CamKeys remains a technically clear, niche utility: it automates mesh deformation relative to viewing direction, offering stylised animators precise control over how geometry reads from different angles. Whether it truly supports “per-object FOV” depends entirely on user interpretation, not on the add-on’s code.</p>



<p class="wp-block-paragraph">For those seeking controlled, angle-driven morphs without complex rig logic, CamKeys provides a practical and well-documented solution that Blender still lacks natively. As always: test before production.</p><p>The post <a href="https://digitalproduction.com/2025/11/28/shapekeys-the-angle-on-your-mesh/">Shapekeys: The angle on your mesh</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/kpd7gukg7lg-00-00-27-1-camkeys-for-blender-camera-angle-x-shapekeys.png?fit=1080%2C1080&#038;quality=72&#038;ssl=1" length="268761" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/kpd7gukg7lg-00-00-27-1-camkeys-for-blender-camera-angle-x-shapekeys.png?fit=1080%2C1080&#038;quality=72&#038;ssl=1" width="1080" height="1080" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A 3D model of a blue hedgehog character, resembling Sonic, striking a peace sign gesture with his right hand. The background is a soft purple, and there are interface elements for a 3D modeling software displayed on the right side.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/kpd7gukg7lg-00-00-27-1-camkeys-for-blender-camera-angle-x-shapekeys.png?fit=1080%2C1080&#038;quality=72&#038;ssl=1" width="1080" height="1080" />
<post-id xmlns="com-wordpress:feed-additions:1">231708</post-id>	</item>
		<item>
		<title>Video Mocap: Reallusion’s AI Gets Moving</title>
		<link>https://digitalproduction.com/2025/11/03/video-mocap-reallusions-ai-gets-moving/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Mon, 03 Nov 2025 10:34:45 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Reallusion]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[AI motion capture]]></category>
		<category><![CDATA[Animation]]></category>
		<category><![CDATA[character animation]]></category>
		<category><![CDATA[character rigging]]></category>
		<category><![CDATA[iclone]]></category>
		<category><![CDATA[mocap service]]></category>
		<category><![CDATA[pay-per-use]]></category>
		<category><![CDATA[QuickMagic]]></category>
		<category><![CDATA[real-time animation]]></category>
		<category><![CDATA[reallusion]]></category>
		<category><![CDATA[realtime graphics]]></category>
		<category><![CDATA[VFX]]></category>
		<category><![CDATA[video-based motion capture]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=219412</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-00-07-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="A split-screen preview of a digital animation software displaying two animated female figures. One figure is in grey, wearing a fitted top and pants, and the other is in a white top and grey pants, positioned in front of an orange curtain background." /></div><div><p>Reallusion adds online AI motion capture to iClone: Video Mocap turns footage into editable animation for $2.50 per clip.</p>
<p>The post <a href="https://digitalproduction.com/2025/11/03/video-mocap-reallusions-ai-gets-moving/">Video Mocap: Reallusion’s AI Gets Moving</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-00-07-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="A split-screen preview of a digital animation software displaying two animated female figures. One figure is in grey, wearing a fitted top and pants, and the other is in a white top and grey pants, positioned in front of an orange curtain background." /></div><div><p class="wp-block-paragraph"><a href="https://www.reallusion.com">Reallusion</a> has released <a href="https://www.reallusion.com/iclone/video-mocap" title="">Video Mocap</a>, an online AI motion-capture service integrated into <a href="https://www.reallusion.com/iclone/">iClone</a>. The feature was developed with AI mocap firm <a href="https://quickmagic.ai/">QuickMagic</a> and arrives alongside iClone 8.63, a minor update focusing on plugin support and FBX export fixes. Video Mocap extracts actor movement from uploaded footage and generates editable animation directly inside iClone. Unlike standalone online services that export FBX data, the new plugin connects the AI pipeline to iClone’s animation system, allowing users to clean up artefacts such as foot slippage using iClone’s native tools.</p>
<span hidden class="__iawmlf-post-loop-links" data-iawmlf-links="[{&quot;id&quot;:378,&quot;href&quot;:&quot;https:\/\/www.reallusion.com&quot;,&quot;archived_href&quot;:&quot;http:\/\/web-wp.archive.org\/web\/20251218152349\/https:\/\/www.reallusion.com\/&quot;,&quot;redirect_href&quot;:&quot;&quot;,&quot;checks&quot;:[{&quot;date&quot;:&quot;2025-12-27 13:52:31&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2025-12-30 14:22:09&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-03 01:50:39&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-07 12:13:22&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-11 12:50:51&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-14 14:36:53&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-18 19:04:53&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-21 21:29:29&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-26 08:26:15&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-01-29 12:33:34&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-02 00:42:32&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-05 09:54:51&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-10 19:00:04&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-13 20:24:58&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-19 08:20:45&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-22 09:53:59&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-25 11:33:44&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-02-28 15:27:12&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-03-03 20:09:38&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-03-06 20:59:41&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-03-09 21:14:04&quot;,&quot;http_code&quot;:206},{&quot;date&quot;:&quot;2026-03-12 
"></span>


<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/-6G3FbbQ7Uw?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<h3 id="capture-options-and-limitations" class="wp-block-heading">Capture options and limitations</h3>



<p class="wp-block-paragraph">The service offers full-body or upper-body capture, including finger tracking. Facial motion capture is not supported and remains a separate paid add-on in the iClone ecosystem. Each motion-generation task processes up to 60 seconds of video for a single character and costs 250 DA Points, equivalent to $2.50. DA Points are Reallusion’s digital currency, sold with a minimum purchase of $10. Video processing takes place entirely online, and users can submit multiple clips in a batch. Up to ten motion files can be generated per batch if a video contains several performers.</p>
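<p class="wp-block-paragraph">The figures above imply a simple rate of $0.01 per DA Point. A minimal sketch of the cost arithmetic (the helper name is illustrative, not part of any Reallusion API):</p>

```python
# Pricing arithmetic derived from the stated figures:
# 250 DA Points per task, $2.50 per task, $10 minimum purchase.
POINTS_PER_TASK = 250
POINTS_PER_USD = 100  # $2.50 buys 250 points, i.e. 100 points per dollar

def tasks_per_purchase(usd: float) -> int:
    """Whole motion-generation tasks covered by a DA Point purchase."""
    points = round(usd * POINTS_PER_USD)
    return points // POINTS_PER_TASK

# The $10 minimum purchase buys 1,000 points, enough for 4 tasks.
```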



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-01-01-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?resize=1200%2C675&quality=72&ssl=1"  alt="Three stylized figures in a dark room simulate a baseball pitching and catching scenario. A smaller inset image shows a real baseball player in a red jersey crouching behind the plate, ready for the pitch."  class="wp-image-219419" ></figure>



<h3 id="compatibility-and-pricing" class="wp-block-heading">Compatibility and pricing</h3>



<p class="wp-block-paragraph">The Video Mocap plugin is available as a free download for iClone 8.63 on Windows 10 or later. iClone itself remains a paid application with a $599 perpetual license. Video Mocap operates on a pay-per-use model. Each clip must be trimmed to a maximum of 60 seconds, though source videos up to 15 minutes can be uploaded. Supported formats include .mp4, .mov, .avi, .mkv, and ten others. Reallusion recommends a minimum resolution of 720p, while 4K and 8K files may reduce performance.</p>
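<p class="wp-block-paragraph">Source clips can be pre-trimmed to the 60-second task limit and scaled toward the recommended 720p before upload. A minimal sketch, assuming ffmpeg is installed; the helper only builds the argument list, and its name is illustrative:</p>

```python
# Builds an ffmpeg argument list that trims a source video to the
# 60-second task limit and scales it to 720p height, matching the
# limits described above. Run the result via subprocess.run().
from typing import List

def build_trim_cmd(src: str, dst: str, start: float = 0.0,
                   max_seconds: int = 60) -> List[str]:
    return [
        "ffmpeg",
        "-ss", str(start),       # seek to the section worth keeping
        "-i", src,
        "-t", str(max_seconds),  # cap the clip at the 60 s task limit
        "-vf", "scale=-2:720",   # 720p height, width kept even
        "-c:a", "copy",          # leave the audio stream untouched
        dst,
    ]

# Example: subprocess.run(build_trim_cmd("take.mp4", "take_trimmed.mp4"))
```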



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-01-33-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?resize=1200%2C675&quality=72&ssl=1"  alt="A close-up of a female tennis player&#039;s hand gripping a red tennis racket, with an interface showing a character model and animation tools on the right. The background features another player preparing to hit the ball on a tennis court."  class="wp-image-219418" ></figure>



<h3 id="recording-recommendations" class="wp-block-heading">Recording recommendations</h3>



<p class="wp-block-paragraph">Reallusion advises recording footage with a static, eye-level camera, ensuring the performer’s full body or upper body remains visible, depending on the capture type. AI tracking may fail if the subject leaves the frame, wears clothing similar to the background, or performs rapid or occluded motions such as flips or combat. The firm also recommends removing empty video sections before upload to improve motion-tracking accuracy. These and other technical notes are detailed in the <a>iClone Video Mocap Online Manual</a>.</p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-01-42-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?resize=1200%2C675&quality=72&ssl=1"  alt="Three character models standing side by side in a 3D design software interface. The first model is a stylized female figure, followed by a young girl wearing casual clothes, and a cartoonish penguin character on the right. The software&#039;s timeline and settings panel are visible at the bottom."  class="wp-image-219417" ></figure>



<h3 id="context-ai-mocap-enters-production-tools" class="wp-block-heading">Context: AI mocap enters production tools</h3>



<p class="wp-block-paragraph">Video Mocap joins a growing field of AI-based motion capture systems that interpret human movement from standard video footage. Reallusion’s implementation distinguishes itself by integrating this functionality directly into a production-ready 3D animation platform, avoiding intermediate export steps.</p>



<p class="wp-block-paragraph">As with any AI-driven system, actual performance and accuracy will depend on recording conditions, motion complexity, and camera quality. Artists should test the service with their own material before integrating it into production pipelines.</p>



<p>The post <a href="https://digitalproduction.com/2025/11/03/video-mocap-reallusions-ai-gets-moving/">Video Mocap: Reallusion’s AI Gets Moving</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/qualityjellyfish45275761d0/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-00-07-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?fit=1920%2C1080&#038;quality=72&#038;ssl=1" length="372427" type="image/png" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-00-07-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" medium="image" type="image/png">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A split-screen preview of a digital animation software displaying two animated female figures. One figure is in grey, wearing a fitted top and pants, and the other is in a white top and grey pants, positioned in front of an orange curtain background.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/11/6G3FbbQ7Uw-00-00-07-iClone-Video-Mocap_-AI-Mocap-and-Motion-Editing-in-One-Streamlined-Workflow-_-iClone.png?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">219412</post-id>	</item>
	</channel>
</rss>
