<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="https://digitalproduction.com/wp-content/plugins/xslt/public/template.xsl"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:rssFeedStyles="http://www.wordpress.org/ns/xslt#"
>

<channel>
	<title>video upscaling - DIGITAL PRODUCTION</title>
	<atom:link href="https://digitalproduction.com/tag/video-upscaling/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitalproduction.com</link>
	<description>Magazine for Digital Media Production</description>
	<lastBuildDate>Tue, 09 Dec 2025 11:30:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">236729828</site>	<item>
		<title>RE:Vision Effects ups the ante with REZup V2 (and yes, it&#8217;s 4× sharper)</title>
		<link>https://digitalproduction.com/2025/09/01/revision-effects-ups-the-ante-with-rezup-v2-and-yes-its-4x-sharper/</link>
		
		<dc:creator><![CDATA[Jürgen Firsching]]></dc:creator>
		<pubDate>Mon, 01 Sep 2025 05:09:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[Digital Production]]></category>
		<category><![CDATA[DNN 2]]></category>
		<category><![CDATA[DNN 3]]></category>
		<category><![CDATA[machine-learning upscaler]]></category>
		<category><![CDATA[post-production]]></category>
		<category><![CDATA[RE:Vision FX]]></category>
		<category><![CDATA[REZup V2]]></category>
		<category><![CDATA[VFX plug-in]]></category>
		<category><![CDATA[video upscaling]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=197401</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/08/49-rezup-demo-2x-youtube-0-0-25.jpeg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="" alt="A close-up profile of a woman on the left and a digitally enhanced image on the right, showcasing the difference in clarity and detail. A caption below notes the comparison of REZup zoom on the right and ordinary scaling on the left." /></div><div><p>REZup V2 arrives with two new GPU-friendly ML models, the general-purpose DNN 3 and the face-focused DNN 2, and adds temporal denoising for cleaner 4× upscaling.</p>
<p>The post <a href="https://digitalproduction.com/2025/09/01/revision-effects-ups-the-ante-with-rezup-v2-and-yes-its-4x-sharper/">RE:Vision Effects ups the ante with REZup V2 (and yes, it’s 4× sharper)</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/juergenfirsching/">Jürgen Firsching</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/08/49-rezup-demo-2x-youtube-0-0-25.jpeg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="" alt="A close-up profile of a woman on the left and a digitally enhanced image on the right, showcasing the difference in clarity and detail. A caption below notes the comparison of REZup zoom on the right and ordinary scaling on the left." /></div><div>
<p class="wp-block-paragraph"><a href="https://revisionfx.com/news/2025/08/04/rezup-v2-breaks-the-resolution-barrier/" title="">RE:Vision Effects</a> has released <strong><a href="https://revisionfx.com/products/rezup/" title="">REZup Version 2</a></strong>, its updated video and animation upscaling plugin. The launch date is 4 August 2025, and the announcement emphasises improved upscaling with two newly integrated machine-learning models. REZup remains structured as two plug-ins—Resize (for upscaling or zooming) and Enhance (for refining image quality)—now enhanced with fresh capabilities.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/3MVma9Gmr6c?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent&listType=playlist&list=PLJZE0COAfWUUSAlVjBB91n6BBXt7oOGvA" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<h3 id="built-in-temporal-denoising-and-colour-handling" class="wp-block-heading">Built-In Temporal Denoising and Colour Handling</h3>



<p class="wp-block-paragraph"><a href="https://revisionfx.com/news/2025/08/04/rezup-v2-breaks-the-resolution-barrier/" title="">REZup V2</a> adds a built-in <strong>temporal denoising</strong> option to suppress noise during upscaling. It also refines <strong>colour-space handling</strong>, making transitions and grading more reliable across varied footage.</p>



<h3 id="two-ml-models-one-for-everything-one-just-for-faces" class="wp-block-heading">Two ML Models — One for Everything, One Just for Faces</h3>



<p class="wp-block-paragraph">The update introduces two new machine-learning models:</p>



<ul class="wp-block-list">
<li><strong>DNN 3</strong> is a general-purpose model suitable for wide-ranging content, from live-action to rendered 3D animation.</li>



<li><strong>DNN 2</strong> is optimised for faces and close-ups, promising detail retention such as fine hair strands, and even performs well on anime-style footage.</li>
</ul>



<p class="wp-block-paragraph">The company highlights that the models are “particularly adept” at 4× upscaling, for example transforming HD to 8K. Other scaling factors are shown in RE:Vision Effects’ demo videos.</p>



<h3 id="performance-and-compatibility" class="wp-block-heading">Performance and Compatibility</h3>



<p class="wp-block-paragraph">REZup V2 supports AMD, NVIDIA, and Apple Silicon GPUs, and is expected to work with Intel Arc B60 discrete GPUs—though Intel users are invited to report test results. The company notes that upscaling from 1080p to 8K demands at least 8 GB VRAM; GPUs with more memory (12 GB to 96 GB) will handle higher resolutions more smoothly.</p>
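To put those resolutions in perspective, here is a minimal Python sketch of the pixel arithmetic behind a 4× upscale. The resolutions are the examples from the article; the 8 GB VRAM minimum is the vendor's stated figure, not something this snippet computes.

```python
# Pixel-count arithmetic for a 4x linear upscale, e.g. 1080p -> 8K UHD.
# Note: the 8 GB VRAM minimum quoted above is RE:Vision's stated figure;
# actual memory use depends on the model and the host application.

def upscale(width: int, height: int, factor: int) -> tuple[int, int]:
    """Return the output resolution for a linear scale factor."""
    return width * factor, height * factor

src = (1920, 1080)           # Full HD
dst = upscale(*src, 4)       # -> (7680, 4320), i.e. 8K UHD
pixel_ratio = (dst[0] * dst[1]) // (src[0] * src[1])

print(f"{src} -> {dst}: {pixel_ratio}x the pixels")
```

A 4× linear scale thus means 16× the pixel data per frame, which is why VRAM headroom matters more than the "4×" label suggests.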



<h3 id="pricing-and-update-strategy" class="wp-block-heading">Pricing and Update Strategy</h3>



<p class="wp-block-paragraph">REZup V2 is priced at US $189.95, with an upgrade path from Version 1 available for US $39.99. The plugin is also included in RE:Vision Effects’ Effections bundle.</p>



<h3 id="host-application-support" class="wp-block-heading">Host Application Support</h3>



<p class="wp-block-paragraph">The update is compatible with a wide range of host applications, including <a href="https://www.adobe.com/products/aftereffects.html?utm_source=chatgpt.com">Adobe After Effects</a> and <a href="https://www.adobe.com/products/premiere.html?utm_source=chatgpt.com">Premiere</a>, <a>Resolve</a> and <a>Fusion</a>, <a href="https://www.apple.com/final-cut-pro/?utm_source=chatgpt.com">Apple Final Cut Pro</a>, <a>Assimilate Scratch</a>, <a>Nuke</a>, HSA-Art Diamant and Film Buster, <a>Flame</a>, <a>Boris FX Silhouette</a>, and other OpenFX-based hosts. A note indicates DNN 2 and 3 are not yet available on Flame Linux.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/pferw1kR9rg?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent&listType=playlist&list=PLJZE0COAfWUUSAlVjBB91n6BBXt7oOGvA" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<h3 id="combining-with-twixtor" class="wp-block-heading">Combining with Twixtor</h3>



<p class="wp-block-paragraph">When used alongside <a>Twixtor</a>, REZup Resize enables reframing VR footage or remastering high-resolution content with no quality loss. </p>



<p class="wp-block-paragraph"><strong>Note to readers:</strong> Confirm VRAM, host compatibility, and scaling results in your specific pipeline before adopting REZup V2 for critical deliverables.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/KiQQgDkErOQ?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent&listType=playlist&list=PLJZE0COAfWUUSAlVjBB91n6BBXt7oOGvA" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure><p>The post <a href="https://digitalproduction.com/2025/09/01/revision-effects-ups-the-ante-with-rezup-v2-and-yes-its-4x-sharper/">RE:Vision Effects ups the ante with REZup V2 (and yes, it’s 4× sharper)</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/juergenfirsching/">Jürgen Firsching</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/08/49-rezup-demo-2x-youtube-0-0-25.jpeg?fit=1920%2C1080&#038;quality=80&#038;ssl=1" length="49972" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/08/49-rezup-demo-2x-youtube-0-0-25.jpeg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A close-up profile of a woman on the left and a digitally enhanced image on the right, showcasing the difference in clarity and detail. A caption below notes the comparison of REZup zoom on the right and ordinary scaling on the left.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/08/49-rezup-demo-2x-youtube-0-0-25.jpeg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">197401</post-id>	</item>
		<item>
		<title>Out on its own: Twixtor Standalone</title>
		<link>https://digitalproduction.com/2025/02/17/out-on-its-own-twixtor-standalone/</link>
		
		<dc:creator><![CDATA[Uli Plank]]></dc:creator>
		<pubDate>Mon, 17 Feb 2025 13:00:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[AI slow motion]]></category>
		<category><![CDATA[frame interpolation]]></category>
		<category><![CDATA[Fusion plugin]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[NLE compatibility]]></category>
		<category><![CDATA[optical flow]]></category>
		<category><![CDATA[post-production]]></category>
		<category><![CDATA[RE:Vision FX]]></category>
		<category><![CDATA[slow motion]]></category>
		<category><![CDATA[Speed Warp]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Topaz Labs]]></category>
		<category><![CDATA[Twixtor]]></category>
		<category><![CDATA[video rendering]]></category>
		<category><![CDATA[video upscaling]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=159898</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Blume_Twixtor_Still.jpg?fit=1200%2C505&quality=80&ssl=1" width="1200" height="505" title="" alt="A vine with vibrant pink bougainvillea flowers climbing along a trellis in a lush garden setting." /></div><div><p>Twixtor standalone offers an easy-to-use solution for slow-motion video generation using AI, but it is slower than competitors. Topaz Video AI excels in speed and quality, making it the recommended choice, even if it doesn't come cheap. Which one is right for you?</p>
<p>The post <a href="https://digitalproduction.com/2025/02/17/out-on-its-own-twixtor-standalone/">Out on its own: Twixtor Standalone</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Blume_Twixtor_Still.jpg?fit=1200%2C505&quality=80&ssl=1" width="1200" height="505" title="" alt="A vine with vibrant pink bougainvillea flowers climbing along a trellis in a lush garden setting." /></div><div>
<p class="wp-block-paragraph">Just like Speed Warp in DaVinci Resolve (DR for short) and Topaz Video AI (TVAI), Twixtor now uses neural networks, a.k.a. machine learning, to generate additional frames for slow motion (slo-mo) in post-production. We already reviewed the early beta of its version 8 <a href="https://digitalproduction.com/2024/04/12/the-discovery-of-slowness/">last year</a> (the link includes examples of the results). But now the software is not only final, it is also available as a standalone version for Windows and macOS. Any big changes?</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor_alone.png?quality=72&ssl=1"><img data-recalc-dims="1"  fetchpriority="high"  decoding="async"  width="1073"  height="840"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor_alone.png?resize=1073%2C840&quality=72&ssl=1"  alt=""  class="wp-image-160047" ></a><figcaption class="wp-element-caption">Compared to its plug-in version Twixtor standalone is as easy to use as it gets.</figcaption></figure>
</div>


<h4 id="features-of-the-standalone-version" class="wp-block-heading">Features of the Standalone Version</h4>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor-in-Fusion.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="208" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor-in-Fusion.jpg?resize=1200%2C208&quality=80&ssl=1"  alt=""  class="wp-image-159962" ></a><figcaption class="wp-element-caption">Twixtor as an OFX plug-in needs Fusion for slo-mo in DaVinci Resolve.</figcaption></figure>



<p class="wp-block-paragraph">The standalone version of Twixtor is easier for beginners to use than the plug-in: you don’t need any workarounds to extend the clips for slo-mo (see my other <a href="https://digitalproduction.com/2024/04/12/the-discovery-of-slowness/">article</a>). You can choose between a faster and a more precise algorithm, and between two modes of control: a simple speed change or speed ramping. The latter offers quite intuitive controls you may wish to have in other software, but setting keyframes was not yet completely bug-free. Choices for the treatment of audio are self-explanatory, as is the speed slider. You can define the range of frames to be treated and choose the output frame rate independently of the input rate. Slightly confusing is the activation of the GPU (normally the right choice), since clicking on the field makes it dark; is that on or off? By the way, the installer looks very “Windows” on a Mac, but it works flawlessly.</p>



<p class="wp-block-paragraph">For input, all flavours of AVC, HEVC, and ProRes are accepted, but not DNxHR or Cineform, which are also high-quality intermediate codecs. These are needed by users of DR under Windows, who can’t export ProRes. Nor is the MXF wrapper accepted, which is an important alternative to MOV. Unfortunately, you don’t even get an explanation when you try to use one of these; the picture just stays black. In the end, the plug-in might be the better choice for professional users, and for log, HDR, or RAW footage you should also resort to it. Then again, RE:Vision FX gives you a license for the standalone together with the plug-in version.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Speed_Ramp.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="495"  height="290"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Speed_Ramp.png?resize=495%2C290&quality=72&ssl=1"  alt=""  class="wp-image-160102" ></a><figcaption class="wp-element-caption">Speed ramps can be defined with ease.</figcaption></figure>
</div>


<p class="wp-block-paragraph">The most serious limitation lies in the output codecs: there are only HEVC and H.264, and both come in 8 bit with 4:2:0 chroma subsampling only. This isn’t mentioned anywhere; you’ll need a tool like MediaInfo (<a href="https://mediaarea.net/de/MediaInfo">free</a>) to find it out. Not a perfect choice if you want to treat the results further in any NLE with colour grading or filtering, though at least you can raise the bitrate for that purpose. The fact that the files are wrapped as .m4v doesn’t make things better; some NLEs only accepted them after a re-wrap into .mov. And they contain no timecode, which can wreak havoc if you have to move files after editing or need to send them to collaborators. Definitely add TC while re-wrapping for such situations (with a tool such as <a href="https://www.videotoolshed.com/handcrafted-timecode-tools/qtchange/">QTchange</a>).</p>



<h4 id="setup-for-the-test" class="wp-block-heading">Setup for the Test</h4>



<p class="wp-block-paragraph">Our earlier tests suggested that higher image quality, more fps, and shorter exposure times yield better results with any of these tools. We didn’t want to make it too easy for the candidates, so this time we used some smartphone footage: shot in HD only, at 30 fps, and under available light. Of course, we chose a scene with intense activity, in this case dancers and drummers with criss-crossing movements. Get the source file <a href="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/AHYzoVvUmP6iNuT_UWwjg6Y/Drums_original.mov?rlkey=t870zx78wjsgrilzphtidk55r&dl=0">here</a> for your own tests. This is the kind of material where trying to get good slo-mo in post usually fails. But then, most modern cameras can shoot good slo-mo natively if you have thought of it beforehand.</p>



<p class="wp-block-paragraph">As a plug-in, Twixtor Pro can be helped towards near-perfect results with masks and tracking points, but it’s not exactly easy to use in DaVinci Resolve or other hosts, as explained in my older article. The standalone version is much simplified and straightforward, which can be very helpful for beginners. So we likewise used the standard preset for Speed Warp Better and the model TVAI automatically offers for slo-mo without scaling, which is currently Apollo. Fine-tuning can yield even better results from both. TVAI offers only 4×, 6×, or 8× slo-mo, so we decided on 6×, since 8× seemed a bit too adventurous for such footage. We ran the tests on an M1 Pro MacBook with 32 GB under Sonoma 14.7.2.</p>



<h4 id="test-results" class="wp-block-heading">Test Results</h4>



<p class="wp-block-paragraph">Our test results at the highest quality settings were close; you need to watch carefully to see the differences. All three struggle with small objects like the drumsticks, which flicker on and off, and all three can show some pushing and pulling of the background or of criss-crossing movement, in particular where there is motion blur. For this less-than-optimal source footage, the results of all three are still impressive. But personally, I’d give TVAI the crown here, in particular for the least background distortion. The faster mode in Twixtor reverts to optical flow, which would also be an option in DR; both are considerably faster than the AI models and can suffice for less critical footage. TVAI also offers a faster alternative to the Apollo model.</p>



<p class="wp-block-paragraph">Of course, speed also matters. <strong>TVAI’s Apollo</strong> model needed only 2:19 with nothing else activated, and the time is predicted quite accurately (the result is <a href="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/APhgx_Ieow_oq9etZC5KAe4/Drums_TVAI_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&dl=0">here</a>). That is only 3.5 times the playback time of the final sample. This applies once the model has been downloaded for the first time, which can take quite a while depending on your internet connection and their servers. Its only shortcoming: you can’t do speed ramping; it works with fixed ratios. But it can generate its output in high-quality codecs. Blackmagic’s <strong>Speed Warp Better</strong> is considerably slower at 7:11, a factor of 11, but the visual results are close (see <a href="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/AF65sxe8JVbjsUU3w4qSou4/Drums_Speed_Warp_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&dl=0">here</a>). Being part of DR, it handles speed ramps, albeit a bit clunkily, and offers a broad choice of codecs. <strong>Twixtor</strong> was the slowest at 12:15, a factor of 18.8 (the <a href="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/ALUyLib0K6mXmsnDmgXVZRo/Drums_Twixtor_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&dl=0">result</a>). All of them predict the time needed pretty well, and all three fully loaded the GPU cores of our humble laptop while rendering, so you can expect considerably better speeds with more cores on a Mac or with any strong GPU on a PC.</p>
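For readers who want to sanity-check these factors, here is a small Python sketch. The clip duration is an assumption back-calculated from the article’s own ratios, not a measured value, so the printed factors are approximate.

```python
# Render-time factors relative to clip playback time, using the mm:ss
# timings quoted above. OUTPUT_SECONDS is an assumption back-calculated
# from the article's ratio figures, not a measured duration.

def to_seconds(mmss: str) -> int:
    """Convert a 'm:ss' timing string to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

OUTPUT_SECONDS = 39  # assumed playback time of the 6x slo-mo sample

render_times = {
    "TVAI Apollo": "2:19",
    "Speed Warp Better": "7:11",
    "Twixtor": "12:15",
}

for name, mmss in render_times.items():
    factor = to_seconds(mmss) / OUTPUT_SECONDS
    print(f"{name}: {to_seconds(mmss)} s, about {factor:.1f}x playback time")
```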



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/TVAI.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="714"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/TVAI.png?resize=1200%2C714&quality=72&ssl=1"  alt=""  class="wp-image-160068" ></a><figcaption class="wp-element-caption">Topaz Video AI has come a long way from its early incarnations, it’s both fast and good these days.</figcaption></figure>



<p class="wp-block-paragraph">We also tested some very high-quality footage in UHD at 50 fps; you can see a frame of those bougainvilleas in strong wind in the header above. But the results of the best algorithm in each candidate were so close that only speed would matter. Considering quality and speed, TVAI’s Apollo model was the best; its optimisation for speed in particular is truly impressive compared to early versions. But it’s also quite costly at 300 US$ initially, being subscription-based. The deal is fair: you are only excluded from further updates if you stop paying, but the software keeps working. And it can also do excellent de-interlacing and upscaling (see my article <a href="https://digitalproduction.com/2023/10/15/topaz-video-ai-revisited-version-4/">here</a>). Speed Warp is a Studio feature in DR, which costs the same 300 US$ but also includes dozens of other valuable features, and to date there has never been a charge for updates. As in our earlier test, Twixtor is still the slowest, even now that it’s out of beta. But at the current price of just short of 100 US$ for a permanent license, it is also the cheapest, while you pay as much for TVAI every year.</p>



<h4 id="recommendation" class="wp-block-heading">Recommendation</h4>



<p class="wp-block-paragraph">Twixtor standalone is easy to operate and the best bet for synthetic slo-mo if you don’t own DaVinci Resolve Studio anyway. But it has its shortcomings and is the slowest of the bunch. If you own DR Studio, its built-in Speed Warp will suffice for the occasional slo-mo, even if it isn’t much faster. But TVAI wins the crown here, and not only for quality and speed. If you are planning for slo-mo, you can shoot at a higher frame rate in most cameras at a lower resolution, and then let TVAI do the additional slo-mo plus upscaling. Even if this approach takes more time than slo-mo alone, it’ll still be faster than the others and can yield excellent results. But it comes at a hefty price tag.</p><p>The post <a href="https://digitalproduction.com/2025/02/17/out-on-its-own-twixtor-standalone/">Out on its own: Twixtor Standalone</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></content:encoded>
					
		
		<enclosure url="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/AHYzoVvUmP6iNuT_UWwjg6Y/Drums_original.mov?rlkey=t870zx78wjsgrilzphtidk55r&#038;dl=0" length="0" type="video/quicktime" />
<enclosure url="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/APhgx_Ieow_oq9etZC5KAe4/Drums_TVAI_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&#038;dl=0" length="0" type="video/quicktime" />
<enclosure url="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/AF65sxe8JVbjsUU3w4qSou4/Drums_Speed_Warp_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&#038;dl=0" length="0" type="video/quicktime" />
<enclosure url="https://www.dropbox.com/scl/fo/1u4fn3v1l17762kph9nrf/ALUyLib0K6mXmsnDmgXVZRo/Drums_Twixtor_6x.mov?rlkey=t870zx78wjsgrilzphtidk55r&#038;dl=0" length="0" type="video/quicktime" />

		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Blume_Twixtor_Still.jpg?fit=3711%2C1563&#038;quality=80&#038;ssl=1" length="175729" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Blume_Twixtor_Still.jpg?fit=1200%2C505&#038;quality=80&#038;ssl=1" width="1200" height="505" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A vine with vibrant pink bougainvillea flowers climbing along a trellis in a lush garden setting.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Blume_Twixtor_Still.jpg?fit=1200%2C505&#038;quality=80&#038;ssl=1" width="1200" height="505" />
<post-id xmlns="com-wordpress:feed-additions:1">159898</post-id>	</item>
		<item>
		<title>The discovery of slowness</title>
		<link>https://digitalproduction.com/2024/04/12/the-discovery-of-slowness/</link>
		
		<dc:creator><![CDATA[Uli Plank]]></dc:creator>
		<pubDate>Fri, 12 Apr 2024 14:06:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[AI motion interpolation]]></category>
		<category><![CDATA[Aion]]></category>
		<category><![CDATA[Compositing]]></category>
		<category><![CDATA[DP2403]]></category>
		<category><![CDATA[frame rate conversion]]></category>
		<category><![CDATA[Fusion Page]]></category>
		<category><![CDATA[GPU rendering]]></category>
		<category><![CDATA[neural network video processing]]></category>
		<category><![CDATA[optical flow]]></category>
		<category><![CDATA[post-production]]></category>
		<category><![CDATA[slow motion]]></category>
		<category><![CDATA[Speed Warp]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Topaz Labs]]></category>
		<category><![CDATA[Twixtor]]></category>
		<category><![CDATA[video upscaling]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=159941</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?fit=1170%2C1080&quality=80&ssl=1" width="1170" height="1080" title="The AI, in this case Aion, can do better, but is not perfect either." alt="The AI, in this case Aion, can do better, but is not perfect either." /></div><div><p>Artificial slow motion, i.e. the calculation of additional in-between frames, has been around for a long time. The best methods to date have been called "optical flow", although that term actually refers to the visual perception of motion in general. Now AI, in the form of neural networks, is establishing itself here as well. We compare Twixtor 8, DaVinci Resolve 18.6 and Topaz Video AI 4.</p>
<p>The post <a href="https://digitalproduction.com/2024/04/12/the-discovery-of-slowness/">The discovery of slowness</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?fit=1170%2C1080&quality=80&ssl=1" width="1170" height="1080" title="The AI, in this case Aion, can do better, but is not perfect either." alt="The AI, in this case Aion, can do better, but is not perfect either." /></div><div>
<p class="wp-block-paragraph"></p>



<figure class="wp-block-gallery has-nested-images columns-2 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Optical_Flow.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159953"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Optical_Flow.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159953" ></a><figcaption class="wp-element-caption">The typical doublings and distortions with Optical Flow</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Speed_Warp.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159954"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Speed_Warp.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159954" ></a><figcaption class="wp-element-caption">Speed Warp has this subject well under control.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Twixtor.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159952"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Twixtor.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159952" ></a><figcaption class="wp-element-caption">Twixtor is slightly better defined.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Aion.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159951"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Aion.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159951" ></a><figcaption class="wp-element-caption">But Aion is even more precise.</figcaption></figure>
</figure>



<p class="wp-block-paragraph">All established methods work by recognising details in successive frames and deriving motion vectors from them, which are then used to calculate the displacement of pixel groups for the in-between frames. This works best if the images have little motion blur and good contrast. Such slow motion also usually turns out better if the source material was already recorded at 50 fps or more. The algorithms mainly struggle with uniform patterns, where the direction of movement can be detected incorrectly, and with segmentation, i.e. separating moving foreground elements from the background or disentangling intersecting movements.</p>
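The displacement logic described above can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical one-dimensional model of our own (real optical-flow engines estimate a dense vector field per pixel group, not a single value): one uniform motion vector is applied fractionally to synthesise an in-between frame.

```python
def synth_frame(frame, velocity, t):
    """Synthesise an in-between 1-D 'frame' at time fraction t (0..1)
    by displacing all pixels along one uniform motion vector.
    Real optical-flow methods estimate one vector per pixel group."""
    shift = round(velocity * t)  # displacement for this in-between frame
    if shift >= 0:
        return [0] * shift + frame[: len(frame) - shift]
    return frame[-shift:] + [0] * (-shift)

# A bright 'object' (value 9) moving 2 pixels per frame:
frame_a = [0, 0, 9, 0, 0, 0]
halfway = synth_frame(frame_a, velocity=2, t=0.5)  # object shifted by 1 pixel
```

When the estimated vector is wrong, as happens with the repetitive patterns mentioned above, the displaced pixels land in the wrong place: that is exactly the doubling and ghosting artefact the article describes.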



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="484"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor_Hinweis.jpg?resize=1200%2C484&quality=80&ssl=1"  alt=""  class="wp-image-159960" ><figcaption class="wp-element-caption">Twixtor is not quite as easy to use in Resolve as Speed Warp.</figcaption></figure>



<p class="wp-block-paragraph">The typical artefacts are “ghost images”, i.e. extra elements where there was nothing in the frame, smearing of the static background, and errors in opposing movements. As in previous tests in DP, we therefore used a drone hyperlapse over a rice field as “evil” test material: plenty of sharpness, but repetitive structures. We added women stamping rice and dragon dancers at Chinese New Year for their fast, criss-crossing movements, and finally, as a technical subject, a helicopter extinguishing a fire, fast-flowing water and a close-up of the rice stamping. For the last subject we also had a quadruple slow-motion shot from the camera at reduced resolution.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor_Settings.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1063"  height="1331"  sizes="(max-width: 1200px) 100vw, 1200px"  data-id="159964"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor_Settings.jpg?resize=1063%2C1331&quality=80&ssl=1"  alt=""  class="wp-image-159964" ></a><figcaption class="wp-element-caption">The settings for Twixtor correspond to the previous version except for the new DNN.</figcaption></figure>
</figure>



<h2 id="the-opponents" class="wp-block-heading">The opponents</h2>



<p class="wp-block-paragraph">We have already tested Topaz Video AI (TVAI for short) and the similarly GPU-intensive “Speed Warp” in DaVinci Resolve (DR for short) (DP 23:01). However, TVAI version 4 is now available with the new AI model “Aion”, which is supposed to be optimised for precisely this task. On a Mac it only runs from Ventura onwards and should be fed with at least HD. New to the field is Twixtor version 8 as a public beta; Twixtor has long been established as an Optical Flow plug-in, and the new version uses a neural network, called “DNN – model 1”, for the first time. As a plug-in, Twixtor has a specific problem: you have to create space for the extended result yourself, because DR does not handle this as elegantly as it does for its own Speed Warp.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Timeline_extend.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="682"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Timeline_extend.jpg?resize=1200%2C682&quality=80&ssl=1"  alt=""  class="wp-image-159963" ></a><figcaption class="wp-element-caption">An additional track of the desired length is inserted in the timeline.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">To do this, you can either repeat the clip on the timeline until it matches the new length, or place a solid (a coloured area) of the desired length on the lower track with the clip above it. Select all of it and create a Fusion clip from it. Then add Twixtor and set the desired slow motion in the Inspector. Speed ramping with keyframes is also possible, and the Pro version additionally allows masks for better segmentation.</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor-in-Fusion-1.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="208" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Twixtor-in-Fusion-1.jpg?resize=1200%2C208&quality=80&ssl=1"  alt=""  class="wp-image-159970" ></a><figcaption class="wp-element-caption">The plug-in is then used in the Fusion Page.</figcaption></figure>



<h2 id="performance" class="wp-block-heading">Performance</h2>



<p class="wp-block-paragraph">We tested material in HD and UHD on a MacBook with M1 Pro. An eightfold slow motion was calculated, so the software had to create seven synthetic frames per real frame. The AI methods required significantly longer computing times than conventional Optical Flow. All of them rely on fully utilised GPU cores; the CPUs have hardly anything to do. The values always refer to a 40-second result: in HD at 25 fps, Optical Flow Enhanced Better in DR needs just under a third of that runtime, Speed Warp eight times, TVAI Aion just under ten times and Twixtor a factor of 17. With a UHD source at 50 fps, Aion needs a factor of 74; the relative values of the other methods are similar.</p>
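As a sanity check on the numbers above, the frame counts involved are easy to derive: an N× retime keeps every real frame and inserts N − 1 synthetic frames behind it. A short sketch (the function name is ours; the clip length, frame rate and factor match the test described here):

```python
def retime_counts(duration_s, fps, factor):
    """Frames a slow-motion retime must deliver: every real frame is
    kept, and (factor - 1) synthetic frames are inserted after it."""
    real = int(duration_s * fps)
    synthetic = real * (factor - 1)
    return real, synthetic, real + synthetic

# 40 s of HD at 25 fps, slowed down eightfold:
real, synth, total = retime_counts(40, 25, 8)
# 1,000 real frames, 7,000 synthetic frames, 8,000 frames in the output
```

That is why the runtime factors balloon: seven of every eight output frames have to be computed from scratch by the interpolation model.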



<figure class="wp-block-gallery has-nested-images columns-1 is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Optical_Flow.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159973"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Optical_Flow.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159973" ></a><figcaption class="wp-element-caption">Conventional optical flow shows ugly artefacts.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Speed_Warp.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159974"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Speed_Warp.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159974" ></a><figcaption class="wp-element-caption">Speed Warp is not much better here.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Aion.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159971"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Aion.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159971" ></a><figcaption class="wp-element-caption">Aion can separate the movements quite well.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Twixtor.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159972"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reisstampfen_Twixtor.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159972" ></a><figcaption class="wp-element-caption">Twixtor DNN shows slightly more motion blur.</figcaption></figure>
</figure>



<h2 id="quality" class="wp-block-heading">Quality</h2>



<p class="wp-block-paragraph">The examples, some of which are available for download (<a href="http://is.gd/zeitlupenfiles">is.gd/zeitlupenfiles</a>, with the test files and the images), clearly show how much the results depend on the subject. The women stamping rice are an extreme example: fast, intersecting movements with motion blur and small, complex clothing patterns remain a challenge for every method.</p>



<p class="wp-block-paragraph">Here you have to pixel-peep frame by frame to identify differences. Speed Warp, which achieves amazing results with other subjects, is only slightly superior to the much faster non-neural algorithm here. Twixtor’s DNN delivers better results, but at the cost of enormous computing times. In our opinion, Aion also looks better than Speed Warp and is only slightly slower.</p>



<figure class="wp-block-gallery has-nested-images columns-2 is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Aion.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159951"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Aion.jpg?resize=1200%2C675&quality=80&ssl=1"  alt="But Aion is even more precise."  class="wp-image-159951" ></a><figcaption class="wp-element-caption">But Aion is even more precise.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Twixtor.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159952"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Twixtor.jpg?resize=1200%2C675&quality=80&ssl=1"  alt="Twixtor is slightly better defined."  class="wp-image-159952" ></a><figcaption class="wp-element-caption">Twixtor is slightly better defined.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Speed_Warp.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159954"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Speed_Warp.jpg?resize=1200%2C675&quality=80&ssl=1"  alt="Speed Warp has this subject well under control."  class="wp-image-159954" ></a><figcaption class="wp-element-caption">Speed Warp has this subject well under control.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Optical_Flow.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="675"  data-id="159953"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Drache_Optical_Flow.jpg?resize=1200%2C675&quality=80&ssl=1"  alt="The typical doublings and distortions with Optical Flow"  class="wp-image-159953" ></a><figcaption class="wp-element-caption">The typical doublings and distortions with Optical Flow</figcaption></figure>
</figure>



<p class="wp-block-paragraph">Our other test series, with dragon dancers at Chinese New Year, brought even better results from the AI-based methods. You have to look closely to even notice the deformations in crossing movements or in the background. Here too, Aion looks at least as good as Twixtor. It has the disadvantage, however, that this does not yet work in the plug-in for DR, only in the standalone version. Neither Twixtor nor Aion quite passed the endurance test with the rice paddy: Twixtor shows fewer artefacts, with some local blurring, while Aion produces large-scale distortions in the same place.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-gallery has-nested-images columns-1 is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Optical-Flow.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="1080" width="1170"  decoding="async"  data-id="159978"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Optical-Flow.jpg?resize=1170%2C1080&quality=80&ssl=1"  alt=""  class="wp-image-159978" ></a><figcaption class="wp-element-caption">Optical Flow produces the typical ghost images despite 50 fps.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="1080" width="1170"  decoding="async"  data-id="159977"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?resize=1170%2C1080&quality=80&ssl=1"  alt=""  class="wp-image-159977" ></a><figcaption class="wp-element-caption">The AI, in this case Aion, can do better, but is not perfect either.</figcaption></figure>
</figure>



<p class="wp-block-paragraph">Finally, the helicopter did not show the ghostly rotor blades known from conventional Optical Flow; instead, the rotors pulsated, becoming shorter and longer. The AI just doesn’t understand anything about rotors seen from the side ;-) It also became apparent here that fast-flowing or falling water is rather uncritical. A river shot with intense movement confirmed this: the human eye can hardly notice the weaknesses of the artificial slow motion. A close-up shot in 720p at 100 fps, which we scaled to HD with TVAI and slowed down to twice the length, also looked quite good: the rice flour dusts quite convincingly.</p>



<p class="wp-block-paragraph">A higher frame rate during recording is therefore the better option despite the lower resolution. TVAI can even do this in one go, as it also scales very well. DR Studio delivers similarly good results with SuperScale Enhanced plus Speed Warp, but both routes require enormous computing times; only a powerful PC with a strong power supply and an Nvidia RTX 4090 would help. However, TVAI does not yet use TensorRT in Aion and obviously processes the AI models one after the other, as it is almost twice as slow as DR when doing scaling plus slow motion.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-gallery has-nested-images columns-1 is-cropped wp-block-gallery-6 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reis_Twixtor.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="675" width="1200"  decoding="async"  data-id="159980"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reis_Twixtor.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159980" ></a><figcaption class="wp-element-caption">In hyperlapse, Twixtor only shows a few blurs in the corners on the left.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reis_Aion.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="675" width="1200"  decoding="async"  data-id="159979"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Reis_Aion.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159979" ></a><figcaption class="wp-element-caption">In this case, Aion fails.</figcaption></figure>
</figure>



<p class="wp-block-paragraph">Since differences can only be recognised with extreme pixel peeping, DR is sufficient here unless you want to go from “small” HD (720p) all the way to UHD. You should also get good results if the camera can only do HD in slow motion and the result is scaled to UHD in the same way. After all, it is a typical use case to add artificial slow motion deliberately in order to limit the extreme resolution losses or lack of light of camera-based slow motion.</p>



<h2 id="commentary" class="wp-block-heading">Commentary</h2>



<p class="wp-block-paragraph">In the ratio of computing time to quality, TVAI with Aion beats the new AI in Twixtor. To be fair, it has to be said that Twixtor 8 is still a beta version. In terms of quality, both outperform Speed Warp in the subtleties, but whether a general audience notices the differences depends heavily on the subject. No AI can currently replace real in-camera slow motion, but the combination of high-quality upscaling of the camera shot and a milder slow motion in post looks very good. And here, again, the link to the material: <a href="http://is.gd/zeitlupenfiles">is.gd/zeitlupenfiles</a></p><p>The post <a href="https://digitalproduction.com/2024/04/12/the-discovery-of-slowness/">The discovery of slowness</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?fit=1300%2C1200&#038;quality=80&#038;ssl=1" length="90923" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?fit=1170%2C1080&#038;quality=80&#038;ssl=1" width="1170" height="1080" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>The AI, in this case Aion, can do better, but is not perfect either.</media:title>
	<media:description type="html"><![CDATA[The AI, in this case Aion, can do better, but is not perfect either.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Heli_Aion.jpg?fit=1170%2C1080&#038;quality=80&#038;ssl=1" width="1170" height="1080" />
<post-id xmlns="com-wordpress:feed-additions:1">159941</post-id>	</item>
		<item>
		<title>Saving audiovisual cultural heritage with Topaz Video AI 3.0</title>
		<link>https://digitalproduction.com/2022/11/14/saving-audiovisual-cultural-heritage-with-topaz-video-ai-3-0/</link>
		
		<dc:creator><![CDATA[Uli Plank]]></dc:creator>
		<pubDate>Mon, 14 Nov 2022 07:39:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Blackmagic Design]]></category>
		<category><![CDATA[AI model stacking]]></category>
		<category><![CDATA[AI video enhancement]]></category>
		<category><![CDATA[batch processing]]></category>
		<category><![CDATA[command line interface]]></category>
		<category><![CDATA[de-interlacing]]></category>
		<category><![CDATA[denoising]]></category>
		<category><![CDATA[digital video preservation]]></category>
		<category><![CDATA[DR]]></category>
		<category><![CDATA[FFmpeg integration]]></category>
		<category><![CDATA[frame rate conversion]]></category>
		<category><![CDATA[H.264 encoding]]></category>
		<category><![CDATA[H.265 encoding]]></category>
		<category><![CDATA[Neat Video]]></category>
		<category><![CDATA[Neatvideo]]></category>
		<category><![CDATA[noise reduction]]></category>
		<category><![CDATA[parallel processing]]></category>
		<category><![CDATA[Resolve]]></category>
		<category><![CDATA[Topaz Labs]]></category>
		<category><![CDATA[variable frame rate support]]></category>
		<category><![CDATA[video restoration tools]]></category>
		<category><![CDATA[video stabilization]]></category>
		<category><![CDATA[video upscaling]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=158389</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_full_GUI-hd.jpg?fit=1200%2C702&quality=80&ssl=1" width="1200" height="702" title="The user interface of Topaz Video AI 3.0 has been comprehensively improved." alt="" /></div><div><p>In DP 04:20 we already tested artificial intelligence for upscaling video. Topaz Video AI (TVAI for short), as it is now called in version 3.0, has, according to the manufacturer, been developed from scratch to incorporate additional capabilities and to enable the stacking of AI models with filters, as well as parallel operation.</p>
<p>The post <a href="https://digitalproduction.com/2022/11/14/saving-audiovisual-cultural-heritage-with-topaz-video-ai-3-0/">Saving audiovisual cultural heritage with Topaz Video AI 3.0</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_full_GUI-hd.jpg?fit=1200%2C702&quality=80&ssl=1" width="1200" height="702" title="The user interface of Topaz Video AI 3.0 has been comprehensively improved." alt="" /></div><div><p class="wp-block-paragraph">In addition to the previous capabilities in upscaling and de-interlacing, the programme now also offers frame rate changes up to slow motion, stabilisation (still in beta) and noise reduction. There is command-line control, and the programme should have no problems with variable frame rates, which are common in smartphone and computer screen recordings.</p>



<h2 id="gui" class="wp-block-heading">GUI</h2>



<p class="wp-block-paragraph">Despite all the technical prowess, the user interface had still seemed a little clumsy, but Topaz Labs has tidied things up thoroughly. Batch processing in particular, including jobs running in parallel, has become much clearer. It can now be organised by previews and export tasks, and their status as well as well-estimated remaining times are visible at a glance. Creating several variants from one original is also unproblematic. Unfortunately, the order in which queued tasks are processed cannot be changed yet, which would be useful for the lengthy calculations.<br />The selected processes and their parameters are displayed on the right, with useful presets and brief explanations of the properties of the respective AI models. To see these, you must leave the cursor on the heading or the respective icon, not on the word itself. For the output you can choose from all containers and codecs that FFmpeg can handle, as that is what TVAI is based on. Accordingly, hardware encoding of H.264/265 is also offered if the computer supports it.<br />In addition, very high-quality video formats with a large bit depth are available, for which you previously had to export image sequences from Video Enhancer AI. However, this also means that a codec such as ProRes is not the original implementation, but the free programme’s version, which is not authorised by Apple. Technically this is usually not a problem, but some customers may find it annoying.<br />A number of models and filters can be combined, but not every combination can be stacked: for example, you should perform high-quality de-interlacing with “Dione” first and only then apply optimum upscaling to the result with “Proteus”. The imported material can be trimmed to the desired range using “Trim”, but unfortunately the timecode of the original is not displayed, which would of course be very helpful in the workflow. This could probably be changed easily, because if TC is present in the source, it is passed through to the final product.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Crop_DV-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="768"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Crop_DV-hd.jpg?resize=1200%2C768&quality=80&ssl=1"  alt=""  class="wp-image-158393" ></a><figcaption class="wp-element-caption">TVAI now offers precise cropping, as is necessary for DV formats, for example.</figcaption></figure>



<p class="wp-block-paragraph">You can also crop the source, as is necessary for the thin lateral bars of DV or the blanking interval of analogue sources; the pixel values are displayed precisely. TVAI also recognises non-square pixel formats if the corresponding flag in the source is set correctly. The programme cannot handle inverse telecine (i.e. the removal of redundant fields in NTSC film transfers), but any good editing system today can do this in a flash.<br />The selection of a suitable AI model is simplified by optimised suggestions; with some models you can no longer intervene at all, but you don’t have to. So far TVAI does not save projects and their settings, which is why you are only asked for confirmation when you exit the programme. As long as the programme is rendering in the background, there is unfortunately no progress bar under its icon. If you need the computing power for something else, you should simply stop running processes and leave the programme open in the background.<br />At first glance, failures are only flagged with a red X next to the task, but clicking on it leads to a more detailed explanation. The calculations ran reliably during testing with version 3.0.3; there were only two minor issues. During output (here always in ProRes 422 HQ), the first frame was black and was shown as offline in DaVinci Resolve, and when de-interlacing, the first few frames were black. In addition, the entire GUI sometimes flickered during the calculation.</p>



<h2 id="de-interlacing-and-scaling" class="wp-block-heading">De-interlacing and scaling</h2>



<p class="wp-block-paragraph">The programme naturally continues to perform its original main task of upscaling. The AI models recommended for this are comprehensive and quite varied. “Gaia”, for example, has only two fixed presets, for high-quality sources and for computer graphics, while “Artemis” has five for material of varying quality. When an AI model is used for the first time, TVAI first has to download it, which can take a while depending on the internet connection. Later, the saved models are used and the internet connection is no longer needed. This can add up to quite a lot; by the end of the test, the models occupied a good 2.7 GB of space.<br />The models “Theia” and “Proteus” (incidentally, these are all figures from Greek mythology) can be adjusted in detail, e.g. to reduce compression artefacts, moiré or noise, to sharpen the images or to add some structure as noise or ‘grain’ to overly smooth images. Proteus even offers the option of analysing the material and suggesting suitable slider settings.<br />Of course, we were not able to test all models for all types of sources, and certainly not with all adjustment options. We tested two typical sources from the early digital era, both interlaced. One was DVCAM in PAL format, 720 x 576 from a semi-professional camera with an aspect ratio of 4:3. The other was HDV in 1080i with 1440 x 1080, i.e. a pixel format that also has to be ‘stretched’ for 16:9. In both cases, the digitally copied original was read by TVAI without any problems.</p>
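The ‘stretching’ of non-square pixels is plain arithmetic: the stored width multiplied by the pixel aspect ratio gives the display width in square pixels. A small illustration for the two test formats (the PAR values are the usual conventions for HDV and for 4:3 PAL DV in MPEG-4 terms, not values read out of TVAI):

```python
from fractions import Fraction

def display_size(stored_w, stored_h, par):
    """Display width in square pixels = stored width x pixel aspect ratio."""
    return int(stored_w * par), stored_h

# HDV 1080i stores 1440x1080 with a PAR of 4:3 -> 1920x1080, i.e. 16:9.
w, h = display_size(1440, 1080, Fraction(4, 3))
print(w, h, Fraction(w, h))   # 1920 1080 16/9

# 4:3 PAL DV stores 720x576; with the common 16:15 PAR it displays as 4:3.
w, h = display_size(720, 576, Fraction(16, 15))
print(w, h, Fraction(w, h))   # 768 576 4/3
```

This is exactly why the flag in the source has to be correct: the stored pixel grid alone does not tell a player (or TVAI) which stretch to apply.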



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/PAL_DV-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="901"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/PAL_DV-hd.jpg?resize=1200%2C901&quality=80&ssl=1"  alt=""  class="wp-image-158394" ></a><figcaption class="wp-element-caption">This is what a DVCAM original looks like when enlarged twice.</figcaption></figure>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/HDV_to_UHD-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="486"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/HDV_to_UHD-hd.jpg?resize=1200%2C486&quality=80&ssl=1"  alt=""  class="wp-image-158396" ></a><figcaption class="wp-element-caption">HDV in 1080i becomes quite good UHD.</figcaption></figure>






<p class="wp-block-paragraph">In our early test of Video Enhancer AI, we still had to work with tricks, but now the programme is capable of de-interlacing. However, this function is not offered automatically based on the flags. We first had to switch to “Interlaced” under “Video Type”, but then TVAI does an amazing job. It automatically converts the PAL recordings to 50 fps so that the temporal resolution is retained and always uses a variant of the “Dione” model for this. </p>
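The rate conversion described above follows directly from how interlacing works: PAL carries 50 fields per second, and turning every field into a full frame preserves that temporal resolution, so 25i becomes 50p. A toy ‘bob’ de-interlacer (simple line doubling, purely illustrative; the Dione model reconstructs the missing lines far more intelligently) shows why one interlaced frame yields two progressive ones:

```python
# Bob-style de-interlacing sketch: an interlaced frame carries two fields
# (top = even rows, bottom = odd rows) captured 1/50 s apart in PAL.
# Promoting each field to a full frame doubles the frame rate: 25i -> 50p.

def split_fields(frame_rows):
    """Split a frame into its two fields, line-doubling each to full height."""
    top, bottom = frame_rows[0::2], frame_rows[1::2]
    double = lambda field: [row for row in field for _ in (0, 1)]
    return double(top), double(bottom)

first, second = split_fields(["r0", "r1", "r2", "r3"])
print(first)   # ['r0', 'r0', 'r2', 'r2'] -- frame from the top field
print(second)  # ['r1', 'r1', 'r3', 'r3'] -- frame from the bottom field
```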



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_DeInt_Upscale-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="899"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_DeInt_Upscale-hd.jpg?resize=1200%2C899&quality=80&ssl=1"  alt=""  class="wp-image-158397" ></a><figcaption class="wp-element-caption">That’s how much TVAI can conjure up from DVCAM in HD.</figcaption></figure>



<p class="wp-block-paragraph">What came out of our DVCAM source in HD is quite impressive. It was not a drone shot (which did not exist for civilian purposes at the time), but a handheld camera in a helicopter. No other de-interlacer was able to handle the particularly critical parallel line structures on the buildings like TVAI. Only in the case of short, fast vibrations did the motion blur prevent an optimal reconstruction of the contour lines. However, a trial with scaling to UHD led to results with a manga look.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/DV_SD_zu_UHD-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="537"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/DV_SD_zu_UHD-hd.jpg?resize=1200%2C537&quality=80&ssl=1"  alt=""  class="wp-image-158399" ></a><figcaption class="wp-element-caption">The attempt to go from DVCAM to UHD shows a visual proximity to animation.</figcaption></figure>






<p class="wp-block-paragraph">As a cross-check, we also scaled a clip from a high-quality UHD camera down to SD (1024 x 576) with minimal compression and ‘blew it up’ again with TVAI. It looked quite decent in HD, but no longer in UHD. In parts of the image, the AI invented structures that did not even exist in the source; the rest was not pixelated, but blurred. Even though the software offers far more extreme scaling, we consider this of little use, especially as modern TVs scale so well from HD to UHD that it hardly bothers you at a normal viewing distance.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/SD_to_UHD_Compare-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="888"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/SD_to_UHD_Compare-hd.jpg?resize=1200%2C888&quality=80&ssl=1"  alt=""  class="wp-image-158400" ></a><figcaption class="wp-element-caption">SD becomes UHD without pixelation, but the software hallucinates structures like the one on the right on the lower roof.</figcaption></figure>






<figure class="wp-block-image size-full"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/HD-_UHD_Compare-hd.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="724"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/HD-_UHD_Compare-hd.jpg?resize=1200%2C724&quality=80&ssl=1"  alt=""  class="wp-image-158402" ></a><figcaption class="wp-element-caption">High-quality HD material, on the other hand, becomes quite convincing UHD.</figcaption></figure>



<p class="wp-block-paragraph">For comparison, we ran the material through the ‘neural’ de-interlacer in DaVinci Resolve Studio (DR for short) and upscaled it with its SuperScale. While differences in the scaling were only recognisable on very close inspection, the de-interlacing was significantly weaker. Only the freeware QTGMC, which we had already tested at the time, comes close to the results from TVAI, but it is somewhat cumbersome in professional practice. The results from HDV, especially with natural textures, are also impressive in UHD, and the de-interlacing was just as flawless here.</p>



<h2 id="intermediate-images" class="wp-block-heading">Intermediate images</h2>



<p class="wp-block-paragraph">Since, according to Topaz Labs, TVAI also obtains the reconstruction of details from neighbouring images, the generation of additional images for slow motion or frame-rate conversions (e.g. 24 to 50 fps) was an obvious step. The programme suggests “Chronos” as the AI model for an extension of up to four times, and “Apollo” for higher values. However, you can also choose freely between these models. “Chronos Fast” does not calculate faster, but is supposed to handle fast movements better; Apollo actually calculates a good 20 percent faster.</p>
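Mechanically, a four-times extension means synthesising three new frames between every pair of originals, at fractional positions in time. A sketch of the timestamp arithmetic (illustrative only; Chronos and Apollo estimate motion to paint these frames, they do not simply blend):

```python
def interpolation_times(n_source_frames, factor):
    """Timestamps (in source-frame units) for a factor-x retime: the original
    frames sit at integer positions, the synthesised ones in between."""
    steps = (n_source_frames - 1) * factor
    return [i / factor for i in range(steps + 1)]

times = interpolation_times(3, 4)
print(times)       # 0.0, 0.25, 0.5, ... up to 2.0
print(len(times))  # three originals become nine output frames
```

For a 24-to-50 fps conversion the target positions are not such tidy quarters, which is presumably why the models are free to interpolate at arbitrary fractions.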



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Karneval_Compare.png?quality=72&ssl=1"><img data-recalc-dims="1" height="535" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Karneval_Compare.png?resize=1200%2C535&quality=72&ssl=1"  alt=""  class="wp-image-158405" ></a><figcaption class="wp-element-caption">TVAI generates quite convincing slow motion, but high-contrast material can appear too flat.</figcaption></figure>



<p class="wp-block-paragraph">Which of the two delivers better results seems to depend mainly on the subject. To compare the quality, we again used the Studio version of DR and selected the “Speed Warp” algorithm under “Optical Flow”. You won’t find this in the project settings, but only in the “Inspector” for the individual clip. Although it also works slowly, in our experience it is usually the best option for slow motion. The results are difficult to tell apart if the DV material has previously been freed from interlacing with TVAI.</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Tanz_SloMo.png?quality=72&ssl=1"><img data-recalc-dims="1" height="903" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Tanz_SloMo.png?resize=1200%2C903&quality=72&ssl=1"  alt=""  class="wp-image-158407" ></a><figcaption class="wp-element-caption">With more suitable material from DVCAM, TVAI can deliver excellent slow motion.</figcaption></figure>



<p class="wp-block-paragraph">On closer inspection, both methods have similar problems with overlapping movements or the dragging of parts of the background, and yet both are clearly superior to the common optical-flow methods. For many motifs, these errors will hardly be noticeable to an untrained observer. The calculation times do not differ drastically here; Apollo is around 6 per cent slower than Speed Warp.</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Zeitlupe_perfekt.png?quality=72&ssl=1"><img data-recalc-dims="1" height="901" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Zeitlupe_perfekt.png?resize=1200%2C901&quality=72&ssl=1"  alt=""  class="wp-image-158408" ></a><figcaption class="wp-element-caption">TVAI calculates flawless intermediate images from a suitable motif.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Zeitlupe_Artefakt.png?quality=72&ssl=1"><img data-recalc-dims="1" height="900" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/Zeitlupe_Artefakt.png?resize=1200%2C900&quality=72&ssl=1"  alt=""  class="wp-image-158410" ></a><figcaption class="wp-element-caption">Repetitive structures cause problems, such as blurred or offset areas.</figcaption></figure>



<p class="wp-block-paragraph">Slowing down a drone shot over a rice field was particularly interesting. It was actually a series of individual images that was to be stretched into a sequence. Both programmes produced excellent results with a vertical movement. With a transverse movement, however, in which the motif was characterised by many repetitive structures in the plants, both failed and produced almost identical artefacts. There is nothing like real slow motion from the camera; in all other cases, success depends heavily on the subject.</p>



<h2 id="image-enhancement" class="wp-block-heading">Image enhancement</h2>



<p class="wp-block-paragraph">Yes, they still exist, the small improvements through “enhancement”, primarily with Proteus. It is designed to reduce compression artefacts and noise and bring out real details. We tried it with deliberately over-compressed and not entirely noise-free material that had previously been run through an H.264 encoder in UHD at too low a data rate. But the results require a lot of “pixel peeping” to see the progress, and one wonders whether that justifies computing times longer by a factor of 10 to 20 on hardware that is not exactly weak.</p>



<figure class="wp-block-image size-full is-resized"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/11/Proteus_Estimate.png?quality=72&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="534"  height="853"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/11/Proteus_Estimate.png?resize=534%2C853&quality=72&ssl=1"  alt=""  class="wp-image-158412"  style="width:800px;height:auto" ></a><figcaption class="wp-element-caption">Proteus can suggest setting values via image analysis.</figcaption></figure>



<p class="wp-block-paragraph">Another test, for noise reduction alone, was carried out with Artemis, this time with high-quality footage from a Blackmagic 12K, but shot in very low light. For comparison, we used Neatvideo as the current ‘gold standard’ in the lower price range. Both programmes were quite good at bringing out real detail and significantly reducing noise. But Neatvideo once again did it better; in particular, Artemis left a slightly coloured, cloudy unsteadiness over the entire image. This phenomenon was hardly present in Neatvideo, which is probably due to its differentiated processing of individual frequency ranges with the help of specific noise samples.</p>






<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="535" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/11/NR_TVAI-1.png?resize=1200%2C535&quality=72&ssl=1"  alt=""  class="wp-image-158415" ><figcaption class="wp-element-caption">TVAI can filter out noise without sacrificing detail. Unfortunately, this results in coloured clouds.</figcaption></figure>



<p class="wp-block-paragraph">In addition, Artemis only has three settings, for low, medium and high noise (plus halo removal if required). With none of them were we able to remove enough noise without the surface clutter appearing. Neatvideo, on the other hand, was already better in its standard setting, without us having to resort to its highly differentiated adjustments. And it is almost seven times faster.</p>






<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="675" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/11/NV_Denoise-4k.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159291" ><figcaption class="wp-element-caption">Neatvideo can do it better and faster.</figcaption></figure>



<h2 id="stabilisation" class="wp-block-heading">Stabilisation</h2>



<p class="wp-block-paragraph">The new version also offers stabilisation of shaky camera movements, and even the correction of rolling-shutter artefacts or the reduction of jitter can be activated as options. No special AI model is responsible for this; the function simply appears under “Filters” (still in beta). The results with “Auto-Crop” are already very decent, even if some ‘jelly’ from the rolling shutter occasionally remains. However, our test material came from a camera with a rather slow readout of 30 milliseconds.<br />The alternative is called “Full Frame”, where missing image content in the peripheral areas is filled in with offset material from neighbouring images. This sometimes works surprisingly well, but occasionally fails badly with short jolts. A slight artefact of this kind can even be seen in the Topaz tutorial. As such interference at the edges distracts a lot from the film, the crop version plus upscaling should generally look better.<br />We compared the stabilisation results with the corresponding function in DR, using the “Perspective” setting. It is quite similar in terms of quality, with slight image distortions remaining here too. The processing times are acceptable in both cases if you do not activate Full Frame or Reduce Jitter in TVAI. However, neither method can achieve the level of stabilisation based on gyro data that some cameras from Blackmagic or Sony provide (see DP 06:22).</p>



<h2 id="hardware-requirements" class="wp-block-heading">Hardware requirements</h2>



<p class="wp-block-paragraph">Our tests were carried out on an Apple M1 computer, which is not exactly optimally equipped for this, even though TVAI already runs natively on it. Although the computer is kept busy, neither the CPU nor the GPU is fully utilised. For comparison, we measured the times on an older Intel iMac with AMD 580 GPUs, which cannot keep up with the M1 laptop under DR despite its second GPU. With TVAI, however, the older iMac was significantly faster. The eGPU did not play such a big role here; with the internal GPU alone, the times were only around 20 per cent longer.<br />In most cases, however, the computing times are miserably long, apart from pure de-interlacing, which should achieve real time on more powerful hardware than ours. Even though TVAI can now use all GPU families, it is primarily at home on Nvidia. For comparison on our computer: creating a quadruple slow motion from the clip of the Mexican carnival took an hour and 45 minutes with TVAI; Speed Warp and SuperScale took 6 minutes and 23 seconds. The visual result was by no means so drastically superior.</p>
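The gap between those two render times works out to roughly a factor of 16:

```python
# Render-time ratio for the carnival slow-motion clip quoted in the text.
tvai_seconds = 1 * 3600 + 45 * 60      # 1 h 45 min with TVAI
resolve_seconds = 6 * 60 + 23          # 6 min 23 s with Speed Warp + SuperScale
print(round(tvai_seconds / resolve_seconds, 1))  # -> 16.4
```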



<h2 id="commentary" class="wp-block-heading">Commentary</h2>



<p class="wp-block-paragraph">You shouldn’t expect miracles: a recording from 1998 from a Sony DVCAM that was very respectable at the time does not look really good in UHD, even with Topaz Video AI. The best you can hope for is reasonably usable HD, whereby the astonishingly good de-interlacing contributes even more to the result than the actual upscaling. The software can also turn HDV with non-square pixels and interlacing into reasonably presentable UHD.<br />That’s it, and anyone who needs this more often to save their audiovisual cultural heritage should buy a PC with the most powerful Nvidia card, with which you can also heat your study until the next blackout. Faster software such as DaVinci Resolve can do slow motion or stabilisation almost as well, the latter even better with gyro data. For noise filtering, Neatvideo remains unbeaten.</p>









<p class="wp-block-paragraph"></p><p>The post <a href="https://digitalproduction.com/2022/11/14/saving-audiovisual-cultural-heritage-with-topaz-video-ai-3-0/">Saving audiovisual cultural heritage with Topaz Video AI 3.0</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_full_GUI-hd.jpg?fit=1845%2C1080&#038;quality=80&#038;ssl=1" length="245920" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_full_GUI-hd.jpg?fit=1200%2C702&#038;quality=80&#038;ssl=1" width="1200" height="702" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>Das User-Interface von Topaz Video AI 3.0 wurde umfassend verbessert.</media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/01/TVAI_full_GUI-hd.jpg?fit=1200%2C702&#038;quality=80&#038;ssl=1" width="1200" height="702" />
<post-id xmlns="com-wordpress:feed-additions:1">158389</post-id>	</item>
	</channel>
</rss>
