<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="https://digitalproduction.com/wp-content/plugins/xslt/public/template.xsl"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:rssFeedStyles="http://www.wordpress.org/ns/xslt#"
>

<channel>
	<title>KI - DIGITAL PRODUCTION</title>
	<atom:link href="https://digitalproduction.com/tag/ki/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitalproduction.com</link>
	<description>Magazine for Digital Media Production</description>
	<lastBuildDate>Fri, 31 Oct 2025 08:21:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">236729828</site>	<item>
		<title>Text-to-Animation-Generator?</title>
		<link>https://digitalproduction.com/2024/08/29/text-to-animation-generator/</link>
		
		<dc:creator><![CDATA[Jürgen Firsching]]></dc:creator>
		<pubDate>Thu, 29 Aug 2024 17:49:32 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Animation]]></category>
		<category><![CDATA[Generator]]></category>
		<category><![CDATA[KI]]></category>
		<category><![CDATA[Text]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=144225</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/08/image-1.webp?fit=825%2C465&quality=72&ssl=1" width="825" height="465" title="" alt="" /></div><div><p>Just describe how the animation should run and it will be generated? That's possible, at least according to the makers of SayMotion. How well? You'll have to see for yourself.</p>
<p>The post <a href="https://digitalproduction.com/2024/08/29/text-to-animation-generator/">Text-to-Animation-Generator?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/juergenfirsching/">Jürgen Firsching</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/08/image-1.webp?fit=825%2C465&quality=72&ssl=1" width="825" height="465" title="" alt="" /></div><div>



<p class="wp-block-paragraph">DeepMotion has launched version 2.0 of its <strong>SayMotion</strong> software, which aims to simplify and automate the creation of animation sequences. The software allows users to turn stories and ideas directly into animation sequences, with the update bringing some significant new features. These are designed to improve the production pipeline and optimise collaboration between different teams.</p>



<figure class="wp-block-embed is-type-rich is-provider-embed-handler wp-block-embed-embed-handler wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/eT654QdTf6Q?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<p class="wp-block-paragraph"><strong>Extended file format support and new features</strong></p>



<p class="wp-block-paragraph">One of the most notable new features in SayMotion 2.0 is support for a wider range of file formats. Artists can now import and export files in <strong>FBX, BVH, USD</strong> and <strong>GLTF</strong> formats. This expanded compatibility makes the software more versatile, especially when working with other popular tools in the industry.</p>



<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/fa2cec6c-8de7-4f56-b3af-889fd8f7f84f.png&w=3840&q=100"  alt="" ></figure>



<p class="wp-block-paragraph">In addition to improved file format support, SayMotion 2.0 also offers deeper integration of AI-powered features. Users can automatically analyse and adjust motion data through the use of deep learning algorithms, which significantly speeds up the workflow. New tools for motion optimisation have also been introduced, which make it possible to automatically correct unwanted motion errors before they become visible in the final animation.</p>



<h3 id="cloud-integration-and-improved-collaboration" class="wp-block-heading"><strong>Cloud integration and improved collaboration</strong></h3>



<p class="wp-block-paragraph">Another highlight of SayMotion 2.0 is the extended cloud integration, which allows multiple users to work on projects simultaneously. This function is primarily aimed at larger teams and international projects where it is important that everyone involved always has access to the latest versions of the animation data. Cloud integration supports seamless synchronisation of project data and allows changes to be tracked in real time. This promotes collaboration and reduces the risk of version conflicts.</p>



<h3 id="new-export-options-and-integration-into-existing-pipelines" class="wp-block-heading"><strong>New export options and integration into existing pipelines</strong></h3>



<p class="wp-block-paragraph">SayMotion 2.0 also gives users the option of exporting their animation sequences directly in various video formats such as <strong>MP4</strong> and <strong>MOV</strong>. Direct uploading to platforms such as YouTube or Vimeo is also supported. This function is particularly useful for artists who want to present their work quickly and easily.</p>



<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/cf2906b4-b871-4085-b007-115ce120c738.png&w=3840&q=100"  alt="" ></figure>



<p class="wp-block-paragraph">Another feature is support for Python scripting. This makes it possible to create customised automations and integrate the software seamlessly into existing production pipelines. These enhancements allow SayMotion 2.0 to be used in a wide range of projects, from small indie productions to large VFX productions.</p>
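<p class="wp-block-paragraph">A pipeline script using this kind of hook might queue text-to-animation jobs in batch. The sketch below is purely illustrative: the endpoint, payload fields and the <code>build_animation_job</code> helper are assumptions for this article, not DeepMotion’s documented API.</p>

```python
import json

# Hypothetical endpoint and schema -- DeepMotion's real API may differ.
SAYMOTION_API = "https://api.example.com/saymotion/v2/jobs"

SUPPORTED_FORMATS = {"FBX", "BVH", "USD", "GLTF"}  # formats named in the release


def build_animation_job(prompt: str, export_format: str = "FBX") -> dict:
    """Assemble a text-to-animation job payload for a batch pipeline.

    All field names here are illustrative assumptions, not a documented schema.
    """
    if export_format not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported export format: {export_format}")
    return {
        "prompt": prompt,
        "export": {"format": export_format},
        # mirrors the "motion optimisation" feature described in the article
        "options": {"motion_cleanup": True},
    }


if __name__ == "__main__":
    job = build_animation_job("a character waves, then bows", "GLTF")
    print(json.dumps(job, indent=2))
```

<p class="wp-block-paragraph">The only part the article confirms is that Python scripting can tie SayMotion into existing pipelines; every concrete name in the sketch would need to be checked against DeepMotion’s own documentation.</p>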



<h3 id="licence-models-and-prices" class="wp-block-heading"><strong>Licence models and prices</strong></h3>



<p class="wp-block-paragraph">SayMotion 2.0 licences are offered in several models, depending on the needs of the user. For individual users there is a monthly subscription fee, while teams and studios can choose an extended licence with additional functions and support options. Prices vary depending on the scope of the licence and start at around USD 15 per month. A trial version is also available to evaluate the new features in advance.</p>



<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/d6294bb0-e7de-4040-a846-6908c6c513d0.jpg&w=3840&q=100"  alt="" ></figure>



<h3 id="conclusion" class="wp-block-heading"><strong>Conclusion</strong></h3>



<p class="wp-block-paragraph">SayMotion 2.0 brings a host of new features and improvements that should be of particular interest to users in the animation and VFX industry. The extended file format support, cloud integration and new export options make the software a versatile tool in the modern digital production pipeline. Nevertheless, all new functions should be thoroughly tested before they are used in ongoing projects. And of course the animation is by no means ready for the hero character yet – but for “quick variants in the crowd”, background characters or a bit of movement and bustle in the background? Why not.</p>



<h3 id="further-links" class="wp-block-heading"><strong>Further links:</strong></h3>



<p class="wp-block-paragraph"><a href="https://www.deepmotion.com/saymotion/docs" target="_blank" rel="noreferrer noopener">SayMotion documentation</a><br>Detailed technical documentation on the software.</p>



<p class="wp-block-paragraph"><a href="https://www.deepmotion.com/post/saymotion-v2-0-release" target="_blank" rel="noreferrer noopener">SayMotion 2.0 Release Notes</a><br>Official release notes from DeepMotion.</p>



<p class="wp-block-paragraph"><a href="https://www.deepmotion.com/saymotion" target="_blank" rel="noreferrer noopener">SayMotion official website</a><br>Manufacturer’s website with further information and documentation.</p><p>The post <a href="https://digitalproduction.com/2024/08/29/text-to-animation-generator/">Text-to-Animation-Generator?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/juergenfirsching/">Jürgen Firsching</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/08/image-1.webp?fit=825%2C465&#038;quality=72&#038;ssl=1" length="20418" type="image/webp" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/08/image-1.webp?fit=825%2C465&#038;quality=72&#038;ssl=1" width="825" height="465" medium="image" type="image/webp">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/08/image-1.webp?fit=825%2C465&#038;quality=72&#038;ssl=1" width="825" height="465" />
<post-id xmlns="com-wordpress:feed-additions:1">144225</post-id>	</item>
		<item>
		<title>Trailer! Artificial intelligence meets media production</title>
		<link>https://digitalproduction.com/2024/01/13/trailerkuenstliche-intelligenz-trifft-medienproduktion/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Sat, 13 Jan 2024 17:14:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[DP2401]]></category>
		<category><![CDATA[KI]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Test]]></category>
		<category><![CDATA[trailer]]></category>
		<category><![CDATA[tv]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=146977</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?fit=1200%2C673&quality=72&ssl=1" width="1200" height="673" title="Trailer-Endboard mit Hilfe von Midjourney" alt="Trailer-Endboard mit Hilfe von Midjourney" /></div><div><p>Trailer here, trailer there, bang-boom here, bang-boom there. It's always the same - "produce the coolest, never-before-seen trailer" to get people excited, encourage them to go to the cinema, watch a film on TV or on the streaming portal of their choice. And then came AI...</p>
<p>The post <a href="https://digitalproduction.com/2024/01/13/trailerkuenstliche-intelligenz-trifft-medienproduktion/">Trailer! Artificial intelligence meets media production</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?fit=1200%2C673&quality=72&ssl=1" width="1200" height="673" title="Trailer-Endboard mit Hilfe von Midjourney" alt="Trailer-Endboard mit Hilfe von Midjourney" /></div><div><figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/10-Schlechte-Schrift-Generierung.png?w=1200&quality=72&ssl=1"  alt=""  class="wp-image-146992" ></figure>





<p class="wp-block-paragraph">I have been working in the field of advertising trailer production for many years. Starting out as an editor and working my way up to the management level of global corporations, I have constantly scrutinised the “trailer” phenomenon, the process, the medium and the makers behind it. However, one thing has always remained the same… the way trailers are defined and produced. Can AI (artificial intelligence) break up what has been established over years, perhaps even decades? Can this new “intelligence” find new ways of producing and “seeing” the trailers of the future? I tried to get to the bottom of these questions in a master’s thesis and made discoveries that I could never have imagined before.</p>

<p class="wp-block-paragraph">After more than 15 years as a media professional, I started a Master’s programme in Media Technology & Management at Munich University of Applied Sciences – I wanted to refresh my practice with new perspectives and continue to explore developments in media technology. In my final thesis entitled “Development of a model for the use of AI software in the advertising sector using the example of a trailer”, I investigated the possibilities that arise from the use of artificial intelligence in the production of advertising trailers. But why this topic?</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  fetchpriority="high"  decoding="async"  width="1200"  height="427"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/1-Trailer-Analyse-Bestandteile.png?resize=1200%2C427&quality=72&ssl=1"  alt="Trailer Analyse – Visuelle Bestandteile eines Trailers"  class="wp-image-146983" ><figcaption class="wp-element-caption">Trailer analysis – visual components of a trailer</figcaption></figure>





<p class="wp-block-paragraph">The work was to delve into the world of commercial trailer production and see how AI can change this process. I developed a model that integrates AI software into trailer production with the aim of increasing both efficiency and creativity. To develop this model, I collected various tools that were able to analyse and artificially generate the trailer components (generative AI). The master’s thesis, which I was able to complete in July 2023, was not only intended to make a contribution to this very popular field, but also to lay the foundation for future developments in audiovisual media production. The combination (to anticipate it here: No – we’re not all going to be put out of work by the AI Terminator) of human creativity and AI technology has the potential to fundamentally change advertising trailer production. </p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/2-ChatGPT-Haluzinieren.png?w=1200&quality=72&ssl=1"  alt="Halluzinieren von ChatGPT"  class="wp-image-146985" ><figcaption class="wp-element-caption">Hallucinating ChatGPT</figcaption></figure>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/3-Datenverarbeitung-begrenzt.png?w=1200&quality=72&ssl=1"  alt="Begrenzte Datenverarbeitung von ChatGPT"  class="wp-image-146986" ><figcaption class="wp-element-caption">Limited data processing by ChatGPT</figcaption></figure>





<p class="wp-block-paragraph"><strong>Trailer? </strong></p>





<p class="wp-block-paragraph">Why a trailer and not another format as a test trial? The answer to this is complex. One reason is the complexity of trailers. They offer the opportunity to really put the capabilities of the technology to the test while leaving enough room for creative experimentation – a perfect basis for investigating different areas of Generative AI, from the conception and generation of an advertising text to video or sound generation. On the other hand, I myself have worked for a long time in the area of trailer production for various companies. I am familiar with the work processes and was therefore able to consistently and specifically identify and analyse relevant parts of trailer production for my Master’s thesis. </p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="509"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/1-Trailer-Analyse-Bestandteile-2.png?resize=1200%2C509&quality=72&ssl=1"  alt="Trailer Analyse – Visuelle Bestandteile eines Endboards"  class="wp-image-146982" ><figcaption class="wp-element-caption">Trailer analysis – visual components of an end board</figcaption></figure>





<p class="wp-block-paragraph">I first analysed the production and content structure of a “classic” television trailer, i.e. the trailer on television, before or after the advertising, which refers to a film, series or similar in the programme. I decided to use a blockbuster film as an example. After “breaking down” such a trailer into the production process on the one hand and analysing the content on the other, I identified various components, such as the concept that defines the basic framework and content structure of a trailer. I also analysed the image and sound level for the components within a trailer. Subsequently, several AI tools were analysed to artificially evaluate or generate the respective components. </p>





<p class="wp-block-paragraph">I used a variety of tools, including specialised AI software for image and sound recognition as well as text, image and sound generation. In the initial conception of the trailer, I naturally worked with ChatGPT. The image and sound recognition was implemented using various cloud models, such as Microsoft Azure or Amazon Rekognition. Tools from Midjourney and Runway were tested for image and video generation, and ElevenLabs and Soundraw for speech and music generation. These tools were not only used to create the individual components, but also to analyse the efficiency and cost-effectiveness of the entire process. There was also a focus on how these tools could be integrated into the overall production process – not just to automate individual aspects, but to enable a consistent and coherent use of AI technology throughout. The aim was to create a coherent model for a possible AI production process.</p>





<p class="wp-block-paragraph">So I developed the model. It defined a clear pipeline in trailer production, ranging from the basic material available, through conception, image and sound processing, to the finalisation of the trailer. Attention was paid not only to the technical realisation, but also to how the AI tools could be integrated into the creative process in order to work not only efficiently, but also innovatively and with high quality.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="605"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/5-Plugin-Zusammenfassung-eines-Transkripts-ueber-Youtube.png?resize=1200%2C605&quality=72&ssl=1"  alt="Zusammenfassung eines Transkripts mit dem ChatGPT-Plugin Video Summary"  class="wp-image-147003" ><figcaption class="wp-element-caption">Summary of a transcript with the ChatGPT plugin Video Summary</figcaption></figure>





<figure class="wp-block-image size-full"><img data-recalc-dims="1" decoding="async" width="1200" height="642" sizes="(max-width: 1200px) 100vw, 1200px" src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/5-Plugin-Zusammenfassung-eines-Transkripts-ueber-Youtube-2.png?resize=1200%2C642&quality=72&ssl=1" alt="ChatGPT prompt in combination with the Video Summary plugin" class="wp-image-147002"/><figcaption class="wp-element-caption">ChatGPT prompt in combination with the Video Summary plugin</figcaption></figure>





<p class="wp-block-paragraph"><strong>Script evaluation via AI? </strong></p>





<p class="wp-block-paragraph">During my test phase, I initially tried to generate as much of the trailer as possible with artificial intelligence, leaving the creative and production decisions to it as far as possible. For example, I used ChatGPT to evaluate film scripts and let the tool decide which passages from the script could be relevant for a trailer. This quickly pushed ChatGPT to its limits, as it could only process a limited number of characters at a time (its context window).</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1" decoding="async" width="1165" height="1004" sizes="(max-width: 1200px) 100vw, 1200px" src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/2-ChatGPT-Halluzinieren.png?resize=1165%2C1004&quality=72&ssl=1" alt="Hallucinated trailer concept from ChatGPT" class="wp-image-146984"/><figcaption class="wp-element-caption">Hallucinated trailer concept from ChatGPT</figcaption></figure>





<p class="wp-block-paragraph">Nevertheless, I managed to generate a selection using various approaches. For example, I chopped the film script into individual parts and fed them to the AI piece by piece. I was then able to use Adobe Premiere Pro’s “text-based editing” to remove all unselected text passages, which meant that the irrelevant scenes were removed directly from the cut sequence. You could say that this textual analysis by ChatGPT yields a trailer structure defined by artificial intelligence. Another approach was iterative summarisation, which can also be implemented using plugins: I had the script summarised into smaller and smaller parts until I could build a trailer concept from it.</p>
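<p class="wp-block-paragraph">The chunking workaround described above can be sketched in a few lines of Python. The chunk size is an arbitrary stand-in for the model’s real context limit, and a single paragraph longer than the limit is passed through whole rather than split mid-sentence.</p>

```python
def chunk_script(script: str, max_chars: int = 12_000) -> list[str]:
    """Split a film script into pieces small enough for a model's context
    window, breaking on paragraph (blank-line) boundaries where possible."""
    paragraphs = script.split("\n\n")
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # +2 accounts for the "\n\n" separator we re-insert below.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

<p class="wp-block-paragraph">Each chunk is then sent to the model separately, and the per-chunk selections are merged afterwards, exactly the piece-by-piece feeding described in the text.</p>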





<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="1168"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/8-Soundraw-Musikgenerierung-1168x1080.png?resize=1168%2C1080&quality=72&ssl=1"  alt="Musikgenerierung mit Soundraw"  class="wp-image-146990" ><figcaption class="wp-element-caption">Music generation with Soundraw</figcaption></figure>





<p class="wp-block-paragraph">One challenge was the so-called multimodality of the various tools, i.e. the overlapping of the different areas in a trailer production. Ideally, you would work with one tool in which you could simultaneously generate a concept, search for image material and integrate image, sound and music generation in parallel. Up to now, different applications have been used to generate the respective content, which is then transferred into another application. However, this can lead to errors in data transmission. The aim must therefore be to integrate the different areas within one interface. This would also open up the possibility of automating the complete process. Recent integrations of different generative AI within one application confirm this trend. For example, it is now possible to generate images with DALL-E 3 from within the text tool ChatGPT. Runway is another example, extending the classic editing process with various AI tools within its own interface. A further example is the progress of AI applications via Adobe Firefly, Adobe’s generative AI: it should soon be possible to edit video material by formulating prompts.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1" decoding="async" width="1200" height="729" sizes="(max-width: 1200px) 100vw, 1200px" src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/7-Sprachsynthese-ueber-ElevenLabs.png?resize=1200%2C729&quality=72&ssl=1" alt="Speech synthesis via ElevenLabs" class="wp-image-147007"/><figcaption class="wp-element-caption">Speech synthesis via ElevenLabs</figcaption></figure>





<p class="wp-block-paragraph">The AI tools were not without their weaknesses. While they did a very good job in many respects, there were also areas where the generated content did not live up to expectations or where human creativity and intuition were still essential. During my test phase, for example, I judged the AI-generated music to be of too low quality for a major blockbuster film and therefore opted for music composed by humans (albeit selected by artificial intelligence).</p>





<p class="wp-block-paragraph">In the end, the ElevenLabs speech tool was chosen for speech generation, although the original plan was to work with Descript. The reason was that Descript still does not offer German voices; only English voices can be generated. When generating text with ChatGPT, one of the challenges was “hallucination”: the model quoted film content that sounded plausible but was invented.</p>





<p class="wp-block-paragraph">The text concepts generated for the trailer cut, which consisted of voiceover text and original sound bites, were sometimes incorrect: ChatGPT invented film scenes and quoted passages. Nevertheless, they formed a good basis on which to build a concept. Surprisingly, the structure and dramaturgical composition of a trailer were recognised correctly throughout.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/4-Auswahl-ChatGPT-Transkript-Szenen.png?w=1200&quality=72&ssl=1"  alt="Auswahl von Filmszenen durch ChatGPT"  class="wp-image-146988" ><figcaption class="wp-element-caption">Selection of film scenes by ChatGPT </figcaption></figure>





<p class="wp-block-paragraph">The generated graphics used for the endboard, i.e. the closing section of a trailer, proved very convincing. Graphic text generation, for example for the broadcast information on the endboard, was weak again. All of this underlines the need for a balanced combination of technological innovation and human creativity in order to succeed in media production while opening up new avenues and possibilities.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="587"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/4-Auswahl-ChatGPT-Transkript-Szenen-2.png?resize=1200%2C587&quality=72&ssl=1"  alt="Textbasierte Bearbeitung Adobe Premiere"  class="wp-image-146987" ><figcaption class="wp-element-caption">Text-based editing Adobe Premiere</figcaption></figure>





<p class="wp-block-paragraph">Finally, four trailers were generated and then evaluated by industry experts: individuals with broad expertise in media production who could provide valuable feedback on the AI-generated trailers. They came from TV broadcasters such as ProSiebenSat.1, Sky Deutschland and Discovery, as well as from agencies.</p>





<p class="wp-block-paragraph">The experts brought their years of experience and in-depth knowledge to the evaluation process, which helped to assess the quality, relevance and impact of the trailers created by AI, while also providing valuable insights into the acceptance, possible areas of improvement and potential of AI-supported production. Initially, it was not stated that the trailers were generated using artificial intelligence. </p>





<p class="wp-block-paragraph">Nevertheless (or perhaps precisely because of this), many parts of the trailers were rated as acceptable. The generated text concept, as well as the voiceover, were described as rather emotionless. </p>





<p class="wp-block-paragraph">The generated endboard, on the other hand, was rated as good to very good. In addition, a trailer was approved by the majority of experts, meaning that it would also have been used in a real situation. The results of the survey showed that integrating AI into a trailer production process can be very useful. The majority of respondents were open to the introduction of AI into the production process. </p>





<p class="wp-block-paragraph">Overall, the results showed that the use of AI in trailer production is not only feasible, but also makes economic sense. According to the model created in the study, cost savings of up to 79 per cent would be possible compared to conventional trailer production. This is primarily due to the increase in efficiency, i.e. the time saved in the creation of trailer content through AI. Basically, in the model, a trailer that would normally take three working days to produce could be created within one working day. It would therefore be possible to use the time gained to customise a trailer and thus implement more target group-oriented trailer marketing. </p>
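<p class="wp-block-paragraph">As a quick plausibility check of these figures: going from three working days to one saves about two thirds of the time; the 79 per cent comes from the thesis’ full cost model, not from the time saving alone. A minimal sketch of the arithmetic:</p>

```python
def savings_pct(before: float, after: float) -> float:
    """Percentage saved when effort or cost drops from `before` to `after`."""
    if before <= 0:
        raise ValueError("`before` must be positive")
    return (before - after) / before * 100


# Time saving stated in the thesis model: three working days down to one.
print(f"time saved: {savings_pct(3, 1):.1f} %")  # roughly 66.7 %
```

<p class="wp-block-paragraph">The gap between the roughly 67 per cent time saving and the quoted 79 per cent cost saving would be explained by the other factors in the thesis’ model; the article itself only attributes the saving “primarily” to efficiency gains.</p>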





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="684"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/11-Animiertes-Endboard-ueber-Runway.png?resize=1200%2C684&quality=72&ssl=1"  alt="Animiertes Trailer-Endboard mit Runway"  class="wp-image-147014" ><figcaption class="wp-element-caption">Animated trailer endboard with Runway</figcaption></figure>





<p class="wp-block-paragraph"><strong>The future of AI in media production: a look into the crystal ball</strong></p>





<p class="wp-block-paragraph">My master’s thesis was intended to shed light on the potential of artificial intelligence in advertising trailer production. The question that arises is: how close are we to realising this potential? While the technology has undoubtedly progressed, we are not yet at the point where AI is seamlessly integrated into the media production process, even if AI is helping to optimise it in specific applications, such as Adobe Sensei. Especially in repeatable and standardised production processes, i.e. “assembly line” tasks, AI can offer significant time savings.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="748"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/11-Animiertes-Endboard-ueber-Runway-2.png?resize=1200%2C748&quality=72&ssl=1"  alt="Prompt Eingabe innerhalb des Gen-2 Modells von Runway"  class="wp-image-147013" ><figcaption class="wp-element-caption">Prompt input within the Gen 2 model of Runway</figcaption></figure>





<p class="wp-block-paragraph">Looking five to ten years into the future, the picture could change dramatically. It is quite conceivable that AI will then no longer merely serve as a supplementary tool, but will be a central player in the creative process. Given the rapid developments in machine learning and deep learning, AI systems could be able to create content of greater complexity and nuance. These could not only support human producers, but in some cases even surpass their contributions.</p>





<p class="wp-block-paragraph">The adoption of AI in media production is nevertheless fraught with challenges – especially among professionals such as editors and producers. Will they recognise the benefits of these new tools and use them themselves? Or will they oppose a technology that they may see as a threat to their jobs? This could lead to strikes and other forms of resistance, as seen recently among Hollywood screenwriters. There is also the inevitable learning curve: the media industry will need to educate itself to acquire the technical skills these tools require. While some professionals already have the knowledge, there is still a significant need for training.</p>





<p class="wp-block-paragraph">On a personal level, I am watching developments in AI technology with great interest. OpenAI’s GPT-4 model, Google’s Gemini, Midjourney and Runway’s Gen-2 demonstrate amazing capabilities in text and image generation. Similarly, platforms such as Adobe Firefly offer enormous opportunities to combine processes. </p>





<p class="wp-block-paragraph"><strong>The transformative role of AI in advertising trailer production – a summary</strong></p>





<p class="wp-block-paragraph">In the constantly evolving world of media production, I wanted to set a scientifically sound point of reference with my master’s thesis “Development of a model for the use of AI software in the advertising sector using the example of a trailer”. </p>





<p class="wp-block-paragraph">By putting my results in writing, conducting a well-founded survey and having the written work reviewed by an independent body (at this point, I would like to thank my supervising professor, Prof. Dr Martin Delp, once again), I was able to give the debate a concrete frame of reference rather than merely “saying” what works and what doesn’t, or what is “good” and what is not, from my own point of view. </p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/12-Umfrage-Freigabe-Trailer-Medienexperten.png?w=1200&quality=72&ssl=1"  alt="Freigabe eines KI-Trailers durch Medienexperten"  class="wp-image-146993" ><figcaption class="wp-element-caption">Approval of an AI trailer by media experts</figcaption></figure>





<p class="wp-block-paragraph">I tried to dive deep into commercial trailer production and shed light on how artificial intelligence can change this landscape. The core finding of the research is that AI has the potential to fundamentally change not only the way advertising trailers are produced but also the creative process itself. Through the targeted use of AI tools, production teams can organise their workflows more efficiently and pursue creative approaches that were previously out of reach.</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="893"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/12-Umfrage-KI-Interesse-Medienexperten.png?resize=1200%2C893&quality=72&ssl=1"  alt="KI-Interesse der Medienexperten"  class="wp-image-146994" ><figcaption class="wp-element-caption">Media experts’ interest in AI</figcaption></figure>





<p class="wp-block-paragraph">However, the master’s thesis I wrote is not just about highlighting the benefits and possibilities of AI – it also strongly emphasises the indispensable role of human input in the process. While AI has impressive technical capabilities, it is human intuition, creativity and expertise that ensure the content produced is accurate, resonates and reaches its audience. With this in mind, I argue for a symbiotic relationship in which both sides contribute their strengths to achieve optimal results.</p>





<p class="wp-block-paragraph">For those seeking a deeper understanding of the nuances, methods and findings of this work, the master’s thesis is available in the Munich University of Applied Sciences library, among other places. It offers detailed insights into the research methodology, the tools used and the findings obtained. Anyone interested can also contact me directly: <a href="mailto:sebastian.gresmann@gmail.com">sebastian.gresmann@gmail.com</a> </p>





<p class="wp-block-paragraph">Oh yes… of course I generated parts of this article with the help of artificial intelligence. Who noticed?</p>





<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="673"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?resize=1200%2C673&quality=72&ssl=1"  alt="Trailer-Endboard mit Hilfe von Midjourney"  class="wp-image-146991" ><figcaption class="wp-element-caption">Trailer endboard with the help of Midjourney</figcaption></figure>





<p class="wp-block-paragraph"></p><p>The post <a href="https://digitalproduction.com/2024/01/13/trailerkuenstliche-intelligenz-trifft-medienproduktion/">Trailer! Artificial intelligence meets media production</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?fit=1920%2C1076&#038;quality=72&#038;ssl=1" length="1020561" type="image/png" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?fit=1200%2C673&#038;quality=72&#038;ssl=1" width="1200" height="673" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>Trailer-Endboard mit Hilfe von Midjourney</media:title>
	<media:description type="html"><![CDATA[Trailer-Endboard mit Hilfe von Midjourney]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/9-Endboard-Midjourney-Logos-und-Sendezeit-wurden-haendisch-hinzugefuegt.png?fit=1200%2C673&#038;quality=72&#038;ssl=1" width="1200" height="673" />
<post-id xmlns="com-wordpress:feed-additions:1">146977</post-id>	</item>
		<item>
		<title>EbSynth &#8211; a tool for animations from videos by style transfer of reference images</title>
		<link>https://digitalproduction.com/2024/01/11/ebsynth-a-tool-for-animations-from-videos-by-style-transfer-of-reference-images/</link>
		
		<dc:creator><![CDATA[Ralf Gliffe]]></dc:creator>
		<pubDate>Thu, 11 Jan 2024 13:10:12 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[DP2401]]></category>
		<category><![CDATA[Ebsynth]]></category>
		<category><![CDATA[KI]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Test]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=146033</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?fit=1200%2C647&quality=72&ssl=1" width="1200" height="647" title="Das Video auf der Startseite gibt einen Einblick in die Wirkungsweise des Programms." alt="Das Video auf der Startseite gibt einen Einblick in die Wirkungsweise des Programms." /></div><div><p>EbSynth is a free video animation tool, still in beta, that enables AI-powered image manipulation. Users stylise keyframes and the program propagates the effects, although the spartan user interface takes some getting used to. The results are promising, but the programme requires planning and practice to achieve high-quality animations.</p>
<p>The post <a href="https://digitalproduction.com/2024/01/11/ebsynth-a-tool-for-animations-from-videos-by-style-transfer-of-reference-images/">EbSynth – a tool for animations from videos by style transfer of reference images</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/ralfgliffe/">Ralf Gliffe</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?fit=1200%2C647&quality=72&ssl=1" width="1200" height="647" title="Das Video auf der Startseite gibt einen Einblick in die Wirkungsweise des Programms." alt="Das Video auf der Startseite gibt einen Einblick in die Wirkungsweise des Programms." /></div><div>
<p class="wp-block-paragraph">The decidedly spartan software is still in the beta phase and can be used free of charge, even commercially. An enthusiastic fan community shows impressive results online, created by combining EbSynth with various, mostly AI-driven effect and video tools. We launched the programme and applied some keyframes to short video sequences.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="772" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/start-4k-1.png?resize=1200%2C772&quality=72&ssl=1"  alt="So startet EbSynth – für manche Anwender etwas gewöhnungsbedürftig, gibt es nicht sehr viele UI-Elemente: Projekte öffnen, speichern und Bildsequenzen exportieren sowie einige Parameter, um Einfluss auf die Ausgabequalität zu nehmen. Zum Programmstart sollten Einzelbilder des zu bearbeitenden Videos und bearbeitete Schlüsselbilder schon vorhanden sein, damit die Pfadangaben zu den entsprechenden Verzeichnissen angegeben werden können. Bei Bedarf lässt sich auch der Maskenordner aktivieren. Mit den Buttons „Synth“ bzw. „Run All“ wird die Berechnung der Einzelbilder der neuen Animation gestartet. Das Ergebnis wird im Output-Ordner abgelegt. Neben den Wichtungen der Stärke der Effekte lassen sich unter „Advanced“ noch Werte für die Genauigkeit des Textur-Mappings, zum Verringern des Rauschens und zur Komplexität der erstellten Stile vorgeben. Mehrere Schlüsselbilder können jeweils bestimmte zeitliche Bereiche steuern und sich überlappen, um sanfte Überblendungen möglich zu machen."  class="wp-image-146089" ><figcaption class="wp-element-caption">This is how EbSynth starts. For some users it takes a little getting used to: there are not many UI elements – open and save projects, export image sequences, and a few parameters to influence the output quality. When the programme starts, single frames of the video to be edited and the edited keyframes should already exist so that the paths to the corresponding directories can be specified. If required, the mask folder can also be activated. The “Synth” or “Run All” button starts the calculation of the individual frames of the new animation; the result is saved in the output folder. In addition to weighting the strength of the effects, values for the accuracy of the texture mapping, for noise reduction and for the complexity of the created styles can be set under “Advanced”. Several keyframes can each control certain temporal ranges and overlap to enable smooth crossfades.</figcaption></figure>



<p class="wp-block-paragraph"><strong>Naturally stupid – or artificially intelligent?</strong></p>



<p class="wp-block-paragraph">We have heard it so often we can hardly bear it any more: AI – artificial intelligence – will change all our lives. Some hope that smart decisions will finally improve our lives; others are afraid. How thoroughly graphics and video applications are now dominated by AI tools is something the occasional user may only realise when actually working with them – with EbSynth, for example. EbSynth “needs the support” of video and graphics software, and capable software at that. Most modern tools promise to make use of the new artificial intelligence, as does EbSynth itself. When creating and manipulating images and videos, there is hardly any way around “AI” (the website <a href="http://is.gd/buzz_AItools">is.gd/buzz_AItools</a> is said to list over 1,000 AI tools, including a large number of graphics programmes).</p>



<p class="wp-block-paragraph"><strong>EbSynth – great image effects for a whole video?</strong></p>



<p class="wp-block-paragraph">The EbSynth website, <a href="http://www.ebsynth.com">www.ebsynth.com</a>, shows impressive large-scale effects on video sequences and advertises with the slogan: “Bring your paintings to animated life”. Otherwise it is “very tidy”: apart from the link to download the programme and a more detailed video with “before” and “after” effect examples, it only offers buttons for email contact, social media channels – including the Secret Weapons YouTube channel with almost 14,000 subscribers – and a FAQ list. The download button shows a Windows and an Apple symbol.</p>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-8f761849 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:100%">
<figure class="wp-block-gallery has-nested-images columns-6 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146082"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_1-1-608x1080.png?resize=608%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-146082" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146083"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_2-1.png?resize=608%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-146083" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146084"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_3-1-608x1080.png?resize=608%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-146084" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146085"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_4-1.png?resize=608%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-146085" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146086"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_5-1-608x1080.png?resize=608%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-146086" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="608"  decoding="async"  data-id="146087"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/siebewgtsich_6-1.png?resize=608%2C1080&quality=72&ssl=1"  alt="Die Katze ist klug. Sie bewegt sich sparsam und in Zeitlupe. Darum gelingt es EbSynth auch beim ersten Anlauf und mit nur einem Schlüsselbild, ihr einigermaßen zu folgen. Hier wurden imitierte Malstile von DAP (Digital Auto Painter) verwendet."  class="wp-image-146087" ></figure>
<figcaption class="blocks-gallery-caption wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-secondary-color">The cat is smart. It moves sparingly and in slow motion. That’s why EbSynth manages to follow it reasonably well at the first attempt and with only one key image. Imitated painting styles from DAP (Digital Auto Painter) were used here.</mark></figcaption></figure>
</div>
</div>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><br />If you haven’t “learnt the ropes” beforehand, you will probably be a little confused when you first start the programme: The start window of EbSynth – the actual programme window – is almost as spartan as the website. There are no menus apart from links for Open, Save and Export to After Effects.</p>



<p class="wp-block-paragraph"><br />In order for the programme to do anything with a new project, paths for prepared directories with corresponding image files must be placed in fields for “Keyframes”, “Video” and optionally also for “Mask” (“Select” buttons or via drag & drop). Values can also be assigned for weighting the influence of the effects of the keyframes or the original video. Under “Advanced” there are input fields for mapping, de-flicker, diversity and synthesis detail, which are explained in the FAQ. An output directory (for individual images) can be specified and there is an option to switch on the GPU for rendering. The calculation process is started with one of the two green buttons: “Synth” if only one keyframe is used, “Run All” if several keyframes are used.</p>



<p class="wp-block-paragraph"><br />MAPPING: A higher mapping value ensures that the output strokes appear in the same position as in the keyframe. With a lower value, EbSynth may rearrange the input features so that they appear in other places in the output. Good values are between 5 and 50.</p>



<p class="wp-block-paragraph"><br />DE-FLICKER: De-flicker suppresses texture flickering between consecutive frames. Good values are between 0.3 and 2.0. If the value is set to zero, the output sequence flickers as if each image were painted independently. The stronger the value for de-flicker, the more coherent the output will be over time.</p>



<p class="wp-block-paragraph"><br />DIVERSITY: Diversity determines the visual variety of the keyframe style in the output sequence. Good values are between 1,000 and 20,000. If diversity is set too low, repeating patterns may appear in the output, similar to the artefacts caused by overlapping strokes of a clone brush in Photoshop.</p>
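<p class="wp-block-paragraph">The “good” ranges above can be captured in a small helper script – a hypothetical convenience for one’s own pipeline, not part of EbSynth itself – that warns when a setting leaves the recommended window:</p>

```python
# Recommended ranges for EbSynth's Advanced parameters, as given in its FAQ.
# This checker is a hypothetical convenience script, not part of EbSynth.
GOOD_RANGES = {
    "mapping":    (5, 50),         # higher = strokes stick to keyframe positions
    "de_flicker": (0.3, 2.0),      # higher = more temporal coherence
    "diversity":  (1_000, 20_000)  # lower = risk of repeating, clone-brush-like patterns
}

def check_settings(settings: dict) -> list[str]:
    """Return a warning for every value outside its recommended range."""
    warnings = []
    for name, value in settings.items():
        lo, hi = GOOD_RANGES[name]
        if not lo <= value <= hi:
            warnings.append(f"{name}={value} outside recommended range {lo}..{hi}")
    return warnings

print(check_settings({"mapping": 10, "de_flicker": 0.5, "diversity": 500}))
# -> ['diversity=500 outside recommended range 1000..20000']
```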



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="647"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?resize=1200%2C647&quality=72&ssl=1"  alt="Das Video auf der Startseite gibt einen Einblick in die Wirkungsweise des Programms."  class="wp-image-146077" ><figcaption class="wp-element-caption">The video on the home page gives an insight into how the programme works.</figcaption></figure>



<p class="wp-block-paragraph"><br /><strong>Synthesis – the workflow</strong></p>



<p class="wp-block-paragraph">In order to edit a video with EbSynth, the video must – in the current beta – be available as a sequence of individual images in the “Video” folder, which most common video programmes can produce. Depending on the image content and the movement in the scene, a copy of one image in the sequence can be turned into a keyframe, or it may be necessary to define several keyframes. Painting or other effects are then applied to these keyframes. It is important that they use the same names and the same resolution as the images in the “Video” directory; a missing file in the video directory can also cause the programme to abort prematurely. In many examples found online, AI software such as Stable Diffusion is used to manipulate the keyframes, and there are commercial offerings for manipulating videos that specifically advertise the combination of Stable Diffusion and EbSynth. “Simply painting” with any paint software, or rotoscoping with DaVinci Resolve or After Effects, is also possible. With suitable content, complete animations can be created with just a few keyframes. The FAQ points out that it makes sense to prepare videos intended for EbSynth properly already during filming.</p>
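<p class="wp-block-paragraph">Because mismatched names or a missing frame can abort a long synthesis run, it can pay to sanity-check the project layout before starting. A minimal sketch – the folder names (“video”, “keys”) and the zero-padded PNG naming scheme are assumptions for illustration, not something EbSynth dictates:</p>

```python
# Sanity-check an EbSynth-style project layout before synthesis.
# Folder names ("video", "keys") and zero-padded PNG frame names
# are assumptions for this sketch.
from pathlib import Path

def check_project(root: str) -> list[str]:
    """Report keyframes without a matching video frame and gaps in the sequence."""
    video = {p.name for p in Path(root, "video").glob("*.png")}
    keys = {p.name for p in Path(root, "keys").glob("*.png")}
    problems = [f"keyframe {k} has no matching video frame" for k in sorted(keys - video)]
    # A missing file mid-sequence can abort the run, so check for gaps too.
    numbers = sorted(int(Path(n).stem) for n in video)
    if numbers:
        expected = set(range(numbers[0], numbers[-1] + 1))
        for missing in sorted(expected - set(numbers)):
            problems.append(f"video frame {missing:04d}.png is missing")
    return problems
```

Run against the project root before pressing “Synth”; an empty list means the layout is consistent under these naming assumptions.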



<p class="wp-block-paragraph"><br />Among other things, it is emphasised that the footage should be correctly exposed and lit as diffusely as possible so that EbSynth can track the effects in the moving image well; hard and moving shadows are problematic. For characters’ clothing, appealing, clearly recognisable prints are recommended, for example. Flat textures, monochrome fabrics, reflective materials and repeating patterns prone to moiré are problematic here, as they are in “normal” video recordings. If visual tracking with one keyframe is no longer sufficient, it is recommended to manually rework the first “unclean” frame into the next keyframe. In EbSynth, a time range can be defined for each keyframe, and crossfades can later be defined where the keyframe ranges overlap. When EbSynth has finished its synthesis, the result can be exported to After Effects; however, the finished individual frames in the “Output” folder can also be processed further with any other video software. Keyframes and effects can be painted onto transparent backgrounds, in which case painted frames and reference frames must be precisely aligned. It is also possible to rotoscope the foreground and add an alpha mask sequence as mask input in EbSynth (the “Mask” folder). </p>



<p class="wp-block-paragraph"></p>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-8f761849 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:100%">
<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/seit3-4k-1.png?w=1200&quality=72&ssl=1"  alt=""  class="wp-image-146081" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="628"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/seit2-1.png?resize=1200%2C628&quality=72&ssl=1"  alt=""  class="wp-image-146080" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="628"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/seit1-1.png?resize=1200%2C628&quality=72&ssl=1"  alt=""  class="wp-image-146079" ><figcaption class="wp-element-caption">The website for EbSynth is just as spartan as the programme. The animated images are nice to look at.</figcaption></figure>
</div>
</div>



<p class="wp-block-paragraph">Initial tests have shown that EbSynth does its job. Without thorough planning and preparation, however, EbSynth can quickly transfer the desired effects to the video completely incorrectly – and only slowly at that, since results become visible only after the calculation has finished. Short, quiet sequences that fulfil the conditions listed above can help improve the results, as can splitting the material into several scenes or filming people who overlap in the image separately (cropped and later recombined as video layers).</p>



<p class="wp-block-paragraph"><strong>Conclusion</strong></p>



<p class="wp-block-paragraph">In our opinion, EbSynth is an interesting approach and even in the beta version it produces appealing results. The biggest advantage is that image manipulations in videos only need to be carried out on specific images (keyframes). EbSynth then transfers the respective image style quite precisely to the individual frames of the video. It is possible to create complete animations in a simple style with just a few rotoscope drawings.</p>



<p class="wp-block-paragraph"><br />EbSynth’s UI is fairly spartan and the workflow essentially consists of splitting the original video into individual frames in suitable software. Depending on the action, one or more keyframes must be defined and manipulated using any tool. A mask can be defined for certain areas and some parameters can be set to control the output quality. Trial and error is the order of the day here. The results of EbSynth’s work are once again individual images that need to be processed further.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  data-id="146070"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_1-1.png?w=1200&quality=72&ssl=1"  alt=""  class="wp-image-146070" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  data-id="146071"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_2-1.png?w=1200&quality=72&ssl=1"  alt=""  class="wp-image-146071" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="771"  height="434"  data-id="146075"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_6-1.png?resize=771%2C434&quality=72&ssl=1"  alt=""  class="wp-image-146075" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="772"  height="434"  data-id="146073"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_4-1.png?resize=772%2C434&quality=72&ssl=1"  alt=""  class="wp-image-146073" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="771"  height="434"  data-id="146072"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_3-1.png?resize=771%2C434&quality=72&ssl=1"  alt=""  class="wp-image-146072" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="771"  height="434"  data-id="146074"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/girl_5-1.png?resize=771%2C434&quality=72&ssl=1"  alt=""  class="wp-image-146074" ></figure>
<figcaption class="blocks-gallery-caption wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-secondary-color">The website for EbSynth is just as spartan as the programme. The animated images are nice to look at.</mark></figcaption></figure>



<p class="wp-block-paragraph"><br />The technique is only suitable for short sequences, and the results depend heavily on the scene content, among other things. Despite the programme being easy to operate, we find the workflow still a little cumbersome: because of the minimal UI and the lack of built-in help, good results require a lot of time and practice. Nevertheless, EbSynth produces fascinating results, and we look forward to the further development of the programme (including its price).</p><p>The post <a href="https://digitalproduction.com/2024/01/11/ebsynth-a-tool-for-animations-from-videos-by-style-transfer-of-reference-images/">EbSynth – a tool for animations from videos by style transfer of reference images</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/ralfgliffe/">Ralf Gliffe</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?fit=1920%2C1035&#038;quality=72&#038;ssl=1" length="691063" type="image/png" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?fit=1200%2C647&#038;quality=72&#038;ssl=1" width="1200" height="647" medium="image" type="image/png">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>The video on the homepage gives an insight into how the programme works.</media:title>
	<media:description type="html"><![CDATA[The video on the homepage gives an insight into how the programme works.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/input_image_key_result-4k-1.png?fit=1200%2C647&#038;quality=72&#038;ssl=1" width="1200" height="647" />
<post-id xmlns="com-wordpress:feed-additions:1">146033</post-id>	</item>
		<item>
		<title>Countdown to the FMX! 2 weeks!</title>
		<link>https://digitalproduction.com/2023/04/13/countdown-zur-fmx-2-wochen/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Thu, 13 Apr 2023 09:41:14 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Avatar]]></category>
		<category><![CDATA[FMX]]></category>
		<category><![CDATA[ICVFX]]></category>
		<category><![CDATA[KI]]></category>
		<category><![CDATA[StageCraft]]></category>
		<category><![CDATA[Stuttgart]]></category>
		<category><![CDATA[virtual production]]></category>
		<category><![CDATA[Wētā FX]]></category>
		<guid isPermaLink="false">https://www.digitalproduction.com/?p=116061</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/APFS.jpg?fit=1200%2C633&quality=80&ssl=1" width="1200" height="633" title="" alt="" /></div><div><p>Only 2 weeks to go until FMX! Now confirmed: Avatar 2 and Ant-Man and the Wasp, as well as expert panels on virtual production and AI-based voice and image generators.</p>
<p>The post <a href="https://digitalproduction.com/2023/04/13/countdown-zur-fmx-2-wochen/">Countdown to the FMX! 2 weeks!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/APFS.jpg?fit=1200%2C633&quality=80&ssl=1" width="1200" height="633" title="" alt="" /></div><div>
<p><strong>The creation of the photorealistic world of Pandora for AVATAR: THE WAY OF WATER</strong></p>
<p>Wētā FX brought the photorealistic world of Pandora to life in <strong>AVATAR: THE WAY OF WATER</strong> by basing their work on scientific fact.<strong> Pavani Rao Boddapati</strong> (VFX Supervisor), Sam Cole (Associate VFX Supervisor) and <strong>Stephen Clee</strong> (Animation Supervisor) will discuss Wētā’s return to Pandora from its beginnings in R&D and the development of new technology for on-set work to the final artistic touches of a shot, sharing behind-the-scenes looks, stories and exclusive moments with the FMX audience.</p>
<p><strong>Stuart Adcock talks about Wētā’s “Anatomically Plausible Facial System”</strong></p>
<p>Wētā FX’s new facial expression software, used on <strong>AVATAR: THE WAY OF WATER</strong>, goes beyond the state of the art with a new set of controls to recreate actors’ facial expressions. <strong>Stuart Adcock</strong>, Head of Facial Motion at Wētā FX, talks about Wētā’s new Anatomically Plausible Facial System (APFS) and how it was used to create believable, emotional acting performances in James Cameron’s AVATAR sequel.</p>
<p><strong>Visual effects and virtual production for ANT-MAN AND THE WASP: QUANTUMANIA</strong></p>
<p><strong>Charmaine Chan</strong> (Associate VFX Supervisor) and <strong>Laurie Priest</strong> (CG Supervisor) from Industrial Light & Magic will talk about the visual effects and virtual production techniques used in Marvel’s hit film <strong>ANT-MAN AND THE WASP: QUANTUMANIA</strong>. In particular, they will focus on the StageCraft shoot that Chan oversaw: StageCraft technology was used to project virtual backgrounds onto giant LED walls in real time, eliminating the need for green screens and creating the film’s unique locations. The actors were placed inside the immersive LED studio, where practical sets were combined with digital extensions on the walls. StageCraft was originally developed by ILM in 2018 for the first season of Lucasfilm’s hit Disney series THE MANDALORIAN.</p>
<p><strong>Industry panel on the economics of virtual production techniques</strong></p>
<p>The panel “The Economics of Virtual Production – ICVFX” will focus on the extent to which in-camera visual effects (ICVFX) can contribute to the economic viability of virtual production. The panellists will consider real budgets and their share of the overall budget, as well as the schedule required to realise a project and the number of crew members needed for a successful production. Focusing on these factors – budgets, schedules and personnel – can lead to greater acceptance of virtual production techniques. The panel will also provide insights into the latest developments and innovative approaches to production business models. The panellists are <strong>James Thomas</strong>, Virtual Production Executive at Amazon Studios; <strong>Chris Bannister</strong>, Executive Producer of Virtual Production at Industrial Light & Magic; <strong>Lauren Paul</strong>, VP of Sales & Marketing at Lux Machina Consulting | NEP Virtual Studios and <strong>Paolo Tamburrino</strong>, Sr. Industry Strategy Manager, Autodesk & Executive Producer.</p>
<p><strong><img data-recalc-dims="1"  decoding="async"  class="size-full wp-image-116062 aligncenter"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/AI2.jpg?resize=892%2C675&quality=80&ssl=1"  alt=""  width="892"  height="675" >AI panel on art, archetypes and algorithms</strong></p>
<p>Artificial intelligence (AI) has opened up whole new worlds in the fields of creativity and innovation, for example through large language models (LLMs) such as the chatbot ChatGPT and image generators that can produce authentic-looking, creative text and images from any input. However, these models also reflect the subconscious patterns and archetypes that permeate the data on which they were trained – data drawn from the collective unconscious of human culture. In this AI panel, artist <strong>Dave McKean</strong>; <strong>Sam Hodge</strong>, founder of Kognat; <strong>Andrew Cochrane</strong>, Immersive Content Creator at The AV Club Productions, Inc; and <strong>Scott Broock</strong>, founder of Totem Networks, LLC, will talk about how these models can be used to create entirely new myths that enrich our understanding of ourselves and the world. They will also discuss the ethical challenges and risks of AI applications that exploit these archetypes and stereotypes, for example to manipulate opinions and emotions in the context of fake news.</p>
<p><strong>Forum News</strong></p>
<p>Below you will find the latest announcements from the forum areas. <strong>Workshops</strong>: <a href="https://www.cudocompute.com/" target="_blank" rel="noopener">Cudo Compute</a>, <a href="https://www.nick.de/" target="_blank" rel="noopener">Nickelodeon</a>, <a href="https://pixstone.com/" target="_blank" rel="noopener">PixStone Images</a>, <a href="https://silverdraft.com/" target="_blank" rel="noopener">Silverdraft</a>, <a href="https://vrbn.io/" target="_blank" rel="noopener">vrbn studios</a>, <a href="https://www.wacom.com/" target="_blank" rel="noopener">Wacom</a>.</p>
<p><strong>Gold Partners</strong>: <a href="https://www.wetafx.co.nz/" target="_blank" rel="noopener">Wētā FX</a> and <a href="https://amcrs.de/" target="_blank" rel="noopener">AMCRS</a> are the two Gold Partners of this year’s FMX.</p>
<p>You can find more information <a href="https://amcrs.de/" target="_blank" rel="noopener">here</a>.</p><p>The post <a href="https://digitalproduction.com/2023/04/13/countdown-zur-fmx-2-wochen/">Countdown to the FMX! 2 weeks!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/APFS.jpg?fit=1920%2C1013&#038;quality=80&#038;ssl=1" length="23698" type="image/jpeg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/APFS.jpg?fit=1200%2C633&#038;quality=80&#038;ssl=1" width="1200" height="633" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/04/APFS.jpg?fit=1200%2C633&#038;quality=80&#038;ssl=1" width="1200" height="633" />
<post-id xmlns="com-wordpress:feed-additions:1">116061</post-id>	</item>
		<item>
		<title>Does Alex make Toni unemployed?</title>
		<link>https://digitalproduction.com/2019/04/13/macht-alex-den-toni-arbeitslos-retro-artikel/</link>
		
		<dc:creator><![CDATA[Uli Plank]]></dc:creator>
		<pubDate>Sat, 13 Apr 2019 08:00:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artikel]]></category>
		<category><![CDATA[DP Article]]></category>
		<category><![CDATA[DP1904]]></category>
		<category><![CDATA[Künstliche Intelligenz]]></category>
		<category><![CDATA[KI]]></category>
		<category><![CDATA[Plug-in]]></category>
		<category><![CDATA[Premiere Pro]]></category>
		<category><![CDATA[subscribers]]></category>
		<guid isPermaLink="false">https://www.digitalproduction.com/?p=99485</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?fit=705%2C644&quality=80&ssl=1" width="705" height="644" title="" alt="" /></div><div><p>A look back: How does audio AI work in practice? "Alex Audio Butler" consists of VST plug-ins that are designed to handle the sound mixing independently. The big players in the VST game: DaVinci Resolve, Premiere Pro and Adobe Audition.</p>
<p>The post <a href="https://digitalproduction.com/2019/04/13/macht-alex-den-toni-arbeitslos-retro-artikel/">Does Alex make Toni unemployed?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?fit=705%2C644&quality=80&ssl=1" width="705" height="644" title="" alt="" /></div><div><p>Alex Audio Butler (AAB for short) is a small series of VST plug-ins that plug into popular video programmes and promise to handle sound mixing independently. Currently these are DaVinci Resolve, Premiere Pro and Adobe Audition; others are in the works. AAB runs under Windows 10 and macOS 10.13 or higher, even natively on M1 Macs, but it does not work with the App Store version of Resolve. (But who wants that?)</p>
<p>My expert colleague Björn Eichelbaum gives you a general overview of AI in the audio sector in this issue from page 72 onwards. I, on the other hand, am a tester without any audio expertise; at best, in my younger years I was able to use the equaliser on my stereo system to crank up my music to disco levels. In demanding film projects, I have always respected the important role of sound and left this work to the professionals.</p>
<p><strong><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99486"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?resize=705%2C644&quality=80&ssl=1"  alt=""  width="705"  height="644" ><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99487"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_002.jpg?resize=703%2C646&quality=80&ssl=1"  alt=""  width="703"  height="646" ><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99488"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_003.jpg?resize=703%2C646&quality=80&ssl=1"  alt=""  width="703"  height="646" ></strong></p>
<p><strong>Final mix from the robot</strong></p>
<p>But what about a small, underfunded project with no budget at all? We carried out a practical test on a video portrait with which a university from an emerging country wanted to introduce itself. Alex Audio Butler from the Dutch company Unimule was asked to control the volume, use a compressor if necessary and take care of the audio ducking (i.e. reducing the background music or effects when speech starts).</p>
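<p>The auto-ducking behaviour described above can be sketched in a few lines. This is a generic illustration of the technique, not Unimule's actual algorithm; all function names, thresholds and time constants are invented for the example:</p>

```python
import numpy as np

def duck_music(voice, music, sr, threshold=0.05, duck_gain=0.25,
               attack_s=0.05, release_s=0.5):
    """Reduce music gain while the voice signal is active (audio ducking).

    voice, music: mono float arrays in [-1, 1]; sr: sample rate in Hz.
    duck_gain: music gain applied while speech is detected (0.25 ~= -12 dB).
    """
    # One-pole envelope follower on the rectified voice signal:
    # fast attack so ducking starts promptly, slow release so the
    # music does not pump back up between words.
    a_att = np.exp(-1.0 / (attack_s * sr))
    a_rel = np.exp(-1.0 / (release_s * sr))
    env = np.empty_like(voice)
    level = 0.0
    for i, x in enumerate(np.abs(voice)):
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Duck the music wherever the voice envelope exceeds the threshold.
    gain = np.where(env > threshold, duck_gain, 1.0)
    return music * gain

# Half a second of silence followed by half a second of "speech",
# mixed against a continuous music bed.
sr = 48_000
t = np.arange(sr) / sr
voice = np.concatenate([np.zeros(sr // 2),
                        0.5 * np.sin(2 * np.pi * 220 * t[: sr // 2])])
music = 0.3 * np.sin(2 * np.pi * 440 * np.arange(len(voice)) / sr)
ducked = duck_music(voice, music, sr)
```

<p>A real plug-in would additionally smooth the gain curve and work on overlapping blocks, but the envelope-follower-plus-threshold idea is the core of what "reducing the background music when speech starts" means.</p>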
<p>We worked with Resolve, where AAB can be found after installation under the filters for Fairlight in Uncategorised > VST. When editing, we naturally made sure to assign the sounds to individual tracks according to their function. AAB differentiates between Voice, Music and Sound FX (although the latter were not used in our project). However, you should not only place the little helpers on the individual tracks, but also on the master.</p>
<p>After selecting the function for the respective track, you don’t have to deal with hertz, decibel or attack and decay times in the settings. Instead, the respective task is selected from a few generally understandable terms, which are also provided with explanatory texts. Even a video enthusiast should have no problem understanding them – if necessary, it helps to try them out. The PDF manual is limited to installation and licensing. In itself, this is sufficient, but you can also find additional videos under “Alex Audio Butler” on YouTube.</p>
<p><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99489"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_004.jpg?resize=700%2C535&quality=80&ssl=1"  alt=""  width="700"  height="535" ></p>
<p><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99490"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_005.jpg?resize=700%2C437&quality=80&ssl=1"  alt=""  width="700"  height="437" ><img data-recalc-dims="1"  decoding="async"  class="alignnone size-full wp-image-99491"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_006.jpg?resize=545%2C863&quality=80&ssl=1"  alt=""  width="545"  height="863" ></p>
<p><strong>Push </strong></p>
<p>If you try to output the complete work straight away, you often get an error message saying that Alex has not finished its analysis yet. There is no point in simply waiting; instead, you have to give the little helper a nudge. Either listen to the entire film through once, or output the sound alone two or three times, which is usually quicker than rendering all the images.</p>
<p>You will then receive a message that the analysis was successful and you can now render the entire film with video. However, the screen messages should be a little larger – we could hardly recognise them on a 27-inch monitor with 2,560 x 1,440 pixels, although the fonts in Resolve were still clearly legible.</p>
<p><strong>Result </strong></p>
<p>The result is certainly worth listening to – a complete audio amateur could hardly do better. In our case, the main issue was the usual problem with inexperienced speakers, who often start loudly and then get quieter and quieter, or slowly increase in volume. AAB manages this quite well with internal keyframes, without raising the background noise during pauses, because it recognises the human voice. A compressor can also be applied in several stages. Background music from two completely different sources on one track was levelled well to the same subjective volume; you can, however, also use several tracks with different settings for the music. You only specify the volume for music in the foreground and the auto-ducking ratio. When assessing compression and audio ducking, much also depends on the monitoring system: we were quite satisfied with the result via the computer’s loudspeakers and via high-quality headphones, but the mix drew criticism when demonstrated on a larger sound system. No problem: we quickly adjusted the compression and ducking, gave Alex a new push, and the university was satisfied too.</p>
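<p>The compressor mentioned above can be reduced to a simple static curve. This is a textbook hard-knee compressor sketch, not AAB's internal maths; the threshold and ratio values are illustrative:</p>

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static hard-knee compressor curve: output level for a given input level.

    Above the threshold, level changes are divided by `ratio`;
    below it, the signal passes through unchanged.
    """
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

# A peak at -6 dB with threshold -18 dB and ratio 4:1 is 12 dB over
# the threshold, so it is pulled down by 12 * (1 - 1/4) = 9 dB.
out = compressor_gain_db(-6.0)   # -15.0 dB
quiet = compressor_gain_db(-24.0)  # unchanged: -24.0 dB
```

<p>Running such a curve "in several stages" simply means chaining it with different thresholds and gentler ratios, which tames peaks more gradually than one aggressive pass.</p>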
<p><strong>Comment </strong></p>
<p>What AAB cannot do at all is repair errors and faults in the recording. This has to be done by specialists with trained ears through precise adjustment of appropriate filters (which nowadays can also be based on AI). Alex will also not deliver a perfect feature film mix, where the emotional impact depends on very subtle factors. But the Butler can handle simple projects with interviews, voice-over and background music with technically clean recordings – currently for the introductory price of 79 euros.</p><p>The post <a href="https://digitalproduction.com/2019/04/13/macht-alex-den-toni-arbeitslos-retro-artikel/">Does Alex make Toni unemployed?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?fit=705%2C644&#038;quality=80&#038;ssl=1" length="24817" type="image/jpeg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?fit=705%2C644&#038;quality=80&#038;ssl=1" width="705" height="644" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2022/02/Macht-Alex-den-Toni-arbeitslos_001.jpg?fit=705%2C644&#038;quality=80&#038;ssl=1" width="705" height="644" />
<post-id xmlns="com-wordpress:feed-additions:1">99485</post-id>	</item>
	</channel>
</rss>
