<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="https://digitalproduction.com/wp-content/plugins/xslt/public/template.xsl"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:rssFeedStyles="http://www.wordpress.org/ns/xslt#"
>

<channel>
	<title>DP2304 - DIGITAL PRODUCTION</title>
	<atom:link href="https://digitalproduction.com/tag/dp2304/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitalproduction.com</link>
	<description>Magazine for Digital Media Production</description>
	<lastBuildDate>Thu, 19 Feb 2026 07:57:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">236729828</site>
	<item>
		<title>C&#038;R.me &#8211; post-production in the cloud</title>
		<link>https://digitalproduction.com/2023/11/15/cr-me-post-production-in-the-cloud/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Wed, 15 Nov 2023 11:13:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[AWS for VFX]]></category>
		<category><![CDATA[C&R.me]]></category>
		<category><![CDATA[cloud-native studios]]></category>
		<category><![CDATA[DevOps for VFX]]></category>
		<category><![CDATA[DP2304]]></category>
		<category><![CDATA[episodic production cloud]]></category>
		<category><![CDATA[European film VFX]]></category>
		<category><![CDATA[Ftrack]]></category>
		<category><![CDATA[GPU compute cloud]]></category>
		<category><![CDATA[Houdini]]></category>
		<category><![CDATA[hybrid cloud pipeline]]></category>
		<category><![CDATA[hybrid cloud VFX]]></category>
		<category><![CDATA[Infrastructure as Code VFX]]></category>
		<category><![CDATA[Maya cloud workflows]]></category>
		<category><![CDATA[multi-cloud VFX solutions]]></category>
		<category><![CDATA[Nuke]]></category>
		<category><![CDATA[on-premise VFX infrastructure]]></category>
		<category><![CDATA[Orca Studios]]></category>
		<category><![CDATA[real-time collaboration VFX]]></category>
		<category><![CDATA[remote desktop VFX]]></category>
		<category><![CDATA[remote workstations VFX]]></category>
		<category><![CDATA[Skylite Studios]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Teradici PCoIP protocol]]></category>
		<category><![CDATA[VFX cloud scalability]]></category>
		<category><![CDATA[VFX licensing issues]]></category>
		<category><![CDATA[VFX virtual machines]]></category>
		<category><![CDATA[virtual studio setup]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=153195</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/ark2_trailer2_still_1-1.jpg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)" alt="The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)" /></div><div><p>It's been a few years now since the term "cloud" has gone from buzzword to hype to working reality - but so far there are<br />
very few tools that can do everything in the cloud. Sure, some tools have access or even their "own clouds", but wouldn't an "everything in the cloud" solution be better?</p>
<p>The post <a href="https://digitalproduction.com/2023/11/15/cr-me-post-production-in-the-cloud/">C&R.me – post-production in the cloud</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/ark2_trailer2_still_1-1.jpg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)" alt="The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)" /></div><div>
<p class="wp-block-paragraph">If you ask around, there are very few who are really completely in the cloud – and one who is, here in Munich, is Mihai Satmarean. He earned his VFX spurs at Trixter as Head of IT and has also worked at Dassault Systems Mindware and in DevOps. You can find an up-to-date overview of his projects at <a href="http://is.gd/mihai_imdb">is.gd/mihai_imdb</a>, or visit <a href="http://candr.me">candr.me</a> for more information.</p>



<p class="wp-block-paragraph"><strong>DP: Hello Mihai! Why did you develop C&R.me?</strong><br />Mihai Satmarean: It was an idea I had been toying with since my early days in the studios, given the inflexibility of local data centres and many other limitations. I had been studying parallel computing and trying to understand networking and storage in a hybrid cloud/on-premises environment. The coronavirus pandemic accelerated all that and even made the pure cloud approach the number one priority.<br />I was very inspired by the technology of Bernie Kimbacher, who has published some tutorials together with AWS. The main proponent of the idea in practice was a former Trixter colleague, Adrian Corsei, who was Head of Studio at Orca at the time and called me at some point asking about the possibility of creating a completely virtual studio.</p>



<p class="wp-block-paragraph">I saw the opportunity to bring together all the previous work and ideas into a functioning PoC. So we put our heads together virtually, and within no time we had something running on AWS. It was all remote and asynchronous. Eight months later, we met in person for the first time.</p>



<p class="wp-block-paragraph">There has been a trend for some time now to move as much infrastructure as possible to the cloud, especially if there is not much hardware “on-prem” – mainly for reasons of flexibility and scalability, and to avoid large upfront investments. Building infrastructure in the cloud makes it possible to link costs directly to projects and to scale the infrastructure with the project(s) with minimal upfront investment. The pandemic, working from home and supply-chain issues have accelerated this idea, but I wouldn’t say they are the main reasons or drivers.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="563" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Screenshot-2023-05-06-at-12.15.39-1.jpg?resize=1200%2C563&quality=80&ssl=1"  alt=""  class="wp-image-153242" ><figcaption class="wp-element-caption">The system at a glance: together with the pipeline software – for example FTrack – the cloud is controlled via DevOps.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="346" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2022-08-21-la-07.38.08-1.jpg?resize=1200%2C346&quality=80&ssl=1"  alt=""  class="wp-image-153241" ><figcaption class="wp-element-caption">And the documentation of the connections is also included in the interface.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><strong>DP: What would that look like for an artist?</strong><br />Mihai Satmarean: We can now offer better workstations – with C&R.me (The Configuration and Resources Management Engine), artists can set up their own workstations using the project management tool. It’s very different – instead of IT provisioning, configuring and handing over workstations, we now have a ‘black box API endpoint’ with coded provisioning instructions. Through some pipeline magic, our project management software, FTrack, can request workstations and return the details to the user. Teradici’s PCoIP protocol is used to connect to the remote (cloud) workstation’s screen(s). And for the artist, this means that they get an interface and “activate” a workstation and the rest is done automatically.</p>



<p class="wp-block-paragraph">Artists have a local client for the protocol and can securely connect to the workstation’s PCoIP agent. The main difference for the artist is that they connect to the workstation via a remote desktop client such as Teradici rather than directly. This means that the quality of the connection – latency and bandwidth – has a big impact on the artist’s overall experience. This is of course easier to manage in an office environment than when the artist is working from home, but overall it gives the artist great freedom, as they can access the same infrastructure both in the office and at home. In addition, the artist has full control over their machine(s) and can create, delete, wake up, shut down/restart or even switch between different machines depending on what they need them for.</p>
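<p class="wp-block-paragraph">The workstation lifecycle described above – create, stop/start, delete, switch – can be sketched as a minimal in-memory model of such a “black box” provisioning endpoint. All names, IDs, states and machine types below are invented for illustration; C&R.me’s actual API is not public.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Workstation:
    artist: str
    instance_type: str
    state: str = "running"   # simplified lifecycle: running | stopped

@dataclass
class ProvisioningEndpoint:
    """Toy stand-in for the 'black box API endpoint' the pipeline calls."""
    machines: dict = field(default_factory=dict)
    _next_id: int = 0

    def create(self, artist: str, instance_type: str = "gpu.large") -> str:
        # The pipeline tool (e.g. a project-management integration) would call
        # this and hand the returned ID to the artist's remote-desktop client.
        self._next_id += 1
        mid = f"ws-{self._next_id:04d}"
        self.machines[mid] = Workstation(artist, instance_type)
        return mid

    def stop(self, mid: str) -> None:
        self.machines[mid].state = "stopped"

    def start(self, mid: str) -> None:
        self.machines[mid].state = "running"

    def delete(self, mid: str) -> None:
        del self.machines[mid]

api = ProvisioningEndpoint()
mid = api.create("artist01", "gpu.xlarge")
print(mid, api.machines[mid].state)   # ws-0001 running
api.stop(mid)
print(api.machines[mid].state)        # stopped
```

<p class="wp-block-paragraph">In a real deployment the create/stop/start/delete calls would be backed by a cloud provider’s compute API rather than a dictionary, but the interface the artist sees is this small.</p>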



<p class="wp-block-paragraph"><strong>DP: Has this actually been implemented or is it still in the planning phase?</strong><br />Mihai Satmarean: No planning – we’re in the middle of it. It looks like this – I’ll take the last project with the Spanish Orca Studios. We started as an experiment, designed our API and built some infrastructure – amazingly, we soon had a proof of concept and launched a project. Then another and another, and we realised that we had a working design because the scalability of artists is infinite. I think we can now take on projects of any size. </p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="675" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2020-11-11-la-09.53.08-2-1.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-153243" ><figcaption class="wp-element-caption">Behind the script: The call of the machine</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">There’s still a lot to do in terms of refactoring pieces of code, concepts, etc., but I think we now have infrastructure as a service. Of course, that’s not all: we still need pipelines, artists, supervisors and other people. But the way it works today, it’s mainly DevOps and Infrastructure as Code – and with that we effectively offer Infrastructure as a Service.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="709" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Screenshot-2023-05-06-at-12.21.54-1.jpg?resize=1200%2C709&quality=80&ssl=1"  alt=""  class="wp-image-153236" ><figcaption class="wp-element-caption">… and here in production. Read more? skylitestudios.com</figcaption></figure>



<p class="wp-block-paragraph"></p>






<p class="wp-block-paragraph"><strong>DP: …and someone has already done that?</strong><br />Mihai Satmarean: Yes, of course. We had three projects where we did exactly that – in individual implementations, of course – with Orca Studios, Nexodus and Skylite Films. For example, on “The Woman King” (<a href="http://is.gd/woman_king">is.gd/woman_king</a>), Nexodus used our toolset to integrate more than 30 artists into their pipeline, all of whom were distributed – there was no central office. Another project, “Santo” by Orca Studios (<a href="http://is.gd/santo_orca">is.gd/santo_orca</a>), was realised in Spain with artists from all over Europe working on a TV series about a mysterious drug dealer. We also did other things with Orca, like “The Good Nurse”.</p>



<p class="wp-block-paragraph"><strong>DP: What were the pipelines in these projects?</strong><br />Mihai Satmarean: The tools were common programs like Nuke, Maya and so on. So far we have only had cloud-native studios, so not much other infrastructure. But we are currently working on a project where we connect to a small server for licences and other services.</p>



<p class="wp-block-paragraph"><strong>DP: How long did the set-up time take for each project?</strong><br />Mihai Satmarean: Depending on the complexity of the studio and the scope of their work, our PoC project took about half a year until we had something concrete. Currently, we could get a medium-sized studio on board as a proof of concept in less than a month.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="680" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2022-07-21-la-12.10.00-1.jpg?resize=1200%2C680&quality=80&ssl=1"  alt=""  class="wp-image-153227" ><figcaption class="wp-element-caption">You don’t have to read any code: Here’s a screenshot from the AWS administration at Skylite Studios.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><strong>DP: Can’t people just rent a cloud instance themselves?</strong><br />Mihai Satmarean: The cloud providers are not made for “consumers” (or artists), but mainly for application and infrastructure developers. Theoretically, you can do everything yourself, but as with any DIY project, it depends on your skill level. With some of the tools we’ve developed, it’s much cheaper and quicker to involve us – or any other cloud IT expert – than to retrain yourself or your artists. We now have a combined expertise of over 100 years in IT and VFX. In other words, longer than IT has been around.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="682" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2022-08-16-la-06.53.34-1.jpg?resize=1200%2C682&quality=80&ssl=1"  alt=""  class="wp-image-153237" ><figcaption class="wp-element-caption">And the documentation of the connections is also included in the interface.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><strong>DP: If it’s a product – even a service – does it have a price?</strong><br />Mihai Satmarean: We started it about two years ago, with four people working on it at least one day a week, until it became a full-time job from the first projects. As with any cloud project, the infrastructure costs are quite low at the beginning, as practically only the development costs need to be covered. Cloud providers also offer cost savings or “credits” for cloud migrations, etc. With this type of project, the costs increase with the size of the project (artists, farm, data, etc.). However, since every project is different, I can’t give a specific price for this – it depends on what you’re looking for.</p>



<p class="wp-block-paragraph"><strong>DP: Assuming we bring C&R.me into a project – how long does it take before I can start working?</strong><br />Mihai Satmarean: In our previous projects, we had a big migration that took about two months – today, a fresh start of a “C&R.me-based studio” would take a couple of weeks, not more. Certain things play a role: the current infrastructure in the studio, the complexity of the pipeline, and where storage, libraries and licences are managed.</p>



<p class="wp-block-paragraph"></p>






<p class="wp-block-paragraph"><strong>DP: Let’s talk about licences: What can I use besides Blender? Where will the licence servers be located and how will they communicate with the infrastructure?</strong><br />Mihai Satmarean: Licences are still difficult to manage – just as in traditional studios. AWS has done some good things there by making tools available already licensed – which means Deadline is there. We run Mari, Substance, Nuke, Maya, 3DEqualizer, Houdini etc. more or less without any problems. But beware: some tools require a physical machine as a licence server, and apart from that we’ve only had problems with some Adobe products (if I remember correctly). From a bird’s-eye view, though, it’s still difficult – there are regional restrictions, so the easiest route for the studio is to buy the licences – this works for rented licences too. Medium-sized and larger studios usually already have these.</p>



<p class="wp-block-paragraph">Some tools try to find the physical machine or its hardware – and that obviously doesn’t work. I don’t want to say what the problems are because the engineers have promised to fix it! But these are “man-made limitations”, not fundamental problems.</p>



<p class="wp-block-paragraph">If you are a smaller studio, you may need to speak to your preferred reseller, for example Dve As (<a href="http://www.dveas.de">www.dveas.de</a>), who can help you get things rolling, because with most professional software suites, simply clicking the ‘buy’ button in the webshop is not enough. Another point – given the geographical restrictions – is to run certain parts via VPNs, and some tools are even “computer-bound” – in which case the artist has to do some administrative work if a change of computer is required.</p>



<p class="wp-block-paragraph"></p>






<p class="wp-block-paragraph"><strong>DP: What are other restrictions?</strong><br />Mihai Satmarean: Not many – we have frameworks to connect to other specialised cloud services. We even already have one, but we can’t talk about it publicly (yet). Over the last year we have met various providers of VFX-specific cloud tools – it depends on the type of tool/service and its maturity, but in most cases there is not much difference from an on-premise installation. If the service requires specific hardware that the cloud provider cannot offer, then of course it won’t work.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="871" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/image-1.jpg?resize=1200%2C871&quality=80&ssl=1"  alt=""  class="wp-image-153232" ><figcaption class="wp-element-caption">The Teradici system at a glance.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><strong>DP: What is needed in terms of preparation?</strong><br />Mihai Satmarean: We currently rely on the Teradici PCoIP protocol – Teradici was recently acquired by HP and the product renamed “HP Anyware” – for which there are thin clients and zero clients. So far, out of 100 artists, only two have complained about slowness, and even then it wasn’t the bandwidth but the local LAN. For the artist it takes a minute to “switch on the machine”, and then he or she can connect to any prepared, already running or “on demand” machine.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="543" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2020-11-19-la-09.11.26-1.jpg?resize=1200%2C543&quality=80&ssl=1"  alt=""  class="wp-image-153224" ><figcaption class="wp-element-caption">Virtual machines are directly pre-configured ..</figcaption></figure>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="434" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2021-08-10-la-11.09.45-1.jpg?resize=1200%2C434&quality=80&ssl=1"  alt=""  class="wp-image-153226" ><figcaption class="wp-element-caption">… including the necessary instances ..</figcaption></figure>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  fetchpriority="high"  decoding="async"  width="1200"  height="654"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/WhatsApp-Image-2023-04-10-at-09.25.53-1.jpg?resize=1200%2C654&quality=80&ssl=1"  alt=""  class="wp-image-153228" ><figcaption class="wp-element-caption">…and of course the necessary tools and extensions.</figcaption></figure>



<p class="wp-block-paragraph"></p>






<p class="wp-block-paragraph"><strong>DP: Let’s be honest: cloud infrastructure depends on the ISP – does that even work in Germany?</strong><br />Mihai Satmarean: The artist needs a decent connection, but nothing excessive and you shouldn’t be too far away from the infrastructure. Working on a thin client with a 14.4 modem from a suburb of Passau on a server in Calcutta is not so convenient – but would theoretically even be feasible (laughs).</p>



<p class="wp-block-paragraph">Working on remote machines takes some time to get used to (some can handle it quickly and some just don’t like the idea and hold back). Also, some tasks require a better setup than others (modelling vs animation vs texturing etc).</p>



<p class="wp-block-paragraph">Actual bandwidth also depends on screen size, for example. To give you a figure: two 4K screens require more bandwidth than one, and the “lowest” speed for working efficiently with two screens would be about a 1 Mbit connection. That seems low, but remember: Teradici compresses the images and, for the most part, only sends “updates”. So you can get away with a slower connection. But faster is always better!</p>



<p class="wp-block-paragraph"><strong>DP: But then you can’t quickly transfer the material from your computer to the cloud?</strong><br />Mihai Satmarean: Your compositors deliver new film material? (Laughs) Storage is exclusively in the cloud, where we maintain two copies in different regions to back up and optimise delivery. This is because it can take more than a week to copy everything to a new region with VFX data volumes. With this approach, we could switch the entire production to another region almost instantly. And the artists hardly upload anything after the ingestion process.</p>
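<p class="wp-block-paragraph">The two-region approach Satmarean describes can be sketched as a write-through store that keeps a complete copy of every asset in each region, so “switching production to another region” is just repointing the primary rather than a week-long bulk copy. Region names and keys below are illustrative, not the actual setup.</p>

```python
class TwoRegionStore:
    """Toy model of maintaining two regional copies of every asset."""

    def __init__(self, primary: str, replica: str):
        self.regions = {primary: {}, replica: {}}
        self.primary = primary

    def put(self, key: str, data: bytes) -> None:
        # Write-through: every ingest lands in both regions.
        for store in self.regions.values():
            store[key] = data

    def get(self, key: str) -> bytes:
        return self.regions[self.primary][key]

    def failover(self) -> None:
        # Repoint production to the other region -- the replica is already
        # complete, so no data has to be copied at switch time.
        (other,) = [r for r in self.regions if r != self.primary]
        self.primary = other

store = TwoRegionStore("eu-west-1", "us-east-1")
store.put("shots/sq010/plate.exr", b"...")
store.failover()
print(store.get("shots/sq010/plate.exr") == b"...")   # True
```

<p class="wp-block-paragraph">The trade-off is doubled storage cost and write traffic in exchange for near-instant regional mobility – which matters with VFX-scale data volumes.</p>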



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph"><strong>DP: If this works, could you increase production more or less indefinitely?</strong><br />Mihai Satmarean: Yes, you can add new artists and studios in a matter of minutes – for example, if you want to work in Nuke, you can bring a compositing specialist into the project with just a few clicks – the infrastructure behind it scales “with you”. What you don’t need costs nothing: virtual machines are always available, and when they are not in use they are just configuration files on a computer. For “networked productions” – as is often the case with European films – I think this is a great opportunity to reduce costs and invest the money in the most important thing: the artists.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="674" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2022-11-09-la-14.16.59-1.jpg?resize=1200%2C674&quality=80&ssl=1"  alt=""  class="wp-image-153238" ><figcaption class="wp-element-caption">Configuring new machines is also comparatively easy – and even with sliders and cost displays. It’s rarely been this easy to build a workstation!</figcaption></figure>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="351" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2022-11-26-la-12.22.47-1.jpg?resize=1200%2C351&quality=80&ssl=1"  alt=""  class="wp-image-153230" ><figcaption class="wp-element-caption">During operation, status logs show a simple green/red indicator so you can see at a glance whether everything is working. (Of course, we could name the machines in a more appealing way.)</figcaption></figure>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1060" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/Captura-de-ecran-din-2023-03-07-la-09.49.29-1.jpg?resize=1200%2C1060&quality=80&ssl=1"  alt=""  class="wp-image-153231" ><figcaption class="wp-element-caption">Who sits where? Depending on the pipeline, different regions and data centres<br />can be addressed.</figcaption></figure>



<p class="wp-block-paragraph"></p>






<p class="wp-block-paragraph"><strong>DP: And what are the next things that will be developed for C&R.me?</strong><br />Mihai Satmarean: There are a few things on the agenda. Windows machines are difficult to automate (though this is getting better with recent updates), but most of the artist machines already run on Linux – CentOS, RockyLinux, Ubuntu etc. – which are easy to automate and set up. GPU compute time is still relatively expensive in the cloud – not compared to buying the hardware, of course, but still a factor.<br />Speaking of hardware prices: regions don’t always offer the same machine types and quantities, and pricing differs as well. All of this plays a role in choosing a region.</p>



<p class="wp-block-paragraph">Think about it: it’s easy to create your own “private cloud” if you already have a data centre – run your own hardware as a private cloud that you use like a public one, e.g. for your “core” artist machines that always need to be running, and expand to the public cloud providers only during peak production, with the ability to connect individual artists or departments at any time. I’ve seen this being useful for episodic productions and TV series. And so we are currently working on enabling multi-cloud pivoting, i.e. switching from one cloud provider to another.</p>
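<p class="wp-block-paragraph">The hybrid “baseline plus burst” pattern reduces to a simple placement rule: fill the fixed on-prem pool first and overflow the remainder to a public cloud. The capacities below are invented for illustration.</p>

```python
def place_jobs(n_jobs: int, onprem_capacity: int) -> dict:
    """Assign work to the private pool first; any overflow bursts to public cloud."""
    on_prem = min(n_jobs, onprem_capacity)
    return {"on_prem": on_prem, "public_cloud": n_jobs - on_prem}

# A quiet week fits entirely on the in-house machines...
print(place_jobs(30, 50))    # {'on_prem': 30, 'public_cloud': 0}
# ...while a production peak bursts the remainder into the public cloud.
print(place_jobs(120, 50))   # {'on_prem': 50, 'public_cloud': 70}
```

<p class="wp-block-paragraph">Real schedulers weigh far more than headcount – GPU type, region, licence availability – but the economics follow this shape: the baseline is amortised hardware, and only the peak is billed by the hour.</p>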



<p class="wp-block-paragraph">This is linked to “hybrid deployments” – you can use your on-premise hardware as a private cloud – and when production requires it, connect it to the public clouds such as AWS or Azure. We work with other cloud providers specialising in VFX – soon we’ll be able to name them (laughs). This way, you have your “baseline” and can ramp up or down as needed.</p>



<p class="wp-block-paragraph"><strong>DP: And if I want to have this toolset for my own production?</strong><br />Mihai Satmarean: Well, you would email me at <a href="mailto:contact@skylitetek.com">contact@skylitetek.com</a> – and we’ll find something interesting!</p>



<p>The post <a href="https://digitalproduction.com/2023/11/15/cr-me-post-production-in-the-cloud/">C&R.me – post-production in the cloud</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/ark2_trailer2_still_1-1.jpg?fit=3840%2C2160&#038;quality=80&#038;ssl=1" length="267865" type="image/jpeg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/ark2_trailer2_still_1-1.jpg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)</media:title>
	<media:description type="html"><![CDATA[The trailer for Ark 2 – made by Nexodus! (nexod.us/showcase/ark-2-trailer)]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/11/ark2_trailer2_still_1-1.jpg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">153195</post-id>	</item>
		<item>
		<title>What about Roundtrips? Premiere Pro and DaVinci Resolve</title>
		<link>https://digitalproduction.com/2023/09/09/what-about-roundtrips-premiere-pro-and-davinci-resolve/</link>
		
		<dc:creator><![CDATA[Uli Plank]]></dc:creator>
		<pubDate>Sat, 09 Sep 2023 14:26:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Blackmagic Design]]></category>
		<category><![CDATA[Adobe After Effects]]></category>
		<category><![CDATA[avid media composer]]></category>
		<category><![CDATA[conforming workflow]]></category>
		<category><![CDATA[DNxHD codec]]></category>
		<category><![CDATA[DP2304]]></category>
		<category><![CDATA[Final Cut Pro X]]></category>
		<category><![CDATA[media management]]></category>
		<category><![CDATA[non-linear editing]]></category>
		<category><![CDATA[offline editing]]></category>
		<category><![CDATA[Premiere Pro]]></category>
		<category><![CDATA[ProRes codec]]></category>
		<category><![CDATA[proxy editing]]></category>
		<category><![CDATA[roundtripping video]]></category>
		<category><![CDATA[video editing]]></category>
		<category><![CDATA[XML workflow]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=159321</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?fit=1200%2C277&quality=80&ssl=1" width="1200" height="277" title="Screenshot" alt="A screenshot of a video editing software interface displaying a timeline filled with clips and audio tracks. Various video thumbnails are visible on the left side, with blue indicators on the timeline representing different segments." /></div><div><p>Professionals experienced in teamwork will probably already be familiar with the following. But what about everyone else who, after initial successes, approaches projects that involve real money and real customers? Mistakes in the workflow and the need to fix them can push your real hourly wage well below the minimum promised by your government, not to mention the loss of your first important client.</p>
<p>The post <a href="https://digitalproduction.com/2023/09/09/what-about-roundtrips-premiere-pro-and-davinci-resolve/">What about Roundtrips? Premiere Pro and DaVinci Resolve</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?fit=1200%2C277&quality=80&ssl=1" width="1200" height="277" title="Screenshot" alt="A screenshot of a video editing software interface displaying a timeline filled with clips and audio tracks. Various video thumbnails are visible on the left side, with blue indicators on the timeline representing different segments." /></div><div>
<p class="wp-block-paragraph">Professionals experienced in teamwork will probably already be familiar with the following. But what about everyone else who, after initial successes, approaches projects that involve real money and real customers? Mistakes in the workflow and the need to fix them can push your real hourly wage well below the minimum promised by your government, not to mention the loss of your first important client.</p>



<p class="wp-block-paragraph">We will highlight the challenges of roundtripping (i.e. the reciprocal handover of editing steps) with Premiere Pro (PPro for short) and DaVinci Resolve (DR for short). But the solutions are similar for other pairings, although the DR manual contains considerably more information on exchanging with Avid and even Final Cut Pro X than for Premiere. The two manufacturers probably don’t like each other that much …</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="754" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Tabelle_Resolve.jpg?resize=1200%2C754&quality=80&ssl=1"  alt=""  class="wp-image-159400" ><figcaption class="wp-element-caption">This table in the manual of DaVinci Resolve may already indicate the difficulties.</figcaption></figure>



<h4 id="the-problem" class="wp-block-heading">The Problem</h4>



<p class="wp-block-paragraph">Unfortunately, some semi-informed people in the infinite expanses of the Internet keep claiming that such collaboration is quite simple. But what’s much worse is that some project participants pick up on such information and even believe it (kind of reminds you of Corona, doesn’t it?). If these people then have something to say in the project – perhaps even more than those who are better informed – the whole thing can become exhausting. There’s little point in trying to talk a team member who really wants to do the rough cut on their own device out of their usual programme if the production backs them.</p>



<p class="wp-block-paragraph">It’s better to warn this person about the risk of working with original files – possibly even on the chip from the camera – and having to look at dull clips in log (and grumble about the camera person). You’d better tell them that you’ll provide them with carefully backed up and prepared material for the edit, which will also run on a less powerful laptop or even a tablet (yes, LumaFusion can export an XML). The biggest remaining risk then is that the person on the other end uses effects that are lost in transit. In addition to the tests documented here, you’ll need your own, plus precise agreements on what is and isn’t allowed. Hard cuts always work, but even a fade can cause problems.</p>



<h4 id="hand-over-entire-projects" class="wp-block-heading">Hand over Entire Projects?</h4>



<p class="wp-block-paragraph">Forget it! First of all, we must unfortunately clarify what does not work: You can’t transfer projects between different NLEs. No! Nope! No way! Nada! Their project management and file formats are too different, as are the individual tools and their possibilities. You don’t even have to assume that the manufacturers are deliberately closing themselves off. These systems have literally grown over decades, and file management is ultimately the foundation of non-destructive video editing. Nobody changes this without absolutely compelling reasons.</p>



<p class="wp-block-paragraph">Technical development alone means that software is usually not even fully compatible with its own previous versions. Even if some programs offer to save in an older format, it is better for everyone involved in the project to use the same version. If it is necessary for reasons of hardware performance for someone to work with an older version, the same procedures and tips apply as below, just as if you were dealing with different software, unless compatibility is expressly guaranteed by the manufacturer. Even then, we recommend carrying out your own tests. We are not alone in this opinion: <a href="https://www.provideocoalition.com/how-to-answer-when-someone-ask-you-to-move-a-project-from-avid-to-premiere-pro-or-vice-versa/" data-type="link" data-id="https://www.provideocoalition.com/how-to-answer-when-someone-ask-you-to-move-a-project-from-avid-to-premiere-pro-or-vice-versa/">ProVideo Coalition</a>.</p>



<p class="wp-block-paragraph">In addition, each manufacturer has its own solutions for working with specialised software, i.e. for compositing, audio editing (DAW) or video compression. Adobe has After Effects, Audition and Media Encoder with Dynamic Linking – not always entirely painless, as you can find out in the relevant forums. Apple has Motion, Logic Pro and Compressor, but in essence this only involves the semi-automatic transfer of rendered files or files to be compressed. Logic Pro can read XML files from FCPX, but even here there are stumbling blocks, as you can read at <a href="https://www.macprovideo.com/article/audio-software/fcp-x-to-logic-pro-x-the-fine-print">macProvideo</a>. DaVinci Resolve goes the furthest, having integrated Fusion and Fairlight years ago. However, to this day the standalone version of Fusion is still more stable; the transfer can be done via a VFX Connect clip and then resembles the procedure with FCPX and Motion.</p>



<p class="wp-block-paragraph">I won’t go into sound editing here, but if you want to work with Logic Pro, you can read the article above and take the diversion via FCPX if necessary. The same applies to Audition, see <a href="https://larryjordan.com/articles/workflow-apple-final-cut-pro-x-to-adobe-audition-and-back/">Larry Jordan</a>‘s workflow here. The topic of working with Pro Tools would require a separate, extensive article by an audio specialist (no, it’s not uncomplicated either, even though this is repeatedly claimed). And if it doesn’t always work within the family, how is it supposed to work with the competition? As I said: Forget it! Anyone presenting serious tips and workflows on the internet will therefore talk about timelines and not entire projects.</p>



<h4 id="editing-material" class="wp-block-heading">Editing Material</h4>



<p class="wp-block-paragraph">For all non-conformists around here: Make friends with the term conform! This requires a clear division of labour, detailed agreements and careful testing of the workflow, but then you can definitely cut with one system and do the colour design with the next. As a rule, versions of the camera clips that are easier to save and edit are created for this purpose – the offline clips. The resulting edited versions have to be precisely linked to the camera originals for grading. DR was originally a pure colour grading system. It therefore offers a wide range of options for combining edited versions from other systems with the originals via re-conform, which the manual describes in detail in Chapter 56 “Conforming and Relinking Clips”.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1"  decoding="async"  width="399"  height="410"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Proxy_Generation_in_DR.png?resize=399%2C410&quality=72&ssl=1"  alt=""  class="wp-image-159549" ><figcaption class="wp-element-caption">DaVinci Resolve generates proxies in an easy format with burn-ins.</figcaption></figure>
</div>


<p class="wp-block-paragraph">The most common one is probably the collaboration between Premiere for editing and Resolve for colour grading, simply because of their widespread acceptance, as both programs are available for PC and Mac. We don’t want to start a religious war here about who is better (or has the fairer business model). The fact is that some people have been editing on PPro for years, but DR has the more comprehensive grading. Even though both programs largely (but by no means completely) understand the same video formats, you only really need the full quality of elaborate RAW or high-resolution log files when grading. An online/offline workflow is particularly helpful if you are not sitting next to each other with a connection to shared storage.</p>



<h4 id="creating-offline-material-in-resolve-proxies" class="wp-block-heading">Creating Offline Material in Resolve (Proxies)</h4>



<p class="wp-block-paragraph">It is therefore advisable to use DR to capture and back up the originals. The Clone Tool is available for the backup if you don’t want to use one of the specialised programs such as <a href="https://www.imagineproducts.com/product/shotput-pro">ShotPut Pro</a>. After backing up to several physically separate media, import the clips into a timeline with the appropriate frame rate.</p>



<p class="wp-block-paragraph">This is where the first pitfall occurs: even though Resolve can handle a timeline with mixed frame rates (fps = frames per second), this is not recommended for roundtripping. Since such clips with deviating fps are often only intended for slow motion, the originals should be set to the target speed in the Clip Attributes. This results in the best image quality without any additional computing effort. In principle, Resolve can also handle different fps rates when importing from PPro, but this only makes sense if the final render is also carried out in Resolve.</p>



<p class="wp-block-paragraph">Other changes such as speed ramping can be problematic (more on this later). If, on the other hand, clips are to retain their different fps or the other party wants to work with more complex fades and effects, you should point out possible problems and allow for additional work. It is better to limit yourself to simple cuts and fades for this approach during offline editing in PPro and to do more complex work together on a well-equipped Resolve workstation. Then you just have to make sure that your originals all contain a correct timecode (TC for short). They shouldn’t all start with 0:00:00:00, which usually indicates unreadable TC or none at all, and should have unique names.</p>
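<p class="wp-block-paragraph">That sanity check is easy to script before handing material over. A minimal sketch, assuming a list of clip name/start-timecode pairs; the function and its warning texts are our own, not part of any NLE:</p>

```python
# Hypothetical pre-flight check for offline material: flags duplicate clip
# names and clips whose timecode starts at zero (often a sign of missing
# or unreadable TC). Input: a list of (clip_name, start_timecode) pairs.
from collections import Counter

def check_clips(clips):
    warnings = []
    counts = Counter(name for name, _ in clips)
    for name, n in counts.items():
        if n > 1:
            warnings.append(f"duplicate clip name: {name}")
    for name, tc in clips:
        if tc == "00:00:00:00":
            warnings.append(f"{name}: timecode starts at zero")
    return warnings
```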



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="276" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?resize=1200%2C276&quality=80&ssl=1"  alt=""  class="wp-image-159404" ><figcaption class="wp-element-caption">It’s absolutely doable to move a timeline of over 4 hours across by XML, with 3 cameras and several audio tracks.</figcaption></figure>



<p class="wp-block-paragraph">Separately recorded sound should already be prepared (and checked) in Resolve, synced by TC, with Waveform or, if necessary, by hand. The best way to do this is to create a timeline with the maximum number of audio tracks that any of your clips require. Then put all the clips, e.g. for a working day, into a timeline as a day roll and the originals into an appropriately named bin. If the recordings are in a log format or the camera operator used a special LUT when recording, you should convert them to Rec. 709 or include the LUT so that there are no complaints regarding the picture from uninformed people. DR offers extensive options for the naming and bin arrangement of such clips with its “Smart Bins”.</p>
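<p class="wp-block-paragraph">Syncing by TC boils down to simple frame arithmetic. A minimal sketch, assuming non-drop-frame timecode and an integer project frame rate (the function names are ours):</p>

```python
# Minimal sketch of TC-based sync: how many frames must the separately
# recorded audio be shifted to line up with the picture? Assumes
# non-drop-frame timecode and an integer frame rate.
def tc_to_frames(tc, fps):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def sync_offset(video_tc, audio_tc, fps=25):
    # Positive result: the audio starts later than the picture.
    return tc_to_frames(audio_tc, fps) - tc_to_frames(video_tc, fps)
```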



<p class="wp-block-paragraph">It is best to include the TC and the file names in the image, so that you have an additional option to check for placement (unless the recipients object). The automatic creation of proxies in DR is now capable of such “burn-in” for the tasks described here, by activating “Render timeline effects” while rendering with the option “Individual clips”. If you have a very high shooting ratio, you can save lots of space by first sitting down with your partners to cull, creating a rough cut and consolidating it using Media Management. Be sure to link to the new clips and render the result as proxies first. Unfortunately, Media Management offers no option when transcoding to burn information into the image or to apply an initial colour correction. You therefore have to output the timeline as individual clips via the Deliver page.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="259" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Clips_not_found.jpg?resize=1200%2C259&quality=80&ssl=1"  alt=""  class="wp-image-159381" ><figcaption class="wp-element-caption">Sometimes clips may not be found in the expected location…</figcaption></figure>



<p class="wp-block-paragraph">The sound always comes across as the original without corrections, but if desired in several tracks or even in separate files. Linear PCM in 48 kHz is recommended as the format; also with a higher bit depth or sample rate if some sources allow this. For the image, use an I-frame codec that does not place too much load on the receiver’s computer and is readable in any case. MOV (and not MP4) is recommended as the container because there are no problems with the TC track. ProRes is suitable as a codec for all systems, but CineForm or DNxHD/HR is just as good for PCs. MXF in OP1A as a container is also okay for Premiere (but as MXF OP-Atom only for Avid). Surprisingly, PPro can write ProRes into MXF, while DR doesn’t offer that combination.</p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="607"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Conform_Options.jpg?resize=1200%2C607&quality=80&ssl=1"  alt=""  class="wp-image-159383" ><figcaption class="wp-element-caption">…but as a long-established colour grading software, it offers extensive options to conform clips.</figcaption></figure>



<p class="wp-block-paragraph">These codecs are easily scalable in terms of quality and file size, but are of course larger than H.264/265. Nevertheless, you should avoid such GOP codecs, because depending on the hardware performance, they may run worse on the target computer. Even when producing in 4K or UHD, offline editing can be done in HD if the originals have been checked for image sharpness beforehand. As DR allows two installations, you can run the transcoding on a weaker second computer, as it blocks the workstation for a little longer depending on the hardware. DR will point out any missing clips. If your proxies carry the same name and are in a subfolder named “Proxy” under the one with the full-res originals, DR can switch between them automatically.</p>
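<p class="wp-block-paragraph">The folder convention Resolve expects for that automatic switch can be expressed in a few lines. A sketch; the helper name is ours, and only the resulting layout matters to Resolve:</p>

```python
# The proxy naming convention described above: same clip name, in a
# "Proxy" subfolder next to the full-res originals. The container
# extension changes because proxies are rendered to e.g. ProRes in MOV.
from pathlib import Path

def proxy_path(original, container=".mov"):
    p = Path(original)
    return p.parent / "Proxy" / (p.stem + container)
```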



<h4 id="attention-pitfalls" class="wp-block-heading">Attention, Pitfalls!</h4>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1200"  height="594"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/No_Match.jpg?resize=1200%2C594&quality=80&ssl=1"  alt=""  class="wp-image-159390" ><figcaption class="wp-element-caption">Something like this typically happens with missing timecode.</figcaption></figure>



<p class="wp-block-paragraph">Some semi-professional cameras pack a non-standard TC into an MP4 header, which is then not read everywhere. It often happens that TC in MP4 is recognised by PPro, but not in FCPX, for example. If you determine with MediaInfo that there is a TC, you can re-wrap the material to MOV, then the TC should be fully readable. This can be done quickly and losslessly with a tool such as Shutter Encoder (donationware), even in batch processing.</p>
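<p class="wp-block-paragraph">Shutter Encoder drives ffmpeg under the hood, and the same lossless rewrap can be sketched as a plain ffmpeg invocation (stream copy, no re-encode). We only build the argument list here, so no ffmpeg installation is assumed; the function name is ours:</p>

```python
# Sketch of a lossless MP4-to-MOV rewrap: all streams are mapped and
# copied, nothing is re-encoded, so a valid embedded TC survives in a
# container that is read everywhere. Pass the list to subprocess.run()
# if ffmpeg is installed.
def rewrap_cmd(src, dst):
    return ["ffmpeg", "-i", src, "-map", "0", "-c", "copy", dst]
```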



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1178"  height="768"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Shutter_Encoder.png?resize=1178%2C768&quality=72&ssl=1"  alt=""  class="wp-image-159531" ></figure>



<p class="wp-block-paragraph">If there is no TC at all, you can use QTchange from <a href="https://www.videotoolshed.com/handcrafted-timecode-tools/qtchange/">Videotoolshed</a> to add one to the MOV based on the creation time. This is not necessarily accurate enough for sound synchronisation, but at least it is close. Alternatively, you can find a few Python scripts in the Resolve forum. It is also quite bad when amateur cameras keep assigning the same names after changing the storage medium. This can often be fixed in one of the camera menus, but if it has already happened, it is better to also use the “Reelname” or “Reelnumber” field in the metadata (in PPro and FCPX this is called “Tape” or “Tape name”).</p>
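<p class="wp-block-paragraph">What such a tool does with the creation time can be sketched as follows; this is our own toy function, assuming non-drop-frame timecode and an integer frame rate, not QTchange’s actual implementation:</p>

```python
# Derive a start timecode from a clip's creation time of day, the way a
# QTchange-style tool would. Non-drop-frame, integer frame rate assumed.
from datetime import datetime

def tc_from_creation(dt, fps=25):
    # Map sub-second precision to a frame count, clamped below fps.
    frames = min(round(dt.microsecond / 1_000_000 * fps), fps - 1)
    return f"{dt.hour:02d}:{dt.minute:02d}:{dt.second:02d}:{frames:02d}"
```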



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1159"  height="280"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/QTChange-1.png?resize=1159%2C280&quality=72&ssl=1"  alt=""  class="wp-image-159528" ><figcaption class="wp-element-caption">QTchange can add missing timecode based on creation time and rename clips if needed.</figcaption></figure>



<p class="wp-block-paragraph">This field should always be used to ensure the link back to the originals. In the professional sector, it is usually already filled in by the cameras, but they do not generate duplicate clip names anyway. If this information is missing, it can be added in QtChange too. However, it is not read by every programme. If it isn’t, as in Resolve, you must place the clips of each memory chip in a separate bin and use this specifically for conforming (Conform from Bin). Now you can pass the rendered material through for editing. But how do the editing decisions get back to Resolve from PPro, FCPX or other editing programmes?</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="821" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Burn_In.jpg?resize=1200%2C821&quality=80&ssl=1"  alt=""  class="wp-image-159379" ><figcaption class="wp-element-caption">You should burn in at least the clip’s name and the timecode for offline clips.</figcaption></figure>



<h4 id="via-edl" class="wp-block-heading">Via EDL</h4>



<p class="wp-block-paragraph">The oldest method is an EDL (Edit Decision List), which supports a maximum of 2 video tracks and 4 mono audio tracks. The most common format is CMX3600, named after an editing control system from the 1980s, when people still worked with three mechanical tape machines. It is understood practically everywhere, but unfortunately consists only of the TC information for hard cuts and any crossfades, plus the clip names. Some other transition types are named in the EDL, but they arrive as plain dissolves in DR. Any effects are ignored.</p>
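<p class="wp-block-paragraph">A CMX3600 event line is regular enough to pull apart in a few lines of Python. A toy sketch, not a full parser; in real EDLs the clip names travel in separate comment lines (e.g. "* FROM CLIP NAME:"), which this ignores:</p>

```python
# Toy parser for one CMX3600 event line: event number, reel, track,
# transition, then source in/out and record in/out timecodes. Taking the
# four timecodes from the end of the line also copes with dissolves,
# where a duration follows the "D" transition code.
def parse_edl_event(line):
    parts = line.split()
    return {
        "event": int(parts[0]),
        "reel": parts[1],
        "track": parts[2],
        "transition": parts[3],
        "src_in": parts[-4], "src_out": parts[-3],
        "rec_in": parts[-2], "rec_out": parts[-1],
    }
```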






<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="986" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/EDL_w_Diss.jpg?resize=1200%2C986&quality=80&ssl=1"  alt=""  class="wp-image-159386" ><figcaption class="wp-element-caption">A traditional EDL won’t transport more than 2 video tracks, 4 audio tracks and cross-dissolves.</figcaption></figure>



<p class="wp-block-paragraph">Anyone who turns up their nose at such a stone-age format should realise that it makes for a very reliable and uncomplicated transfer method. However, this requires a clear division of labour and discipline (how awful). It is also not ideal for RAW formats and not perfect for roundtripping, but rather a one-way street from editing to grading if the material or parts of it have not been prepared in Resolve (see above). Instead, the editors cut the originals in their favourite programme and render a high-quality version in one piece at the end.</p>



<p class="wp-block-paragraph">Advantages: All the options that the editing programme is capable of, including resizing, speed ramps, frame rate adjustments, etc., are baked into a single film file at this point. Titles, motion graphics or VFX can also come from a programme such as After Effects. This rules out any misunderstandings on the side of the target software. However, you should switch off any grading attempts beforehand, as the transfer to Resolve may take place in a slightly reduced colour space (and probably not without good reason).</p>



<p class="wp-block-paragraph">For sources with high compression and a maximum colour depth of 10 bits, such as from stills cameras used for filming or from mobile phones, DNxHR HQX 10 bit or ProRes 422 HQ is completely sufficient for transfer. For better sources, DNxHR 444 12 bit or ProRes 4444 (for pixel peepers with a lot of storage space, also in XQ) is used. This film should, however, be finally approved as far as editing and effects are concerned, because now only the grading is done. For this purpose, an EDL is also output for the timeline, which is imported into DR as a pre-conformed EDL in addition to the film. In the last step, you have to point DR to the folder containing the clip rendered from PPro. DR then splits everything into individual cuts again, allowing colour grading for each clip.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="232" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Pre_conformed_EDL.jpg?resize=1200%2C232&quality=80&ssl=1"  alt=""  class="wp-image-159394" ><figcaption class="wp-element-caption">This EDL is used to separate the cuts for grading.</figcaption></figure>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="278" width="1200"  decoding="async"  data-id="159385"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/EDL_Methode.jpg?resize=1200%2C278&quality=80&ssl=1"  alt=""  class="wp-image-159385" ><figcaption class="wp-element-caption">Fades get marked, but there are no ‘handles’.</figcaption></figure>
</figure>



<p class="wp-block-paragraph">Disadvantages: You cannot colour grade picture-in-picture effects or superimposed graphics separately in DR. There are two options for this: You can either adjust the colours in the editing program using its tools or, if there are only a few elements that require time-consuming correction, you can place them individually at the end of the timeline (preferably separated by a short black) and grade them in Resolve. The result can then be inserted back into the original programme at the desired position. The same applies to dissolves, because the two clips are not accessible separately for the fade period. DR recognises a dissolve based on the EDL, but this contains clips that have already been mixed. If necessary, a sliding correction with keyframes can serve as a workaround here. This technical limitation is hardly noticeable with short dissolves.</p>



<p class="wp-block-paragraph">In addition, the metadata of the camera originals is not available in DR, so you have to take care of the colour management yourself. This method is therefore not suitable if you have sources in RAW or changing log formats. With semi-professional cameras, this hardly matters because information about their image profiles is usually missing in DR anyway. Careful note-taking and good coordination are therefore required. However, the EDL itself provides some clues, as at least the names of the originals appear there, which usually allow the camera to be identified. Although DR also offers two quite powerful methods for automatically recognising cuts, these are not 100% reliable for fast action or some effect transitions and are more recommended for archive material for which an EDL no longer exists.</p>



<p class="wp-block-paragraph">The big advantage: There are no misunderstandings between the programmes, and formats that DR does not like on PC (such as DV), or generally incompatible ones like MPEG-2 or ProRes RAW, can be used. Any changes to the cut must be made in DR. However, these are limited to hard cuts with a change of position or shortening, as the original files are not accessible to DR in this way, so clips cannot be extended and no new transitions can be added. For titles and graphics, however, you can pass a duplicated timeline without these elements to DR, render the result from the grading and add the rest in the original editing software. Finally, you can adjust their colours and contrast directly without having to use DR.</p>



<p class="wp-block-paragraph">Darren Mostyn has the <a href="https://youtu.be/MQJ9W2bOXmA">best tutorial</a> on this, but even he is somewhat superficial and does not mention many problems. Ultimately, we are dealing with destructive editing here, whereas any modern editing system (NLE) works non-destructively in that you can access the unaltered (and hopefully saved) original files at any time. Such restrictions are certainly not to the taste of anyone who is used to the endless tweaking of digital media right up to the last minute. If all the material was initially prepared by DR and delivered to PPro, access to the original files also works via regular import of an EDL, but XML can do much more.</p>



<h4 id="via-xml" class="wp-block-heading">Via XML</h4>



<p class="wp-block-paragraph">The abbreviation stands for Extensible Markup Language, so it is a universal language that is not only suitable for editing information. As an open standard, it is used for a wide variety of data descriptions and is (somewhat) readable for both humans and computers. Apple already used it in Final Cut Pro 7 for exchanging editing information. This oldest format has developed into a quasi-standard and is the only one that PPro reads or writes when Final Cut Pro XML is selected for export. These files transport much more information than an EDL, but are still compact enough to be sent by e-mail or cloud service. Unfortunately, you should still not expect the target programme to understand all of the source&#8217;s options. We have therefore thoroughly tested how well PPro and DR understand each other via XML.</p>
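<p class="wp-block-paragraph">To get a feel for what travels in such a file, here is a heavily simplified fragment in the spirit of the FCP 7 &#8220;xmeml&#8221; format, read with Python&#8217;s standard library. Real exports from Premiere Pro contain far more elements; the sequence name, clip name and path below are invented for illustration.</p>

```python
import xml.etree.ElementTree as ET

# A heavily simplified fragment in the spirit of FCP 7's "xmeml" format.
# Real exports carry far more data (transitions, filters, keyframes, ...).
FRAGMENT = """<?xml version="1.0"?>
<xmeml version="5">
  <sequence>
    <name>DP_Roundtrip</name>
    <rate><timebase>25</timebase><ntsc>FALSE</ntsc></rate>
    <media><video><track>
      <clipitem>
        <name>Shot_010</name>
        <start>0</start><end>112</end>
        <in>250</in><out>362</out>
        <file><pathurl>file://localhost/media/Shot_010.mov</pathurl></file>
      </clipitem>
    </track></video></media>
  </sequence>
</xmeml>
"""

root = ET.fromstring(FRAGMENT)
for clip in root.iter("clipitem"):
    name = clip.findtext("name")
    start, end = int(clip.findtext("start")), int(clip.findtext("end"))
    # Unlike an EDL, the XML also points at the media file itself:
    path = clip.find("file").findtext("pathurl")
    print(f"{name}: frames {start}-{end} on the timeline, from {path}")
```

<p class="wp-block-paragraph">The point of the exercise: an EDL only lists timecodes, while the XML also names tracks, source files, transforms and keyframes, which is why it can carry so much more of the edit across.</p>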


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1"  decoding="async"  width="542"  height="329"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Export_FCP_XML.png?resize=542%2C329&quality=72&ssl=1"  alt=""  class="wp-image-159563" ><figcaption class="wp-element-caption">This is the only XML version Premiere Pro can export.</figcaption></figure>
</div>


<h4 id="premiere-to-resolve" class="wp-block-heading">Premiere to Resolve</h4>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="893"  height="423"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Translation_Report.png?resize=893%2C423&quality=72&ssl=1"  alt=""  class="wp-image-159565" ><figcaption class="wp-element-caption">Unfortunately, the report by Premiere Pro doesn’t really mention all the issues.</figcaption></figure>



<p class="wp-block-paragraph">In Premiere, we simply select the desired sequence and go to File &gt; Export &gt; Final Cut Pro XML. We can still assign a name, but DR ignores it and uses the name stored inside the XML file instead. PPro kindly creates a text file called FCP translation results, but this is largely worthless: it lists far fewer issues than actually occur. A tip for importing: First load all sources into the media page, sorted neatly into bins if you like. Then load the XML from Premiere via Import &gt; Timeline, but switch off the automatic import of the media. You can also change the name of the timeline here. If only it were that easy in Premiere (see below)!</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1038" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Import_Timeline_DR.jpg?resize=1200%2C1038&quality=80&ssl=1"  alt=""  class="wp-image-159388" ><figcaption class="wp-element-caption">If your project has its original clips and proxies already arranged in DaVinci Resolve, you don’t need to import again.</figcaption></figure>



<p class="wp-block-paragraph">At first glance, we are pleased that all video and audio tracks of a timeline of over 4 hours arrive without errors. Even timeline markers are included. Although they all turn blue, their position and duration are correct, and comprehensive comment text arrives as well – a very welcome tool for team coordination (called notes in DR). Clip markers, on the other hand, don&#8217;t come across. Another communication aid is switching off video tracks (this does not work for audio tracks). That&#8217;s helpful, because you can simply place Adobe-specific items such as titles or linked After Effects clips on a deactivated track before export.</p>



<p class="wp-block-paragraph">As for fades, there is of course the cross-dissolve (sadly only the &#8220;video&#8221; version; &#8220;film&#8221; may look nicer), a fade to white or black, and even a wipe from PPro comes across as an edge wipe. Caution: Temporally asymmetrical transitions get centred on the cut. All other transitions become crossfades. This is stated in the aforementioned protocol, but does not fully correspond to the information in the DR manual. Regarding filters, even standards such as a Gaussian blur or unsharp masking do not come across, even though both NLEs offer them. Perspective effects are also lost, although both programmes are capable of those too.</p>



<p class="wp-block-paragraph">Level changes on the audio tracks are ignored, but video opacity values are transferred, even with keyframes. Only very slight deviations can be detected in all working keyframes, but do not tinker with the interpolation method! Even speed changes arrive including keyframes, but don&#8217;t get too excited: linear interpolation is what arrives in DR, as with the other animated values. At best, you can use it to suggest what you want; the aesthetic fine-tuning has to be done during final production. Still images are lost, so you should turn them into video clips at the source. Finally, stereo audio tracks get turned into mono for no apparent reason; you&#8217;ll need to reconfigure them in the Clip Attributes.</p>



<p class="wp-block-paragraph">A smaller pitfall is the TC of the timeline: it normally starts at zero in PPro and at one hour in DR. This can easily be changed on import, in the project or for the single timeline. Scaling and position, even rotation, come across, including animation. There is another issue to take care of: PPro has two methods to scale a clip to the timeline resolution. If the clip is set to &#8220;Scale to frame size&#8221;, that is a virtual scaling which is not written into the XML; accordingly, the clip will arrive in DR at its original size. &#8220;Fit to frame&#8221; looks just the same in PPro, but the clip will be scaled in DR too.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1"  decoding="async"  width="729"  height="140"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/09/Fit_to_frame.png?resize=729%2C140&quality=72&ssl=1"  alt=""  class="wp-image-159728" ><figcaption class="wp-element-caption">“Fit to frame” will scale your clips to the timeline resolution. This setting will be respected in DaVinci Resolve.</figcaption></figure>
</div>


<p class="wp-block-paragraph">You can set this behaviour in the preferences before importing media to your PPro project to make it the default. But please don&#8217;t change the anchor point, as that would ruin everything. Behind this is a fundamental difference: PPro works in absolute pixel values, while DR is (generally) resolution independent and works in percentages. Unfortunately, only linear interpolation is used here too, so the result does not always look good. Scaling of different source sizes can also be set in the DR project, or fixed in the Inspector. Also pay attention to the resolution of the timeline, or it will be switched to the format of the largest video source.</p>
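<p class="wp-block-paragraph">The arithmetic behind the two scaling modes is simple enough to sketch. The snippet below (plain maths, no NLE API involved) computes the factor that &#8220;Fit to frame&#8221; bakes into the XML for a UHD clip on an HD timeline; with &#8220;Scale to frame size&#8221; the XML carries no scaling at all, which is exactly why such a clip arrives in DR at its original size.</p>

```python
# "Fit to frame" applies a real, uniform scale that is written into the XML;
# "Scale to frame size" is only virtual and leaves the XML untouched.

def fit_to_frame_scale(clip_w, clip_h, timeline_w, timeline_h):
    """Uniform scale factor that fits a clip inside the timeline frame."""
    return min(timeline_w / clip_w, timeline_h / clip_h)

# A UHD source on a 1080p timeline:
s = fit_to_frame_scale(3840, 2160, 1920, 1080)
print(f"Fit to frame applies a scale of {s:.2f} ({s * 100:.0f}%)")
# With "Scale to frame size" the clip would show up in DR
# at its full 3840x2160 on the 1920x1080 timeline.
```

<p class="wp-block-paragraph">The same factor, expressed as a percentage, is roughly what DR&#8217;s resolution-independent transform ends up showing for the clip.</p>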



<h4 id="resolve-to-premiere" class="wp-block-heading">Resolve to Premiere</h4>



<p class="wp-block-paragraph">Pay attention to the format when exporting from DR: Only FCP 7 V5 XMLs are accepted by PPro via the import command. First of all, the positive: Not only do all tracks and even still images arrive here, but also the information for muting audio or deactivating video tracks. However, this almost exhausts the communication options: clip markers do come across, but they are useless without any text or names, and duration markers are not received at all. At least you don&#8217;t have to limit communication to notes on paper: right-click on the timeline and select Timelines &gt; Export &gt; Timeline Markers to EDL to get a list of timecodes, comment texts and the colour specification as text (note: no clip markers).</p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1181"  height="647"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/TL_Marker_to_EDL.jpg?resize=1181%2C647&quality=80&ssl=1"  alt=""  class="wp-image-159402" ><figcaption class="wp-element-caption">Written communication in the team can be done by an EDL for markers.</figcaption></figure>



<p class="wp-block-paragraph">Crossfading is possible, but only in the standard version (video). Dip to colour usually becomes a black fade, but with pure white it also works in this direction. Edge wipe comes across, even with a change of direction, but don&#8217;t get overconfident: a freely chosen angle is rounded to the nearest 45-degree step. Opacity, zoom, position and rotation including keyframes only work if the project in PPro has already been set up accordingly. If the individual clip is subsequently rearranged in DR, all keyframes are removed. Here too, the anchor point must remain in the centre, otherwise you get nonsense, without any warning. Speed ramps seem to work at first glance, but they are completely wrong. Filters: forget them! Adjustment layers too.</p>



<p class="wp-block-paragraph">What can be easily deactivated in DR can be a bit annoying in PPro: Every time a timeline is re-imported, some clips are re-imported as duplicates, even though they already exist. At some point, you may end up with an endless list of identical clips. But there is a workaround if it happens to you: Create a temporary project, load your XML timeline there, link all the media that may still be missing and save it. Now switch to the original project, go to the temporary project in the Media Browser and link it using Dynamic Link. You can now navigate to the imported sequence and open it in the source window. It can then be dragged into the current project without reloading all clips. Cumbersome, but clearer in the end. All this considered, the path from PPro to DR works much better than the other direction.</p>



<h2 id="advice" class="wp-block-heading">Advice</h2>



<p class="wp-block-paragraph">It should be clear that this information can only be a snapshot, as both NLEs are constantly being further developed. We have tested with DaVinci Resolve 19.1.3 and Premiere Pro 25.1 and have by no means tried out all the transitions and filters – this could fill several pages. In chapter 55 of the Resolve manual under “Preparing to Move Your Project to DaVinci Resolve” there are detailed tables on this, but in our own tests they were by no means correct in all points (although Premiere is hardly mentioned anyway).</p>



<p class="wp-block-paragraph">It is therefore essential to carry out extensive workflow tests with your own material and all the desired design tools. If this is not respected by the production company, it is better to keep your hands off the project or make it clear in the contract that the corresponding additional services will be charged by the hour. Of course, with the XML method, only the information is handed over for editing and not the video material. We must therefore ensure ourselves that it can be found by the other system. This can be particularly confusing under Windows if the respective drive has been assigned a new letter. The link commands in the respective NLE usually solve the problem. </p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="468" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Difference.jpg?resize=1200%2C468&quality=80&ssl=1"  alt=""  class="wp-image-159384" ><figcaption class="wp-element-caption">A reference clip used in difference mode makes it easy to spot any deviations.</figcaption></figure>



<p class="wp-block-paragraph">To be sure that the whole edit has arrived correctly, a reference clip should always be rendered on the source system, preferably again with TC and clip names burnt in. You can load this in DR as &#8220;offline&#8221; in the left-hand viewer; it is then linked via TC and runs constantly synchronised with the right-hand image of the timeline (pay attention to the start TC!). Alternatively, you can right-click in the timeline viewer and switch to &#8220;Difference&#8221;. Then everything except the burn-ins should remain black when scrolling through if no errors have occurred.</p>
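<p class="wp-block-paragraph">Why does this check work? A &#8220;Difference&#8221; view is nothing more than a per-pixel absolute difference: wherever the reference render and the re-conformed timeline agree, the result is pure black. A toy version on 8-bit RGB triples (illustrative only, not a call into Resolve) makes the principle concrete.</p>

```python
# Difference mode: |A - B| per pixel and channel. Identical frames give
# pure black; any non-black pixel marks a deviation in the conform.

def difference(frame_a, frame_b):
    """Pixel-wise absolute difference of two equally sized RGB frames."""
    return [tuple(abs(a - b) for a, b in zip(pa, pb))
            for pa, pb in zip(frame_a, frame_b)]

reference = [(16, 120, 200), (255, 255, 255), (0, 0, 0)]
conformed = [(16, 120, 200), (250, 255, 255), (0, 0, 0)]

print(difference(reference, conformed))  # the middle pixel deviates slightly
```
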



<h4 id="updated-on-september-10th-2025" class="wp-block-heading">Updated on September 10th, 2025</h4>



<p class="wp-block-paragraph">As of today, DR supports ProRes RAW too, so it can be transferred and used on both sides.</p><p>The post <a href="https://digitalproduction.com/2023/09/09/what-about-roundtrips-premiere-pro-and-davinci-resolve/">What about Roundtrips? Premiere Pro and DaVinci Resolve</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/uliplank/">Uli Plank</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?fit=2560%2C590&#038;quality=80&#038;ssl=1" length="163296" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?fit=1200%2C277&#038;quality=80&#038;ssl=1" width="1200" height="277" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title>Screenshot</media:title>
	<media:description type="html"><![CDATA[A screenshot of a video editing software interface displaying a timeline filled with clips and audio tracks. Various video thumbnails are visible on the left side, with blue indicators on the timeline representing different segments.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Wayang_in_PPro.jpg?fit=1200%2C277&#038;quality=80&#038;ssl=1" width="1200" height="277" />
<post-id xmlns="com-wordpress:feed-additions:1">159321</post-id>	</item>
		<item>
		<title>Beyerdynamic gets spacey</title>
		<link>https://digitalproduction.com/2023/08/15/beyerdynamic-gets-spacey/</link>
		
		<dc:creator><![CDATA[Bela Beier]]></dc:creator>
		<pubDate>Tue, 15 Aug 2023 12:50:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[beyerdynamic]]></category>
		<category><![CDATA[beyerdynamic space]]></category>
		<category><![CDATA[DP2304]]></category>
		<category><![CDATA[Microphone]]></category>
		<category><![CDATA[subscribers]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=150464</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?fit=1181%2C803&quality=72&ssl=1" width="1181" height="803" title="" alt="" /></div><div><p>Just between us: sometimes we don't understand our readers - after testing the Poly 20 in the last issue, we were asked if we knew of any alternatives that might be a little more music orientated. And that in a VFX magazine? Well, now that we know the enquirer, it's clear: the reader needs more bass! (Don't worry, Christof, Olaf and Arianna - I won't mention your names!) - and a company from Heilbronn has something like this in its range.</p>
<p>The post <a href="https://digitalproduction.com/2023/08/15/beyerdynamic-gets-spacey/">Beyerdynamic gets spacey</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?fit=1181%2C803&quality=72&ssl=1" width="1181" height="803" title="" alt="" /></div><div><p class="wp-block-paragraph">If we were a pure audio magazine, we would only have to say “Guys, there’s a Bluetooth Mobile Work speaker from Beyerdynamic” and everyone would know what we mean. Beyerdynamic has always been a fixture in the recording studio – legendary headphones and reference microphones for the stage, presentation and recording studio. But now also office products?</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="743"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/Clipboard-Image-4k.png?resize=743%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-150467" ></figure>



<h2 id="space" class="wp-block-heading">Space?</h2>



<p class="wp-block-paragraph">Beyerdynamic Space is a “personal speakerphone” for phone calls and meetings, i.e. a mobile hands-free solution for everyday use. It is connected via Bluetooth, USB or a USB-WL adapter. The 5-watt loudspeaker generates a maximum sound pressure level of 73.8 dBA (pink noise) or 81 dB at one kilohertz. The four built-in microphones work omnidirectionally in the range from 100 Hz to 10 kHz and communication is via Bluetooth 5.0.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="900" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/2023-05-23-13.11.49-4k.png?resize=1200%2C900&quality=72&ssl=1"  alt=""  class="wp-image-150468" ><figcaption class="wp-element-caption">Banana for Scale</figcaption></figure>



<p class="wp-block-paragraph">Charging is via USB-C, Beyerdynamic claims 20 hours of &#8220;operating time&#8221; with a charging time of 2.5 hours. The round &#8220;puck&#8221; is just under 4 centimetres high and has a diameter of 13 centimetres; there are &#8220;touch areas&#8221; on the top for volume up/down, answering and hanging up calls, microphone off, as well as a battery indicator and Bluetooth function. Everything is very clear and easy to use. If you press one of the buttons, either a voice output or a colour change of the LED provides information (LED can be switched off!). There is a tripod thread on the underside as well as a holder for the wireless adapter and four vibration-absorbing feet.</p>



<h2 id="pro" class="wp-block-heading">Pro</h2>



<p class="wp-block-paragraph">Enough data, what does it look like in everyday use? First of all: Here too, the focus is on “speaking” and not “making music”. Nevertheless, voices sound richer and with more bass than with the Poly 20, but it’s just a loudspeaker – so don’t expect studio quality!<br />Nevertheless, Beyerdynamic can’t deny its origins and has integrated a voice and music mode that switches automatically, turning the conference device into a roaring puck that sounds surprisingly good – if you just want to listen to a bit of music and don’t want to master something. Operation is child’s play and the device is as solidly built as you would expect from Beyerdynamic.<br />I reckon you can percussively silence the meeting pain in the arse without damaging the device. (Every meeting has a pain in the arse – if you don’t notice it, you’re one yourself)</p>



<p class="wp-block-paragraph">Speaking of meetings: The microphones are arranged omnidirectionally – so several people can speak into the device and all are recorded at the same volume – this worked surprisingly well in the test – in a three-way conversation (and a cat), everyone was easy to understand and nobody had to shout. Beyerdynamic calls this feature &#8220;360° Smart Mic Technology&#8221;, which allows the Space to recognise whether the incoming sound is a voice or an interference signal. Reverberation, echoes and feedback are thus cancelled out. To summarise: the implemented echo cancellation works perfectly. Anyone who wants to use the device at trade fairs and in open-plan offices will appreciate the Kensington Lock. Wireless devices in particular tend to disappear, but not with a K-Lock.<br />Speaking of lots of people: There is the so-called &#8220;Business Mode&#8221;: A key combination of &#8220;Remove&#8221; and &#8220;Bluetooth&#8221; switches to Business Mode, which prevents the device from connecting undesirably via Bluetooth with a neighbouring device that was last paired with Space.</p>



<figure class="wp-block-image size-full"><img data-recalc-dims="1"  decoding="async"  width="1181"  height="511"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-speakerphone-beyerdynamic-perspective_transparent.png?resize=1181%2C511&quality=72&ssl=1"  alt=""  class="wp-image-150465" ></figure>



<p class="wp-block-paragraph">And a feature that sets the Space apart from the competition: There is a microphone thread on the underside – ¼ inch, absolutely standard size. So if you want to attach an “overhead” or to a microphone arm: simply screw it on and you’re done.<br />The orientation in the room doesn’t matter to the device and you can set it up/arrange it wonderfully at a fixed workstation to suit your personal aesthetics – and if you take it in the discreet grey version (there would also be a white and a blue version), you have a solid communication puck without disturbing the look.<br />A word about firmware and updates – one was released during the test period – you download the “Beyerdynamic Update Hub” and everything worked painlessly straight away. If you remove the software again for reasons of PC hygiene, nothing is left behind. A plus point! And another one: data sheets, software, guides and so on are available without registration. This shouldn’t be standard, but it’s becoming increasingly rare – and the Heilbronn-based company is doing it right. And something we haven’t tested: According to the manufacturer, two Spaces can also be paired and used as stereo speakers – if it’s ever the big conference room you want.</p>



<h2 id="cons" class="wp-block-heading">Cons</h2>



<p class="wp-block-paragraph">No sun without shade – Beyerdynamic can make studio microphones, but Space is intended for office use, so we have a different sound profile. If you want to know what it sounds like, you can listen to an uncompressed .flac file that I (hoarsely) read out: bit.ly/DP_space_test. And every now and then the Bluetooth connection was a bit &#8220;overzealous&#8221; – without the business mode it likes to jump back and forth. And that was the only thing we had to criticise – everything else always worked straight away.</p>



<h2 id="conclusion" class="wp-block-heading">Conclusion</h2>



<p class="wp-block-paragraph">Is the Space better or worse than the Poly 20 from the last issue? To be honest: I’m happy with both devices, but in a direct comparison I tended to use the Space when I wanted music and the Poly when I was in the office. Why?<br />Because I have the Space attached to a microphone arm – a USB cable for charging along the arm – and it is the ideal solution for the “fixed” mobile workplace. The bass is definitely “fatter” and the overall sound is better. However, those who are always on the move might miss the power bank function.<br />The price is also slightly higher than the Poly – we’ve seen it new for 160 to 180 euros – we haven’t seen it used yet. This could mean two things: Beyerdynamic is smaller as a company and hasn’t sold as many units, or: Those who have it don’t want to give it away. We suspect the latter.</p>



<h2 id="outlook" class="wp-block-heading">Outlook</h2>



<p class="wp-block-paragraph">And because we were already in dialogue, the people at Beyerdynamic sent us two more devices that we will be testing in the next issue: The “Fox” USB studio microphone and the DT990 Pro – a microphone/headphone combo that makes the leap between “mobile office” and “content creator”. The speakers are already in the office and we’ll be unpacking them soon.</p>



<figure class="wp-block-gallery has-nested-images columns-4 is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="900" width="1200"  decoding="async"  data-id="150468"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/2023-05-23-13.11.49-4k.png?resize=1200%2C900&quality=72&ssl=1"  alt=""  class="wp-image-150468" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="1080" width="743"  decoding="async"  data-id="150467"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/Clipboard-Image-4k.png?resize=743%2C1080&quality=72&ssl=1"  alt=""  class="wp-image-150467" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="1181"  height="803"  data-id="150466"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?resize=1181%2C803&quality=72&ssl=1"  alt=""  class="wp-image-150466" ></figure>



<figure class="wp-block-image size-large"><img data-recalc-dims="1"  decoding="async"  width="1181"  height="511"  data-id="150465"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-speakerphone-beyerdynamic-perspective_transparent.png?resize=1181%2C511&quality=72&ssl=1"  alt=""  class="wp-image-150465" ></figure>
</figure><p>The post <a href="https://digitalproduction.com/2023/08/15/beyerdynamic-gets-spacey/">Beyerdynamic gets spacey</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/belabeier/">Bela Beier</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?fit=1181%2C803&#038;quality=72&#038;ssl=1" length="217793" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?fit=1181%2C803&#038;quality=72&#038;ssl=1" width="1181" height="803" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/10/space-charcoal-light-speakerphone-beyerdynamic.png?fit=1181%2C803&#038;quality=72&#038;ssl=1" width="1181" height="803" />
<post-id xmlns="com-wordpress:feed-additions:1">150464</post-id>	</item>
		<item>
		<title>Your own particle system with simulation nodes</title>
		<link>https://digitalproduction.com/2023/08/10/your-own-particle-system-with-simulation-nodes/</link>
		
		<dc:creator><![CDATA[Gottfried Hofmann]]></dc:creator>
		<pubDate>Thu, 10 Aug 2023 08:41:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Blender]]></category>
		<category><![CDATA[Blender particle simulation]]></category>
		<category><![CDATA[Blender real-time graphics]]></category>
		<category><![CDATA[Cloth]]></category>
		<category><![CDATA[custom particle effects]]></category>
		<category><![CDATA[DP logo animation]]></category>
		<category><![CDATA[DP2304]]></category>
		<category><![CDATA[fire simulation]]></category>
		<category><![CDATA[geometry nodes]]></category>
		<category><![CDATA[Geometry Nodes baking]]></category>
		<category><![CDATA[Node]]></category>
		<category><![CDATA[particle system]]></category>
		<category><![CDATA[procedural animation]]></category>
		<category><![CDATA[Simulation Zone]]></category>
		<category><![CDATA[simulation-based effects]]></category>
		<category><![CDATA[soft body simulation]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[Turbulence simulation]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=156111</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/aufhaenger.jpg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="" alt="" /></div><div><p>The Digital Production logo has been through a lot recently. It was thrown in Houdini with a wet towel and turned to earth in Tyflow. And now it is<br />
set on fire in Blender - where will it end?</p>
<p>The post <a href="https://digitalproduction.com/2023/08/10/your-own-particle-system-with-simulation-nodes/">Your own particle system with simulation nodes</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/gottfriedhofmann/">Gottfried Hofmann</a>. </p></div>]]></description>
					<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/aufhaenger.jpg?fit=1200%2C675&quality=80&ssl=1" width="1200" height="675" title="" alt="" /></div><div><p class="wp-block-paragraph">There are features in Blender that users have been waiting decades for. Until now, one of these has been a new particle system developed from scratch. In Blender 3.6, an alternative to the outdated, conventional particles is finally available: the &#8220;Simulation Zone&#8221; in Geometry Nodes. As a side effect, not only particles can be simulated, but also cloth, soft bodies and more. However, you still have to build the simulation manually from scratch, and that is exactly what we&#8217;re going to do in this article, using a particle system as an example.</p>



<h2 id="burn-logo-burn" class="wp-block-heading">Burn, logo, burn!</h2>



<p class="wp-block-paragraph">As a concrete example, an object is to burst into flames. Because Blender&#8217;s fire simulation has its quirks, we use an old-school particle simulation for a stylised fire effect instead. The Digital Production logo serves as the example object, but you can use any other 3D object.</p>



<h2 id="what-actually-is-a-simulation-in-blender" class="wp-block-heading">What actually is a Simulation in Blender?</h2>



<p class="wp-block-paragraph">In Blender, every process whose state in one frame depends on its state in the previous frame is a simulation. Classically, this includes particles, cloth, soft bodies, rigid bodies, fire, water, smoke and the Blender speciality “Dynamic Paint”. In contrast, there are tools such as the modifiers from the “Modify”, “Generate” and “Deform” categories as well as Geometry Nodes, where each frame can be computed independently of the others.</p>



<h2 id="simulation-zone" class="wp-block-heading">Simulation Zone</h2>



<p class="wp-block-paragraph">In Blender 3.6, you can now set up a so-called “Simulation Zone” inside these Geometry Nodes: the area in which the simulation runs. You can picture it as follows. Data is fed in at the input of the simulation zone, read once, and from then on remains inside the zone. Processed data can be output; in the next frame it is sent back into the zone via the input and can be processed further there. This can be the position of a particle, but also its size or any other property that Geometry Nodes can access. Changing the size over the lifetime of a particle, for example, was previously not possible at all. Thanks to Geometry Nodes, we now have almost complete freedom in how particle systems are structured. The flip side is that you have to build everything yourself: on the Geometry Nodes side, Blender 3.6 only offers the simulation zone and associated features such as baking, but no high-level tools yet, such as emitters or force fields. These will probably be delivered in the future as node-group assets, similar to the hair assets that found their way into Blender 3.5. Until then, manual work is the order of the day.</p>
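<p class="wp-block-paragraph">The feedback described above can be sketched outside Blender as a plain loop. The following is an illustrative pure-Python sketch of the zone semantics, not Blender API code, and every name in it is made up:</p>

```python
# Pure-Python sketch of the Simulation Zone feedback loop: the state
# written to the zone's output in one frame is read back at its input
# in the next frame.
def simulation_zone(state):
    """One pass through the zone: move every particle up by 0.1 on Z."""
    return [(x, y, z + 0.1) for (x, y, z) in state]

state = [(0.0, 0.0, 0.0)]      # fed in once at the start
for frame in range(1, 11):     # frames 1 to 10
    state = simulation_zone(state)

print(state[0])                # the particle has risen by about 1.0 on Z
```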



<h2 id="create-a-new-scene-and-leave-the-default-cube" class="wp-block-heading">Create a new scene and leave the default cube</h2>



<p class="wp-block-paragraph">Leave the default cube alive for a change; it will act as a container for our particle system. It is best to give it a suitable name such as “Particle Nodes Container” using the shortcut F2. Then switch to the Geometry Nodes workspace and click on “New” to create a new node tree. Assign a suitable name here too, such as “Fire Particle System”.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="880" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/mein-erstes-partikelsystem.jpg?resize=1200%2C880&quality=80&ssl=1"  alt=""  class="wp-image-156124" ><figcaption class="wp-element-caption">My first particle system: the “Hello World” of particle systems, so to speak. Particles are distributed on the faces of the cube, which fly upwards in the following frames thanks to the offset in the set position node.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<h2 id="enter-the-zone" class="wp-block-heading">Enter the Zone</h2>



<p class="wp-block-paragraph">Use “Shift A – Simulation – Simulation Zone” to create a sub-zone in which the simulation will take place later. It is highlighted in burgundy and has its own input and output. If nodes are placed in between, the highlighted area grows. Nodes located within it have access to simulation data and are themselves part of the simulation. Nodes from outside can be connected to the nodes in the zone but then have no access to the simulation themselves, which will prove practical later on.</p>



<h2 id="distribute-points-on-surfaces" class="wp-block-heading">Distribute points on surfaces</h2>



<p class="wp-block-paragraph">A particle system is based on points, so our first task is to add them. For now, the geometry of the default cube will serve as the emitter. Add a “Point – Distribute Points on Faces” node and place it between the geometry inputs and outputs of the simulation zone. Nothing should happen yet, however, as the simulation zone is not yet connected to anything. Drag the geometry output of the simulation output node to the geometry input of the group output node and the geometry input of the group input node to the input of the simulation input node with the same name. If the playhead in the timeline is set to frame 1, points should now appear in the viewport.</p>



<h2 id="set-in-motion" class="wp-block-heading">Set in motion</h2>



<p class="wp-block-paragraph">However, the dots are not yet moving, i.e. we have a particle system but not yet a simulation. A node that changes or updates the position of the particles in each frame is still missing. Add a “Geometry – Write – Set Position” node and place it between the Points output of the Distribute Points on Faces node and the Geometry input of the Simulation Output node. Under “Offset”, set the value for Z to 0.1. If you now start the animation from frame 1, the particles move upwards at a constant speed, as 0.1 is added to the Z position in each frame.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="599" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/08/mein-besseres-partikelsystem.jpg?resize=1200%2C599&quality=80&ssl=1"  alt=""  class="wp-image-156137" ><figcaption class="wp-element-caption">My better particle system: With just a few nodes, we were able to create a particle system with animatable emission.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<h2 id="my-first-particle-system" class="wp-block-heading">My first particle system</h2>



<p class="wp-block-paragraph">Congratulations, you have just created your first particle system of your own with the Blender simulation nodes. It consists of an emitter that distributes particles on the faces of the input object; these particles are shifted upwards by a constant amount in each frame. The structure corresponds to a legacy Blender particle system in which the start and end of the particle emission fall on the same frame.</p>



<h2 id="influence-from-outside" class="wp-block-heading">Influence from outside</h2>



<p class="wp-block-paragraph">Another typical way of emitting particles is recurring emission over several frames, a kind of inflow object. This is also the default setting of the legacy particle system. In the simulation nodes, we have to regularly add new particles from outside the simulation zone. This requires an additional object: at this point, the previous cube becomes the container of the particle system and another object takes on the role of the emitter.</p>



<h2 id="its-a-logo" class="wp-block-heading">It’s a logo!</h2>



<p class="wp-block-paragraph">In our example, we use the DP logo, but you can use any mesh object. Add an “Input – Scene – Object Info” node. It has an orange input socket; connect it to the empty socket of the Group Input node. An input field for objects now appears in the modifier. You can name it by opening the sidebar in the Node Editor with the N key and entering a suitable name such as “Emitter Object” in the Group tab under “Inputs”. You can even define a tooltip here.</p>



<h2 id="degraded-to-a-mere-container" class="wp-block-heading">Degraded to a mere container </h2>



<p class="wp-block-paragraph">Switch the object info node to “Relative” so that the points also appear in the correct position later if you move, scale or rotate the object. Then connect the geometry output to the mesh input of the Distribute Points on Faces node and disconnect the geometry input of the Simulation Input node. This cuts the connection to the original geometry of the cube; it is now just a container for the simulation.</p>



<h2 id="union" class="wp-block-heading">Union</h2>



<p class="wp-block-paragraph">Add an object of your choice to the scene and select it in the Geometry Nodes modifier of the container. If you now play the animation from frame one using the space bar, particles again appear only once. For the emitter to emit particles permanently, the newly added points must be merged with the existing ones in every step.<br />Add a “Geometry – Join Geometry” node and place it on the connection between “Distribute Points on Faces” and “Set Position”. The Join Geometry node has a slightly elongated input socket, which indicates that any number of nodes can be plugged in here. Connect the geometry output of the Simulation Input node to it.</p>
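<p class="wp-block-paragraph">The merge step can be illustrated with a small pure-Python sketch (again not Blender API code; the <code>emit</code> helper is a made-up stand-in for Distribute Points on Faces):</p>

```python
import random

def emit(n, rng):
    """Stand-in for Distribute Points on Faces: n new points on a unit
    square emitter, as mutable [x, y, z] lists."""
    return [[rng.random(), rng.random(), 0.0] for _ in range(n)]

rng = random.Random(42)
particles = []
for frame in range(1, 6):                  # frames 1 to 5
    particles = emit(3, rng) + particles   # Join Geometry: new + existing
    for p in particles:                    # Set Position: constant offset
        p[2] += 0.1

print(len(particles))   # 3 new particles per frame over 5 frames -> 15
```

Without the join, each frame would overwrite the previous points; with it, the first batch has already risen while the newest sits on the emitter.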



<h2 id="randomness-at-any-time" class="wp-block-heading">Randomness at any time</h2>



<p class="wp-block-paragraph">If you now start the simulation from frame one, you will see a stream of particles from the emitter. But they still look like threads because they are generated from exactly the same position on the surface of the object at each frame. However, we need a different distribution in each frame so that it looks like particles are being emitted from the entire surface. Add a node “Input – Scene – Scene Time” and connect the frame output to the seed input of the Distribute Points on Faces node. Also connect the Density input of the node to the empty socket of the Group Input node in order to be able to control the emission density from outside.</p>
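<p class="wp-block-paragraph">Why the frame number works as a seed can be shown in two lines of plain Python (a conceptual sketch; <code>distribute</code> is a hypothetical stand-in for the node):</p>

```python
import random

def distribute(seed, n=4):
    """Stand-in for Distribute Points on Faces: the seed fully
    determines where the points land."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

# Same seed every frame -> identical placement -> thread-like streaks.
same_seed = distribute(seed=1) == distribute(seed=1)
# Feeding the frame number into the seed -> fresh placement per frame.
new_seed = distribute(seed=1) != distribute(seed=2)
print(same_seed, new_seed)   # True True
```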



<h2 id="animated-particle-emission" class="wp-block-heading">Animated particle emission</h2>



<p class="wp-block-paragraph">If you now play the animation, you will not only see a particle beam flying away from your object, you can even animate how many particles the emitter generates per frame. This was previously not so easy to do with the legacy particle system in Blender. This is where the strength of Simulation Nodes comes into play, because you no longer have to worry about such limitations.</p>



<h2 id="for-life" class="wp-block-heading">For life</h2>



<p class="wp-block-paragraph">Another feature of particle systems is the option of giving each particle a lifetime and reading out its current age. In the simulation nodes, we achieve this by setting an “age” attribute for each point at birth, which is then incremented by one in each frame. Add a node “Attribute – Capture Attribute” between Distribute Points on Faces and Join Geometry. A float with the value 0.0 is now assigned to each point when it is created. Connect the attribute output to the empty input socket of the simulation output node. A corresponding output now appears at the simulation input node.</p>



<h2 id="marry" class="wp-block-heading">Marry</h2>



<p class="wp-block-paragraph">Just as with the Join Geometry node for the points, we also need a way to “marry” the age of the existing particles with the newly added ones. Add a “Utilities – Math – Math” node; it is already set to the required “Add” operation by default. Now all particles carry the attribute and it is looped through the simulation. However, we are not yet counting up. Duplicate the Add node, place it between the existing Add node and the Simulation Output node and set the value of the lower input to 1.0. Now one is added to the age in each frame.</p>
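<p class="wp-block-paragraph">Tracking only the ages, the capture-and-increment logic looks like this in plain Python (illustration only, not Blender code):</p>

```python
# Capture Attribute gives each new point age 0.0 at birth; the Add node
# then increments every age by one per frame inside the zone.
ages = []
for frame in range(1, 6):            # frames 1 to 5
    ages = [0.0] + ages              # newly emitted particle joins the rest
    ages = [a + 1.0 for a in ages]   # age += 1 for everyone
print(ages)   # [1.0, 2.0, 3.0, 4.0, 5.0]
```

The oldest particle (emitted in frame 1) has age 5, the newest age 1.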



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="284" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/08/lebenszeit.jpg?resize=1200%2C284&quality=80&ssl=1"  alt=""  class="wp-image-156138" ><figcaption class="wp-element-caption">Age and lifetime: A new lifetime factor has been added to the particle system. If this is exceeded, the corresponding points are removed from the simulation. A corresponding output attribute has been added so that the shading can later be influenced based on the age of the particles.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<h2 id="age-vs-lifetime" class="wp-block-heading">Age vs. lifetime</h2>



<p class="wp-block-paragraph">So far, age has had no effect. We can use it to make particles die, i.e. disappear, after a certain time in frames. Duplicate one of the Add nodes and place it in a free area inside the simulation zone. Connect the upper input to the second Add node and change the operation to “Greater Than”. Then use “Input – Group – Group Input” to add another Group Input node and connect the threshold input of the Greater Than node to its free socket. Name the new parameter “Lifetime”; a default value of 50.0 makes sense here. Add a “Geometry – Operations – Delete Geometry” node and place it between the geometry output of the Set Position node and the geometry input of the Simulation Output node. Connect the Selection input to the Value output of the Greater Than node. From now on, all particles that are older than their lifetime are removed from the simulation. If you play the animation now, the particles start disappearing again from frame 50.</p>
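<p class="wp-block-paragraph">A side effect worth knowing: with constant emission and a fixed lifetime, the particle count settles at emission rate times lifetime. A minimal sketch in plain Python (lifetime scaled down from the article&#8217;s 50.0 for a short run; not Blender code):</p>

```python
LIFETIME = 10.0   # scaled down from the article's 50.0 for brevity

ages = []
for frame in range(1, 31):                         # 30 frames
    ages = [0.0] + ages                            # one new particle per frame
    ages = [a + 1.0 for a in ages]                 # age += 1
    ages = [a for a in ages if not a > LIFETIME]   # Delete Geometry: age > Lifetime
print(len(ages))   # 10: the population plateaus at rate x lifetime
```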



<h2 id="normalisation" class="wp-block-heading">Normalisation</h2>



<p class="wp-block-paragraph">We can also expose the age of the points as an output attribute so that we can later colour the particles differently in Cycles depending on their age. As Cycles likes to be fed values between 0.0 and 1.0, we should normalise the age to this range beforehand. First connect the attribute output of the Simulation Output node to the free socket of the Group Output node. If you now open the Output Attributes panel in the modifier, you will see an empty field. Here you can give the attribute a name so that you can access it in the shader, e.g. “age”. It will then also appear as a separate column in the Spreadsheet Editor. You can change the label of the field and the tooltip again in the Group tab of the sidebar of the Node Editor, for example to “Age”.</p>
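<p class="wp-block-paragraph">The article leaves the exact normalisation open. One simple option, an assumption on our part rather than something from the text, is to divide the age by the Lifetime value and clamp; the arithmetic is trivial, shown here as a hypothetical Python helper:</p>

```python
def normalised_age(age, lifetime):
    """Hypothetical helper: clamp age / lifetime into [0.0, 1.0] so the
    shader receives Cycles-friendly values."""
    return min(max(age / lifetime, 0.0), 1.0)

print(normalised_age(25.0, 50.0))    # 0.5  (halfway through its life)
print(normalised_age(120.0, 50.0))   # 1.0  (clamped)
```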



<h2 id="force-fields" class="wp-block-heading">Force fields</h2>



<p class="wp-block-paragraph">The second component we would expect from the legacy particle system is force fields to control the movement of the particles. In the simulation nodes, this is done within the simulation zone via vectors that drive the offset of the Set Position node. In our case there are two components: a kind of wind that blows the particles in a desired direction, and a field for swirling. We can greatly simplify the wind by assuming constant movement in one direction. If we add a Z component, we have also integrated the buoyancy.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="591" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/08/partikelsystem.jpg?resize=1200%2C591&quality=80&ssl=1"  alt=""  class="wp-image-156140" ><figcaption class="wp-element-caption">The finished particle system: Once the noise texture has been integrated as a turbulence field and the material has been set, the particle system is complete.</figcaption></figure>



<h2 id="drift" class="wp-block-heading">Drift</h2>



<p class="wp-block-paragraph">Connect the offset input of the Set Position node to the free socket of the Group Input node and name the newly created input “Wind Force”. Values of 0.05, 0.01 and 0.025 cause the particles to drift gently and slightly backwards.</p>



<h2 id="swirl" class="wp-block-heading">Swirl</h2>



<p class="wp-block-paragraph">We need a second force to swirl the particles. We can extract this from a noise texture. This is because the RGB colours of the texture can also be interpreted as XYZ values of a vector. These must be merged with the previous forces, again using maths. Add a “Utilities – Vector – Vector Math” node and place it between the wind force socket of the Group Input node and the offset input of the Set Position node. Click on the lower, free vector input of the add node and drag the mouse to a free position. There should be a plus symbol next to the mouse cursor. If you now release the mouse, a search field will appear. Enter “Noise” there and select “Noise Texture – Colour” from the search results.</p>



<h2 id="adjustments" class="wp-block-heading">Adjustments</h2>



<p class="wp-block-paragraph">A noise texture appears whose colour output is directly connected to the vector input of the Add node. If you now play the animation, the particles shoot off diagonally. This is because the noise texture only outputs positive values between 0.0 and 1.0 per channel: its output is meant to be a colour, and colour channel values in Blender are defined in the range between 0.0 and 1.0.</p>



<h2 id="negative" class="wp-block-heading">Negative</h2>



<p class="wp-block-paragraph">However, the particles should move in all directions, even in the opposite direction to an axis, i.e. in a negative direction. To achieve this, we have to subtract 0.5 from all channels, then the range is between -0.5 and 0.5. Duplicate the Vector Math node, place it between the colour output of the Noise Texture node and the existing Vector Math node, which is currently set to “Add”, and set the operator of the new node to “Subtract”. Enter 0.5, 0.5 and 0.5 in the lower vector field.</p>



<h2 id="buzz" class="wp-block-heading">Buzz</h2>



<p class="wp-block-paragraph">If you now play the animation, you will see quite a hustle and bustle; the turbulence caused by the noise texture is still much too strong. Duplicate a Vector Math node again, place it between Subtract and Add and set the operation to “Scale”. Set the lower input to 0.2 and connect it to the free socket of the Group Input node. Name the new input parameter “Turbulence Strength” and view the animation. Set the Lifetime to 100 and the particles will now be swirled around by the noise texture.</p>
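<p class="wp-block-paragraph">The whole vector chain, noise, Subtract 0.5, Scale, then Add to the wind, can be sketched in plain Python. The <code>noise3</code> function below is a cheap made-up stand-in (Blender uses Perlin-style noise), and none of this is Blender API code:</p>

```python
import math

WIND = (0.05, 0.01, 0.025)   # the constant "Wind Force" vector
TURB_STRENGTH = 0.2          # the "Turbulence Strength" scale factor

def noise3(p, t):
    """Cheap stand-in for the 4D Noise Texture: three smooth channels
    in [0, 1], varying with position p and time t."""
    return (0.5 + 0.5 * math.sin(3.1 * p[0] + t),
            0.5 + 0.5 * math.sin(2.7 * p[1] + 1.3 * t),
            0.5 + 0.5 * math.sin(2.2 * p[2] + 0.7 * t))

def frame_offset(p, t):
    n = noise3(p, t)
    centred = [c - 0.5 for c in n]                # Subtract: now [-0.5, 0.5]
    turb = [TURB_STRENGTH * c for c in centred]   # Scale: tame the swirl
    return [w + c for w, c in zip(WIND, turb)]    # Add the wind force

o = frame_offset((0.2, 0.4, 0.1), t=1.0)
print(o)   # a small per-frame offset: wind plus at most 0.1 of turbulence
```

After the Subtract step the turbulence is centred around zero, so particles can drift against an axis as well as along it; the Scale step keeps it from overwhelming the wind.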



<h2 id="control" class="wp-block-heading">Control</h2>



<p class="wp-block-paragraph">How coarse or fine the turbulence is can be controlled via the scale input of the noise texture node. Set it to 1.0 and also connect it to the Group Input node and name the parameter “Turbulence Scale”. If you now play the animation, the particles will flow as if you had added a turbulence force field to a legacy particle system and set the flow value to 1.0. However, this stream-like flow is not quite the way fire moves. The flames flicker and constantly change direction.</p>



<h2 id="the-fourth-dimension" class="wp-block-heading">The fourth dimension</h2>



<p class="wp-block-paragraph">To simulate this effect with the Noise Texture, switch the drop-down in the Noise Texture to “4D”. A new input “W” has now been added. This can be used to permanently change the noise texture. In other programmes, the parameter is called “Evolution”, which really is a good name. To animate this, you do not need to set any keyframes. Instead, connect it to the Seconds output of the Scene Time node and the particles will wobble and flicker when the animation is played.</p>



<h2 id="set-material" class="wp-block-heading">Set material</h2>



<p class="wp-block-paragraph">Before you can move on to shading, you need to give the particles a material. To do this, add a “Material – Set Material” node between the geometry output of the simulation output node and the geometry input of the group output node. Select an existing material from the drop-down menu and edit the name and the shader in the next step.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" height="936" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/08/shading.jpg?resize=1200%2C936&quality=80&ssl=1"  alt=""  class="wp-image-156142" ><figcaption class="wp-element-caption">Rendering in Cycles: The points generated by the simulation nodes cannot yet be displayed with Eevee, so we use Cycles as the render engine. The ‘age’ attribute is used for colouring, the name of which we have assigned in the modifier and entered exactly as it is in the attribute node.</figcaption></figure>



<h2 id="rendering-in-cycles" class="wp-block-heading">Rendering in Cycles</h2>



<p class="wp-block-paragraph">To render the particles, we need the Cycles render engine, as Eevee cannot yet display the points. In the Render tab of the Properties Editor, change the render engine to Cycles and switch to the Shading Workspace. Switch on the Render Preview in the viewport; the particles now appear as small spheres. In the Shader Editor, select the same material in the material drop-down that you have also assigned in the geometry nodes. Now you can also change the name, e.g. to “Particle Material”.</p>



<h2 id="cycles-should-recognise-them-by-their-name" class="wp-block-heading">Cycles should recognise them by their name</h2>



<p class="wp-block-paragraph">Add a new node “Input – Attribute” and enter the exact name you have given to the Age attribute in the “Name” field and connect the Fac output to the Base Colour input of the Principled BSDF node. The particles are now coloured in a gradient from black to white, depending on their age. The perfect input for a colour ramp.</p>



<h2 id="ramp" class="wp-block-heading">Ramp</h2>



<p class="wp-block-paragraph">Insert a “Converter – Colour Ramp” node between Fac and Base Color. Set another stop by pressing the plus icon of the node and set the stop on the far left to a light, desaturated orange. Then set the value for “Value” in the colour wheel to 5.0. Now the particles reflect more light than hits them. A nice effect, which is not physically correct at all, but gives a little more detail than when using emission. Set the second stop to a rich red with a value of 2.0 and the last stop to pure black. Also set the interpolation in the dropdown in the top right-hand corner of the node to “Ease”.</p>
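<p class="wp-block-paragraph">The behaviour of such a ramp can be sketched in plain Python. The assumption that Blender&#8217;s “Ease” interpolation behaves like a smoothstep is ours, and <code>ramp</code> is a hypothetical helper working on a single brightness value instead of full colours:</p>

```python
def smoothstep(t):
    """Eased interpolation between 0 and 1 (assumption: Blender's
    "Ease" behaves roughly like this; the exact curve may differ)."""
    return t * t * (3.0 - 2.0 * t)

def ramp(fac, stops):
    """stops: sorted (position, value) pairs. Piecewise eased lerp,
    like a Colour Ramp reduced to one brightness channel."""
    if fac <= stops[0][0]:
        return stops[0][1]
    for (p0, v0), (p1, v1) in zip(stops, stops[1:]):
        if fac <= p1:
            t = smoothstep((fac - p0) / (p1 - p0))
            return v0 + t * (v1 - v0)
    return stops[-1][1]

# Brightness-only stand-in for the three stops (5.0 -> 2.0 -> 0.0).
stops = [(0.0, 5.0), (0.5, 2.0), (1.0, 0.0)]
print(ramp(0.0, stops), ramp(0.5, stops), ramp(1.0, stops))
```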



<h2 id="fadeout" class="wp-block-heading">Fadeout</h2>



<p class="wp-block-paragraph">The spheres are now coloured, but it would be nice if they fade out as if the flames were burning out or dissipating like smoke. Connect the Fac output of the Attribute node to the Alpha input of the Principled BSDF node. Now it looks as if the logo is smoking at the beginning and the smoke is turning into fire. For the fire to finally fade out, we need another colour ramp. Set three stops again and the interpolation to “Ease”. The centre stop is given a pure white and the right-hand stop a pure black. The left stop is given a value of 0.1 so that the particles on the emitter are still slightly visible and virtually envelop it.</p>



<h2 id="artefacts" class="wp-block-heading">Artefacts</h2>



<p class="wp-block-paragraph">Black artefacts should now have appeared in the tips of the flames, depending on how many particles you use. The black spots are caused by the fact that Cycles only visits a certain number of surfaces, the so-called bounces. And every time a ray passes through one of the spheres, that’s two transparent bounces. Set the number of transparent bounces in the Light Paths panel of the Render Properties to 256. Now the flame tongues fade out cleanly.</p>



<h2 id="light-and-shadow" class="wp-block-heading">Light and shadow</h2>



<p class="wp-block-paragraph">As the particles in our setup are dependent on light from outside to glow, it is worth loading an HDRI texture into the world. You can achieve the black background by opening the “Ray Visibility” panel in the world properties and unchecking “Camera” and “Glossy”. The latter is a small preparatory step for the next step.</p>



<h2 id="laying-the-floor" class="wp-block-heading">Laying the floor</h2>



<p class="wp-block-paragraph">Add a plane to the scene and scale it by a factor of 50. Then add a new material and set the value for “Metallic” in the Principled BSDF node to 1.0. You can adjust the strength of the reflection by making the base colour lighter or darker. For an exact replication of the result, set “Value” in the colour selection of the base colour to 0.5. Next, delete the light source that is still present and, if necessary, make the material of the emitter object darker so that it stands out clearly against the light “smoke”.</p>



<h2 id="bake-a-cake" class="wp-block-heading">Bake a cake</h2>



<p class="wp-block-paragraph">After placing the camera, it’s time to render. You can render both a still image and an animation. It would be practical if the simulation data could be saved so that you don’t have to simulate again and again. This process is called “Baking” and can be found for the simulation nodes in the Physics tab of the Properties Editor. Open the Simulation Nodes panel there and click on “Bake”. All frames in the timeline are now simulated and saved. A new simulation is not necessary.</p>



<h2 id="outlook" class="wp-block-heading">Outlook</h2>



<p class="wp-block-paragraph">This article was intended to give an overview of how particle systems are structured in the new simulation nodes. From this basis you can take things further. You could, for example, change the radius of the particles with age, another feature that is not so easily possible with the legacy particle system. Or you can modify the setup so that the time-dependent calculations always run in seconds instead of frames, making your system independent of the project&#8217;s frame rate. To make the fire flicker even more convincingly, you could also modulate the emission with a noise texture that itself varies over time. And you can make the result more realistic by giving the particles a very slight initial velocity along the normal of the emission object.</p>
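<p class="wp-block-paragraph">The frames-to-seconds idea amounts to multiplying every per-frame delta by 1/fps, so speeds are authored in units per second. A plain-Python sketch of the arithmetic (the constants are illustrative, not from the article):</p>

```python
FPS = 24.0
DT = 1.0 / FPS        # seconds per frame

SPEED = 2.4           # drift in units per SECOND, not per frame

z = 0.0
for frame in range(int(FPS)):   # simulate exactly one second
    z += SPEED * DT             # the per-frame step shrinks as fps grows
print(z)   # ~2.4 after one second, whatever the project's frame rate
```

At 48 fps the loop would run twice as often with half the step, ending at the same height.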



<h2 id="conclusion" class="wp-block-heading">Conclusion</h2>



<p class="wp-block-paragraph">With the new simulation nodes in Blender 3.6, particle systems can be created whose power exceeds that of the existing legacy particle system. Thanks to the strength of the Geometry Nodes, enormous potential lies ahead of you, but it still has to be tapped: because there are as yet no high-level nodes, you have to click together things like an emitter or a force field yourself. It is to be expected that numerous node groups and assets will appear in the future, both from the developers themselves and from the community.</p>



<p>The post <a href="https://digitalproduction.com/2023/08/10/your-own-particle-system-with-simulation-nodes/">Your own particle system with simulation nodes</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/gottfriedhofmann/">Gottfried Hofmann</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/aufhaenger.jpg?fit=4096%2C2304&#038;quality=80&#038;ssl=1" length="148913" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/aufhaenger.jpg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/12/aufhaenger.jpg?fit=1200%2C675&#038;quality=80&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">156111</post-id>	</item>
		<item>
		<title>Photoshop und Express Beta – Refill my Picture?</title>
		<link>https://digitalproduction.com/2023/07/08/photoshop-und-express-beta-refill-my-picture/</link>
		
		<dc:creator><![CDATA[Nils Calles]]></dc:creator>
		<pubDate>Sat, 08 Jul 2023 20:17:00 +0000</pubDate>
				<category><![CDATA[Sponsored]]></category>
		<category><![CDATA[360-degree retouching]]></category>
		<category><![CDATA[Adobe]]></category>
		<category><![CDATA[Adobe Express Beta]]></category>
		<category><![CDATA[Adobe Firefly]]></category>
		<category><![CDATA[Adobe Illustrator AI]]></category>
		<category><![CDATA[After Effects alternative]]></category>
		<category><![CDATA[AI image generation]]></category>
		<category><![CDATA[AI masking]]></category>
		<category><![CDATA[Compositing]]></category>
		<category><![CDATA[content-aware fill]]></category>
		<category><![CDATA[DP2304]]></category>
		<category><![CDATA[equirectangular editing]]></category>
		<category><![CDATA[Motion Graphics]]></category>
		<category><![CDATA[Pano2VR workflow]]></category>
		<category><![CDATA[Photoshop Generative Fill]]></category>
		<category><![CDATA[seamless texture generation]]></category>
		<category><![CDATA[set extension VFX]]></category>
		<category><![CDATA[subscribers]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=159485</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotelExtended_Spaceships_01.jpg?fit=1200%2C742&quality=80&ssl=1" width="1200" height="742" title="" alt="A surreal landscape depicting a suburban area with flying saucers in the sky above. The sunset casts an orange hue, while a red creature stands on a building. Solar panels are visible on the roof, alongside a river and parked boats." /></div><div><p>As soon as the Adobe Firefly beta is officially available to every Adobe Cloud subscriber, the features are also being introduced in the Photoshop beta and then immediately in the Adobe Express beta.</p>
<p>The post <a href="https://digitalproduction.com/2023/07/08/photoshop-und-express-beta-refill-my-picture/">Photoshop und Express Beta – Refill my Picture?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/nilscalles/">Nils Calles</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotelExtended_Spaceships_01.jpg?fit=1200%2C742&quality=80&ssl=1" width="1200" height="742" title="" alt="A surreal landscape depicting a suburban area with flying saucers in the sky above. The sunset casts an orange hue, while a red creature stands on a building. Solar panels are visible on the roof, alongside a river and parked boats." /></div><div><p class="wp-block-paragraph">In Photoshop, Generative Fill, which is likewise driven by English prompts, has been added alongside Content-Aware Fill. In the Adobe Express beta, you can now also generate images from prompts, and the text effects from Firefly are integrated. And in the Adobe Illustrator beta, you can now leave the colour design to the AI.</p>



<p class="wp-block-paragraph">Content-Aware Fill has been around for a while in Photoshop and is often used to remove unwanted parts of an image. Generative Fill now works in a similar way: you create a selection, and if you don&#8217;t enter a prompt, the AI simply tries to fill the selection sensibly. The results are usually much better than with Content-Aware Fill because the engine analyses the image content more thoroughly and recognises objects.</p>



<h2 id="a-test" class="wp-block-heading">A test!</h2>



<p class="wp-block-paragraph">To test this, I first removed the hotel logos and the parked cars from a sunset snapshot taken from the hotel window. Surprisingly, this works better when the selection extends a little beyond the objects. After a few attempts, the vehicles were covered with suitably generated image content. The AI also generated the slightly larger, clunky car at the front. </p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/PS01_Hotel_Original.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="786" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/PS01_Hotel_Original.jpg?resize=1200%2C786&quality=80&ssl=1"  alt=""  class="wp-image-159490" ></a></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">For the set extension on all four sides, I simply enlarged the canvas with the Crop tool and then created a selection on each side that contained a few pixels of the original image. As you can see, even with an empty prompt the AI then generates mostly very suitable image parts that even take the colour mood and shadows into account. However, the areas to be extended must not be too large, otherwise the generated content becomes muddy. As far as I know, the generated area is currently limited to 1024 × 1024 pixels; anything larger than this is stretched. It is therefore advisable to generate larger extensions in stages. In the lower part, I used the prompt “river with reflection” to create a river in the image, onto which I then generated a boat with the prompt “boat”, although this turned out a little too bright.</p>
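<p class="wp-block-paragraph">The staged approach can be sketched in code. The following is a minimal, hypothetical Python helper (not part of any Photoshop API) that plans how to split a wide extension into fill steps of at most 1024 pixels, each selection keeping an overlap strip of existing pixels so the AI can match the colour mood:</p>

```python
# Hypothetical planning helper: split a large canvas extension into
# generative-fill stages that each stay within the ~1024 px limit.
# The overlap keeps a strip of already-filled pixels in every selection.

def plan_extension_stages(current_width, target_width, max_step=1024, overlap=32):
    """Return (selection_start, selection_end) x-spans for each fill stage."""
    stages = []
    x = current_width
    while x < target_width:
        # new pixels per stage, leaving room for the overlap strip
        step = min(max_step - overlap, target_width - x)
        stages.append((x - overlap, x + step))
        x += step
    return stages

# Extending a 1200 px wide image to 3000 px takes two stages.
stages = plan_extension_stages(1200, 3000)
```

<p class="wp-block-paragraph">Each returned span is at most 1024 pixels wide, so it can be filled in one generation without the stretching described above.</p>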



<p class="wp-block-paragraph">The other prompts, “red table and seats”, “red flowers”, “plants” on the roof railing and “alligator”, worked well. The “blonde girl” and the “reflective solar panel” are also a little bright, but a little reworking brings them into line. Unfortunately, numbers in prompts don’t really work. I wanted to conjure up a large number of spaceships in the sky, but no matter whether I put “armada”, “fleet”, “many”, “100” or “millions of steampunk spaceships” in the prompt, there were always only a few, and of course different variants each time.</p>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotel_Extended01.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="742" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotel_Extended01.jpg?resize=1200%2C742&quality=80&ssl=1"  alt=""  class="wp-image-159491" ></a><figcaption class="wp-element-caption">The first prompts, still without spaceships. </figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">My favourite object, however, is the “red hairy monster” on the roof. Three versions are created with each generation, and if none of them fit, three new ones can be generated. You can also adjust the prompt a little each time and hope that the result will eventually match what you had in mind. However, there are no guarantees. As long as that’s the case, we creatives don’t need to worry too much about the AI taking our jobs, because concrete specifications for real customer orders cannot be implemented this way. AI can, however, be very helpful in finding ideas, even with its sometimes very strange excesses.</p>



<figure class="wp-block-image"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehle05.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="600" width="1200"  decoding="async"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehle05.jpg?resize=1200%2C600&quality=80&ssl=1"  alt=""  class="wp-image-159495" ></a><figcaption class="wp-element-caption">This is what my outfit and the future of Mannheim’s creative Jungbusch neighbourhood could look like if you let the AI have a go. But this is by no means intended as a suggestion for Mannheim’s urban planners.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<h2 id="equirectangular" class="wp-block-heading">Equirectangular</h2>



<p class="wp-block-paragraph">The AI seems to recognise the equirectangular format in Photoshop, because my attempts to modify Böckstrasse in Jungbusch in Mannheim, where a historic building burnt down, also worked well directly in Photoshop. However, it should be noted that when saving as a JPG, Photoshop does not automatically set the flag by which Facebook, for example, recognises that it is a 360-degree image. You have to select all layers and create a new panorama layer from the selected layers via the 3D menu. This must then be exported as a panorama using the same menu. Unfortunately, the panorama workflow in Photoshop is a bit slow. It is quicker to export the files saved as PSDs with Pano2VR via quick share, where you can also specify the initial perspective as well as the geodata and metadata.</p>
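<p class="wp-block-paragraph">The flag in question is the GPano XMP metadata block that 360-degree viewers look for in the JPG. As a sketch of what tools like Pano2VR write for you (building the packet is shown here; actually embedding it in the file would need an XMP-capable tool such as exiftool, which is outside Photoshop):</p>

```python
# Sketch only: build the GPano XMP packet that 360-degree viewers check
# for an equirectangular image. Embedding it into the JPG is left to an
# XMP-capable tool; this just shows the required fields.

def gpano_xmp(width, height):
    return (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        '<rdf:Description xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"'
        ' GPano:ProjectionType="equirectangular"'
        f' GPano:FullPanoWidthPixels="{width}"'
        f' GPano:FullPanoHeightPixels="{height}"/>'
        '</rdf:RDF></x:xmpmeta>'
    )

packet = gpano_xmp(4096, 2048)
```

<p class="wp-block-paragraph">The ProjectionType field set to “equirectangular” is the part Photoshop’s plain JPG export omits.</p>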



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehleORIG.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="600" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehleORIG.jpg?resize=1200%2C600&quality=80&ssl=1"  alt=""  class="wp-image-159496" ></a><figcaption class="wp-element-caption">This is what it looks like in Böckstrasse in Jungbusch in Mannheim, where a historic building has burnt down. </figcaption></figure>



<p class="wp-block-paragraph">The removal of passers-by, bars, barriers, excavators and ruined buildings, as well as the modification of my outfit and my robot dog look pretty good in perspective. I even have hair on my head again. The only problem I still had was with my hands. Hopefully the city planners in Mannheim won’t take too much of an example from my building design. </p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehle03.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="600" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/MannheimKaufmannsmuehle03.jpg?resize=1200%2C600&quality=80&ssl=1"  alt=""  class="wp-image-159498" ></a><figcaption class="wp-element-caption">This is what it might have looked like without excavators, barriers and fencing (and with hair on my head), according to the AI.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">Of course, there is still plenty of room for optimising the workflow. For example, it would be more effective if a mask were also generated for each object so that it could be resized and repositioned. Because if you change the position of the selection, the generated edges no longer match the background, and if you generate with the same prompt again, the result is often a completely different object. It would be good to be able to set the prompts here. It would also be nice if the prompts could be included in the layer names so that the objects could be assigned more easily.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/IMG_20230216_083454_00_merged.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="600" width="1200"  decoding="async"  data-id="159500"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/IMG_20230216_083454_00_merged.jpg?resize=1200%2C600&quality=80&ssl=1"  alt=""  class="wp-image-159500" ></a><figcaption class="wp-element-caption">IMG_20230216_083454_00_168.insp</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Monsters_IMG_20230216_083454_00_merged_share_.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="600" width="1200"  decoding="async"  data-id="159499"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Monsters_IMG_20230216_083454_00_merged_share_.jpg?resize=1200%2C600&quality=80&ssl=1"  alt=""  class="wp-image-159499" ></a></figure>
<figcaption class="blocks-gallery-caption wp-element-caption">Retouching 360-degree images is also much easier with Generative Fill. Especially with the patch function in Pano2VR, the automatically equalised patches in Photoshop can be combined very well with generative fills.</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Anould_Pano2VR_PS.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="676" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Anould_Pano2VR_PS.jpg?resize=1200%2C676&quality=80&ssl=1"  alt=""  class="wp-image-159502" ></a><figcaption class="wp-element-caption">Even classic landscapes can be made even better with Firefly – even regions worth seeing, such as the Vosges, are better with red pixels!</figcaption></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Anould_Pano2VR.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="675" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/Anould_Pano2VR.jpg?resize=1200%2C675&quality=80&ssl=1"  alt=""  class="wp-image-159503" ></a></figure>



<p class="wp-block-paragraph"></p>



<h2 id="seamless-textures-with-generative-fill" class="wp-block-heading">Seamless textures with generative fill</h2>



<p class="wp-block-paragraph"></p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/TexturVerschiebenEffekt.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="647" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/TexturVerschiebenEffekt.jpg?resize=1200%2C647&quality=80&ssl=1"  alt=""  class="wp-image-159506" ></a></figure>



<p class="wp-block-paragraph"></p>



<p class="wp-block-paragraph">In this example, I first reduced a mobile phone image to 1024 × 1024. Then I used the Offset filter to move the seams to the centre. Now simply draw a large rectangular selection around the seam and generate the content with an empty prompt. Then bake in the generated part and repeat the whole thing with a horizontal shift. This method can also be used to create seamless ring panoramas from widescreen images, for example for environment maps in 3D programmes.</p>
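<p class="wp-block-paragraph">The offset step itself is simple enough to sketch outside Photoshop. A minimal pure-Python version of a wrap-around shift (the equivalent of the Offset filter with “Wrap Around” enabled), operating on a toy pixel grid, shows how the former edges end up meeting in the centre, where the generative fill can then hide the seam:</p>

```python
# Pure-Python equivalent of the Offset filter with "Wrap Around":
# shift the image by half its size so the former edges meet in the
# centre, where a generative fill can then hide the seam.

def wrap_offset(pixels, dx, dy):
    """Shift a 2D grid of pixels by (dx, dy), wrapping at the edges."""
    h = len(pixels)
    w = len(pixels[0])
    return [
        [pixels[(y - dy) % h][(x - dx) % w] for x in range(w)]
        for y in range(h)
    ]

# 4x4 toy "image" whose pixels record their own coordinates:
# after a half-size shift, the former corner (0, 0) sits in the middle.
img = [[(x, y) for x in range(4)] for y in range(4)]
shifted = wrap_offset(img, 2, 2)
```

<p class="wp-block-paragraph">Because the shift wraps, the outer edges of the shifted image are guaranteed to tile seamlessly; only the cross-shaped seam in the middle still needs retouching.</p>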



<figure class="wp-block-gallery has-nested-images columns-5 is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur01.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1024"  height="1024"  data-id="159508"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur01.jpg?resize=1024%2C1024&quality=80&ssl=1"  alt=""  class="wp-image-159508" ></a></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur02.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1024"  height="1024"  data-id="159507"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur02.jpg?resize=1024%2C1024&quality=80&ssl=1"  alt=""  class="wp-image-159507" ></a></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur03.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1024"  height="1024"  data-id="159511"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur03.jpg?resize=1024%2C1024&quality=80&ssl=1"  alt=""  class="wp-image-159511" ></a></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur04.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1024"  height="1024"  data-id="159509"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur04.jpg?resize=1024%2C1024&quality=80&ssl=1"  alt=""  class="wp-image-159509" ></a></figure>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur05.jpg?quality=80&ssl=1"><img data-recalc-dims="1"  decoding="async"  width="1024"  height="1024"  data-id="159510"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/RoteBlatterTextur05.jpg?resize=1024%2C1024&quality=80&ssl=1"  alt=""  class="wp-image-159510" ></a></figure>
<figcaption class="blocks-gallery-caption wp-element-caption">In five steps, every image becomes a seamless texture.</figcaption></figure>



<p class="wp-block-paragraph"></p>



<h2 id="adobe-express-beta" class="wp-block-heading">Adobe Express Beta</h2>



<p class="wp-block-paragraph">Some Firefly functions have also been integrated here. The advantage is that you can combine them with the other functions. This makes for cool online image compositions for social media, or even flyers and posters. The integration of animation, video and external media has also been improved, so you can produce attractive animated trailers and motion graphics without having to resort to After Effects.</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/EXPRESS_01_Steampunk-Spaceships.jpg?quality=80&ssl=1"><img data-recalc-dims="1" height="607" width="1200"  decoding="async"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/EXPRESS_01_Steampunk-Spaceships.jpg?resize=1200%2C607&quality=80&ssl=1"  alt=""  class="wp-image-159504" ></a></figure>



<h2 id="conclusion" class="wp-block-heading">Conclusion</h2>



<p class="wp-block-paragraph">It is important to note what is stated in the Adobe generative AI beta user guidelines: while the generative AI features are in beta, all generated output is for personal use only and may not be used commercially. It remains to be seen what the licence model will look like after the beta phase. Of course, we hope that for commercial use it will simply become part of the Adobe Creative Cloud subscription, as it was in the beta. The addictive factor should not be underestimated, because it’s fun to work with these new tools.</p><p>The post <a href="https://digitalproduction.com/2023/07/08/photoshop-und-express-beta-refill-my-picture/">Photoshop und Express Beta – Refill my Picture?</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/nilscalles/">Nils Calles</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotelExtended_Spaceships_01.jpg?fit=4096%2C2532&#038;quality=80&#038;ssl=1" length="270729" type="image/jpeg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotelExtended_Spaceships_01.jpg?fit=1200%2C742&#038;quality=80&#038;ssl=1" width="1200" height="742" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[A surreal landscape depicting a suburban area with flying saucers in the sky above. The sunset casts an orange hue, while a red creature stands on a building. Solar panels are visible on the roof, alongside a river and parked boats.]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/02/DP_AI_AmbientHotelExtended_Spaceships_01.jpg?fit=1200%2C742&#038;quality=80&#038;ssl=1" width="1200" height="742" />
<post-id xmlns="com-wordpress:feed-additions:1">159485</post-id>	</item>
	</channel>
</rss>
