<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="https://digitalproduction.com/wp-content/plugins/xslt/public/template.xsl"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:rssFeedStyles="http://www.wordpress.org/ns/xslt#"
>

<channel>
	<title>Storyboarding - DIGITAL PRODUCTION</title>
	<atom:link href="https://digitalproduction.com/tag/storyboarding/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitalproduction.com</link>
	<description>Magazine for Digital Media Production</description>
	<lastBuildDate>Fri, 31 Oct 2025 08:19:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">236729828</site>	<item>
		<title>AI-Workspace: Flora Fauna</title>
		<link>https://digitalproduction.com/2025/04/25/ai-workspace-flora-fauna/</link>
		
		<dc:creator><![CDATA[Jörn-Erik Burkert]]></dc:creator>
		<pubDate>Fri, 25 Apr 2025 05:45:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[topnews]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI image generation]]></category>
		<category><![CDATA[AI video generation]]></category>
		<category><![CDATA[Concept]]></category>
		<category><![CDATA[Flora Fauna AI]]></category>
		<category><![CDATA[FloraFaunaAI]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[Google Veo2]]></category>
		<category><![CDATA[Luma]]></category>
		<category><![CDATA[Luma Ray 2 Flash]]></category>
		<category><![CDATA[Node]]></category>
		<category><![CDATA[post-production]]></category>
		<category><![CDATA[previs]]></category>
		<category><![CDATA[Recraft]]></category>
		<category><![CDATA[Recraft Stable Diffusion]]></category>
		<category><![CDATA[Runway]]></category>
		<category><![CDATA[Runway Gen-3]]></category>
		<category><![CDATA[storyboard AI tool]]></category>
		<category><![CDATA[Storyboarding]]></category>
		<category><![CDATA[subscribers]]></category>
		<category><![CDATA[VFX]]></category>
		<category><![CDATA[VFX AI tools]]></category>
		<category><![CDATA[VFX concept art]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=165465</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Post-Image-Flora-Fauna-AI.png?fit=1200%2C658&quality=72&ssl=1" width="1200" height="658" title="" alt="AI-Workspace: Flora Fauna AI" /></div><div><p>There are many different tools for visualisation with AI. "Flora Fauna AI" takes an interesting alternative approach: the system aims to combine many technologies under one roof and be an AI workspace.</p>
<p>The post <a href="https://digitalproduction.com/2025/04/25/ai-workspace-flora-fauna/">AI-Workspace: Flora Fauna</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/jeburkert/">Jörn-Erik Burkert</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Post-Image-Flora-Fauna-AI.png?fit=1200%2C658&quality=72&ssl=1" width="1200" height="658" title="" alt="AI-Workspace: Flora Fauna AI" /></div><div>
<p class="wp-block-paragraph">Available systems with artificial intelligence (AI) play to their strengths in different ways when creating images and videos. AI artists therefore use different solutions depending on the workflow. With <a href="https://www.florafauna.ai/" target="_blank" rel="noreferrer noopener">&#8220;Flora Fauna AI&#8221;</a>, you get a solution that combines popular AI systems and lets them interact with each other via a node network. No installation is necessary as the software runs completely in a compatible browser. Images and videos can be loaded into the workspace via the import function.</p>



<p class="wp-block-paragraph"><a href="https://digitalproduction.com/tag/nodes/">Nodes</a> for text, images and videos are available. They can be inserted and linked in the workspace with a mouse click, and navigating the network is simple, with many similarities to CG packages. Notes can also be placed in boxes to describe the workflow. Working in groups is supported, and existing projects can be shared.</p>



<h2 id="nodes-in-flora-fauna-ai" class="wp-block-heading">Nodes in Flora Fauna AI</h2>



<p class="wp-block-paragraph">Text nodes are a way to improve your own prompts, but also to analyse images &#8211; for example, their style or colour schemes. This information can then be incorporated into the process when generating and modifying motifs, enabling style transfers or the recolouring of images and videos. The user can choose from various AI models for the text nodes:</p>



<ul class="wp-block-list">
<li>Claude 3 Sonnet</li>



<li>GPT o4 mini</li>



<li>Google Gemini 2.0 Flash</li>
</ul>



<p class="wp-block-paragraph">To create still images, simply type a prompt into the input field of the image node and start the calculation. Text or other images can also be used as input during this process. Two images with people can be combined to create completely new scenarios. The style option makes it easy to automatically change the result. This creates a great deal of flexibility when working.</p>



<figure data-wp-context="{&quot;imageId&quot;:&quot;69e0e9c4a4f70&quot;}" data-wp-interactive="core/image" data-wp-key="69e0e9c4a4f70" class="wp-block-image size-full wp-lightbox-container"><img data-recalc-dims="1"  fetchpriority="high"  decoding="async"  width="1200"  height="676"  data-wp-class--hide="state.isContentHidden"  data-wp-class--show="state.isContentVisible"  data-wp-init="callbacks.setButtonStyles"  data-wp-on--click="actions.showLightbox"  data-wp-on--load="callbacks.setButtonStyles"  data-wp-on--pointerdown="actions.preloadImage"  data-wp-on--pointerenter="actions.preloadImageWithDelay"  data-wp-on--pointerleave="actions.cancelPreload"  data-wp-on-window--resize="callbacks.setButtonStyles"  sizes="(max-width: 1200px) 100vw, 1200px"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/AI-Workspace-Flora-Fauna-AI-Car-Project.png?resize=1200%2C676&#038;quality=72&#038;ssl=1"  alt="AI-Workspace: Flora Fauna AI - Car Project"  class="wp-image-165469" ><button
			class="lightbox-trigger"
			type="button"
			aria-haspopup="dialog"
			data-wp-bind--aria-label="state.thisImage.triggerButtonAriaLabel"
			data-wp-init="callbacks.initTriggerButton"
			data-wp-on--click="actions.showLightbox"
			data-wp-style--right="state.thisImage.buttonRight"
			data-wp-style--top="state.thisImage.buttonTop"
		>
			<svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="none" viewBox="0 0 12 12">
				<path fill="#fff" d="M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z" />
			</svg>
		</button></figure>



<p class="wp-block-paragraph">The nodes can be quickly duplicated for different variants, and the parameters or the AI engine used for the calculation can be changed. &#8220;Flora Fauna AI&#8221; automatically generates variations of an image on request. When calculating images, the system supports various AI engines, including:</p>



<ul class="wp-block-list">
<li>Flux Dev and Flux Pro 1.1</li>



<li>Ideogram 2.0</li>



<li>Luma Photon</li>



<li>Recraft V3</li>

<li>Stable Diffusion 3.5</li>
</ul>



<p class="wp-block-paragraph">If an existing image is linked to an image node, the context changes. The existing image can optionally be processed with Google Gemini 2.0 Flash, Flux Canny, Flux Depth and Flux Redux, and there is also an option to remove the background. In practice, the user can quickly change the perspective or view with &#8220;Gemini 2.0 Flash&#8221;, for example &#8211; useful if you need different framings for video production later on.</p>



<figure data-wp-context="{&quot;imageId&quot;:&quot;69e0e9c4a56a0&quot;}" data-wp-interactive="core/image" data-wp-key="69e0e9c4a56a0" class="wp-block-image size-large wp-lightbox-container"><img data-recalc-dims="1" height="1080" width="1186"  decoding="async"  data-wp-class--hide="state.isContentHidden"  data-wp-class--show="state.isContentVisible"  data-wp-init="callbacks.setButtonStyles"  data-wp-on--click="actions.showLightbox"  data-wp-on--load="callbacks.setButtonStyles"  data-wp-on--pointerdown="actions.preloadImage"  data-wp-on--pointerenter="actions.preloadImageWithDelay"  data-wp-on--pointerleave="actions.cancelPreload"  data-wp-on-window--resize="callbacks.setButtonStyles"  src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Flora-fauna-network.png?resize=1186%2C1080&#038;quality=72&#038;ssl=1"  alt="AI-Workspace: Flora Fauna AI"  class="wp-image-165474" ><button
			class="lightbox-trigger"
			type="button"
			aria-haspopup="dialog"
			data-wp-bind--aria-label="state.thisImage.triggerButtonAriaLabel"
			data-wp-init="callbacks.initTriggerButton"
			data-wp-on--click="actions.showLightbox"
			data-wp-style--right="state.thisImage.buttonRight"
			data-wp-style--top="state.thisImage.buttonTop"
		>
			<svg xmlns="http://www.w3.org/2000/svg" width="12" height="12" fill="none" viewBox="0 0 12 12">
				<path fill="#fff" d="M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z" />
			</svg>
		</button><figcaption class="wp-element-caption">The system makes it possible to combine and modify images as the basis for creating a video.</figcaption></figure>



<p class="wp-block-paragraph">Video production is also part of the programme&#8217;s range of functions. The starting point can be a simple prompt, or images and descriptions can be combined via the node network. Depending on the instructions, the selected AI generates the clips. For the calculation, the user can choose between:</p>



<ul class="wp-block-list">
<li>Hailuo MiniMax</li>



<li>Kling 2.0 / Kling Pro 1.6 / Kling 1.5</li>



<li>Luma Ray 2 / Luma Ray 2 Flash</li>



<li>Pika</li>



<li>Runway Gen-3 Alpha</li>



<li>Tencent Hunyuan</li>



<li>Lightricks LTXV</li>



<li>Google Veo2 </li>



<li>WAN 2.1</li>
</ul>



<h2 id="create-your-own-styles" class="wp-block-heading">Create your own styles</h2>



<p class="wp-block-paragraph">Since the end of March 2025, the available styles for creating images can be expanded with your own entries. To do this, the user uploads a series of 20 to 30 images to &#8220;Flora Fauna AI&#8221;. Their content determines the new style. Training is started with one click and the style appears in the list after a short time. It is important here not to violate copyrights. The controversy surrounding the <a href="https://www.ghibli.jp/" target="_blank" rel="noreferrer noopener">&#8220;Studio Ghibli&#8221;</a> style in images and videos when &#8220;OpenAI ChatGPT 4o&#8221; was released shows how polarised this topic is.</p>



<h2 id="conclusion" class="wp-block-heading">Conclusion</h2>



<p class="wp-block-paragraph">With &#8220;Flora Fauna AI&#8221; you get a useful AI workspace for a wide range of tasks. The system can help with the development of designs, storyboards or concepts for games. The node network makes the work flexible and is a great help for the presentation of designs. Thanks to the broad support of various AI systems, there is no need to switch between different offers. The styling allows the generation of completely individualised images and videos. Many tasks can be realised directly in a project. &#8220;Flora Fauna AI&#8221; also eliminates the need to manage different subscriptions because the service manages the various AI systems under one roof.</p>



<p class="wp-block-paragraph">Featured image: © Flora Fauna AI</p><p>The post <a href="https://digitalproduction.com/2025/04/25/ai-workspace-flora-fauna/">AI-Workspace: Flora Fauna</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/jeburkert/">Jörn-Erik Burkert</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Post-Image-Flora-Fauna-AI.png?fit=3070%2C1684&#038;quality=72&#038;ssl=1" length="477980" type="image/png" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Post-Image-Flora-Fauna-AI.png?fit=1200%2C658&#038;quality=72&#038;ssl=1" width="1200" height="658" medium="image" type="image/png">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2025/04/Post-Image-Flora-Fauna-AI.png?fit=1200%2C658&#038;quality=72&#038;ssl=1" width="1200" height="658" />
<post-id xmlns="com-wordpress:feed-additions:1">165465</post-id>	</item>
		<item>
		<title>Comfy UI &#8211; AI for artists!</title>
		<link>https://digitalproduction.com/2024/06/09/comfy-ui-ai-for-artists/</link>
		
		<dc:creator><![CDATA[Arne Palluck]]></dc:creator>
		<pubDate>Sun, 09 Jun 2024 21:29:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<category><![CDATA[DP2403]]></category>
		<category><![CDATA[layout]]></category>
		<category><![CDATA[Stable Diffusion]]></category>
		<category><![CDATA[Storyboarding]]></category>
		<category><![CDATA[subscribers]]></category>
		<guid isPermaLink="false">https://digitalproduction.com/?p=144475</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/image-31.webp?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="" /></div><div><p>While some treat Stable Diffusion as their new plaything and others lament the demise of the industry, we're staying out of this round of prophecy and looking at what you can really do with it as artists - and not as slightly brainwashed newsletter marketers.</p>
<p>The post <a href="https://digitalproduction.com/2024/06/09/comfy-ui-ai-for-artists/">Comfy UI – AI for artists!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/arnepalluck/">Arne Palluck</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/image-31.webp?fit=1200%2C675&quality=72&ssl=1" width="1200" height="675" title="" alt="" /></div><div>

<p class="wp-block-paragraph">Talent. A relic? Creativity. An algorithm? 3D artists who have agonised through thousands of hours of tutorials, often accompanied by the miserable soundscape of distant Indian intersections, are now looking at upcoming tools like &#8220;Sora&#8221; with interest and intimidation. Is what my brilliant YouTube mentor from the other side of the world taught me in his flat really becoming superfluous? No! Image Generative AI is powerful, no question. But as anyone who has dipped their toes into the world of LUTs and ready-made scripts knows, the magic word is control, and anyone who believes that an art director can now fulfil their own correction requests with text-to-image command input (&#8220;prompting&#8221; for short) is wrong&#8230; Because gaining control over the AI-generated image requires more than a weekend course in &#8220;prompt engineering&#8221; at an adult education centre.</p>





<p class="wp-block-paragraph">We gain control over Computer Generated Imagery (CGI), just as we CGI artists have always done, not through prompts and magic words, but through dedicated software. The one we will be talking about is ComfyUI. And if you&#8217;re afraid of the art director using it to carry out the next change loop himself, you&#8217;re probably also afraid of the &#8220;creative&#8221; from the advertising agency stealing your Houdini files and opening an FX department with them tomorrow.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/d9d2046c-c341-46bf-bc41-82b2dbecc49e.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="comfyui" class="wp-block-heading">ComfyUI?</h6>





<p class="wp-block-paragraph">ComfyUI is a node-based interface for Stable Diffusion &#8211; and offers maximum control over Image Generative AI. It&#8217;s about more than the simple Midjourney principle of &#8220;I write a prompt and it generates an image&#8221; &#8211; that would be pure surface scratching. With the help of Stable Diffusion and ComfyUI, I can not only create images on command, but also extract a good depth pass from a real photo &#8211; and much more. Beyond the gamble of prompting, I can gain control over an image in a way that Midjourney cannot. I can generate a mountain during the day and just change the time of day or the position of the sun. I can mask out the main character from footage at the touch of a button, faster than the After Effects police allow. And I can create 3D models from images and text input. And that&#8217;s what we&#8217;re going to talk about today.</p>





<h6 id="what-can-you-actually-do" class="wp-block-heading">What can you actually do?</h6>





<p class="wp-block-paragraph">If you have experience with nodes and a solid understanding of compositing, Houdini, Maya or Blender, then you are well equipped to benefit from ComfyUI. If you&#8217;re just looking for a 1-click solution to create deepfake porn, ComfyUI is definitely not for you. Unlike Midjourney, for example, where you just need to be able to type a few words (and if you don&#8217;t know how to do that, you could dictate it to the AI you trust), the bare &#8220;text to image&#8221; image generation wouldn&#8217;t do justice to the tool&#8217;s capabilities. It&#8217;s very easy to set up, but understanding it and being able to use it in a controlled manner requires the skillset of an experienced mid- to senior-level 3D generalist. Not only to get what you want in a controlled manner, but also to understand where the potential lies. The average user would probably be put off by the node-based interface, but not the 3D artist who feels at home in Maya&#8217;s Hypershade, Nuke, Houdini and the Geometry Node Editor.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/9304f768-a01e-4f09-a71c-10a5afdea1ed.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="installing-and-setting-up-comfyui" class="wp-block-heading">Installing and setting up ComfyUI</h6>





<p class="wp-block-paragraph">Installing ComfyUI is simple and can be done in just a few steps. Here is your guide to getting everything up and running quickly. First you need a programme for unpacking files; 7-Zip or WinRAR are top choices here. (A little tip: the trial version of WinRAR is also sufficient.)</p>





<h6 id="download-comfyui" class="wp-block-heading">Download ComfyUI</h6>





<p class="wp-block-paragraph">Then go to the ComfyUI page on GitHub (github.com/comfyanonymous/ComfyUI). Under &#8216;Installing&#8217; you will find a direct link to the download. Download the 1.4 gigabyte 7-Zip file and unzip it into a folder of your choice &#8211; and ComfyUI is installed!</p>
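<p class="wp-block-paragraph">For command-line fans, the unpacking step can also be sketched with 7-Zip&#8217;s CLI tool. Note that the archive name and target folder below are examples, not taken from the download page:</p>

```shell
REM Unpack the portable ComfyUI build with the 7-Zip command-line tool.
REM Archive name and target path are examples - adjust to your download.
7z x ComfyUI_windows_portable_nvidia.7z -oD:\ComfyUI
```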





<h6 id="indispensable-the-comfyui-manager" class="wp-block-heading">Indispensable: The ComfyUI Manager</h6>





<p class="wp-block-paragraph">Not just a recommendation from me, but an absolute must-have is the ComfyUI Manager, which you can find at github.com/ltdrdata/ComfyUI-Manager.</p>





<p class="wp-block-paragraph">I&#8217;ll explain why it&#8217;s so important in a moment. First, install it:</p>

<ul class="wp-block-list">
<li>Navigate to the custom nodes directory: open the Windows Command Prompt by clicking &#8220;Search&#8221; in the taskbar, typing &#8220;cmd&#8221; and pressing Enter. Then change into the custom_nodes directory of your ComfyUI folder by copying the path and entering, for example, &#8220;cd D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes&#8221;. (Attention! This is my path; yours may of course differ.) Press Enter.</li>

<li>Clone the ComfyUI Manager: copy the command git clone <a href="https://github.com/ltdrdata/ComfyUI-Manager.git" target="_blank" rel="noreferrer noopener">https://github.com/ltdrdata/ComfyUI-Manager.git</a> into the command line and press Enter. This clones the ComfyUI Manager into the directory and may take a few seconds.</li>

<li>Restart ComfyUI: after the restart you will find the menu item &#8220;Manager&#8221; in the sidebar on the right-hand side.</li>
</ul>

<p class="wp-block-paragraph">The Manager is an extremely powerful tool: the git-clone fiddling you just did to install it only has to be done once. Without the Manager, you would have to install every other plug-in in the same cumbersome way. The Manager, on the other hand, offers a database in which all ComfyUI plug-ins are stored, and you can install them at the touch of a button. Not only that: ComfyUI node set-ups, also known as workflows, can be shared from user to user using drag &#038; drop. If a user shares a workflow with you for which you have not installed the necessary custom nodes, the Manager recognises this immediately and installs them for you automatically.</p>
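<p class="wp-block-paragraph">Put together, installing the Manager boils down to two commands in the Windows Command Prompt (the path is the author&#8217;s example &#8211; adjust it to your own installation):</p>

```shell
REM Change into ComfyUI's custom_nodes folder (example path - adjust to yours).
cd D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes
REM Clone the ComfyUI Manager; after restarting ComfyUI it appears
REM as "Manager" in the sidebar.
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```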





<h6 id="selecting-and-activating-the-stable-diffusion-variant" class="wp-block-heading">Selecting and activating the stable diffusion variant</h6>





<p class="wp-block-paragraph">You have now installed ComfyUI. What you still need is a suitable AI image generation model. Modifications of Stable Diffusion 1.5 and Stable Diffusion XL are currently the means of choice in the open source world &#8211; and you can find a huge selection of all kinds of Stable Diffusion models at civitai.com. </p>





<h6 id="installing-the-model" class="wp-block-heading">Installing the model</h6>





<p class="wp-block-paragraph">Once you have decided on a model, download it from <a href="http://www.civitai.com" target="_blank" rel="noreferrer noopener">www.civitai.com</a> and move the model file to the models/checkpoints folder within your ComfyUI directory. You have now installed ComfyUI and the Manager and set up a model. Now you are ready to go!</p>
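<p class="wp-block-paragraph">As a one-liner, installing a downloaded model is just a file move. The model filename below is a placeholder, and the path assumes the portable install from above:</p>

```shell
REM Move a downloaded checkpoint into ComfyUI's model folder.
REM Filename and path are examples - adjust to your download and install.
move mymodel.safetensors D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\checkpoints\
```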





<h6 id="first-steps-starting-comfyui" class="wp-block-heading">First steps: Starting ComfyUI</h6>





<p class="wp-block-paragraph">To start ComfyUI, all you need to do is double-click on run_nvidia_gpu.bat in the ComfyUI Windows Portable folder. If you do not have an NVIDIA GPU, select run_cpu.bat &#8211; but be prepared for a considerable loss of performance without a GPU. After starting, the console loads all the necessary data. This may take a few minutes, especially when starting for the first time after restarting the PC. Once opened, ComfyUI presents itself in a grey window. On the right-hand side you will find a self-explanatory menu bar. Double-click in the grey area to open the search, which you can use to find nodes.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/04959779-5369-4a00-b427-06c5fe57f539.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="hardware-requirements" class="wp-block-heading">Hardware requirements</h6>





<p class="wp-block-paragraph">In order to use ComfyUI optimally, a powerful workstation is required. If you are not sure what the difference is between a workstation and a conventional PC, then ComfyUI might be above your requirements (psychologically, I mean). For 3D generalists, however, your own PC should already be sufficient. Recommended: at least 32 gigabytes of RAM, ideally 64. Plus a graphics card &#8211; an RTX 3070 or similar, but as is so often the case, bigger is better. And last but not least a processor: a modern Core i7 or similarly powerful. With this equipment, you are well equipped to use ComfyUI in a relaxed manner.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/e2b5b0a8-013a-4273-a482-139ffd414288.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="off-to-the-full-moodboard-layout-and-happy-art-directors" class="wp-block-heading">Going all in! Moodboards, layouts and happy art directors</h6>





<p class="wp-block-paragraph">In large studios, a Google-savvy intern is hired to dig up as much input and reference material as possible on the internet. But what if you can&#8217;t afford a Google-savvy intern and are too busy to google yourself?</p>





<h6 id="comfyui-your-unpaid-intern" class="wp-block-heading">ComfyUI: Your unpaid intern</h6>





<p class="wp-block-paragraph">This is where ComfyUI comes into play. If I&#8217;m looking for inspiration for a very specific topic, I can of course search Pinterest and Google Images. But I can also get inspiration in parallel by writing a prompt that describes exactly what kind of inspiration I&#8217;m looking for.</p>





<h6 id="inspiration-instead-of-plagiarism" class="wp-block-heading">Inspiration instead of plagiarism</h6>





<p class="wp-block-paragraph">And while, realistically speaking, an hour on Google Images has often yielded me just 20 useful references, ComfyUI creates 100 images within 20 seconds. If these are all too literal and one-dimensional, I can play with variables that control randomness and prompt fidelity so that the programme adds unexpected combinations to my initial request.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/28355a1e-af12-4055-865d-24e9c1ac8c41.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/de954404-8605-486e-8eb9-92ba71941e00.jpg&#038;w=3840&#038;q=100"  alt="Blocking versus colour blocking in Comfy UI. Which do you like better?" ><figcaption class="wp-element-caption">Blocking versus colour blocking in Comfy UI. Which do you like better?</figcaption></figure>





<h6 id="blocking" class="wp-block-heading">Blocking</h6>





<p class="wp-block-paragraph">On to blocking. Traditionally, the first animatics consist of grey, dark grey and slightly-greyer cubes and spheres standing in for cars, boats and houses &#8211; sometimes a hard image to decipher, and not only for the art director. This is where ComfyUI can help. How about using proxy assets instead of greyscale cubes? Then the people who have to carry the layout forward and build the real 3D models can see far more easily whether a given cube is meant to be the Venus de Milo or a rock. </p>





<h6 id="installed-what-now" class="wp-block-heading">Installed, what now?</h6>





<p class="wp-block-paragraph">Let&#8217;s create a workflow together. As a &#8220;demonstration task&#8221;, which has nothing at all to do with what films we watched last night, lay out a village in anime style. I&#8217;m imagining bunches of houses standing on rock pillars in the water. Boats sail between them. I&#8217;m sure you&#8217;ve heard these or similar details from your art director before. Can you visualise anything like that? Me neither. That&#8217;s the well-known art director problem, when you talk first and then try to listen to yourself in order to understand what you&#8217;re saying. What we need here is a communication bridge between us and the confused thoughts of an art director. Maybe the art director will even give you a scribble on the back of his last payslip. Which will look something like this.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/a1906c5f-b6f3-4138-bdcb-bc944f8a8708.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="were-building-a-ghibli-style-village" class="wp-block-heading">We&#8217;re building a Ghibli-style village</h6>





<p class="wp-block-paragraph">Here we are with the art director&#8217;s doodle and his idea. Instead of a long back-and-forth discussion to work out what he even means, we can let ComfyUI do the talking &#8211; because we&#8217;ve already got it set up and ready for exactly this moment. You haven&#8217;t? Then it&#8217;s about time. Let&#8217;s get started with our first node tree.</p>





<h6 id="a-first-node-tree" class="wp-block-heading">A first node tree</h6>





<p class="wp-block-paragraph">We start ComfyUI and, once it has loaded, double-click on the canvas and enter &#8220;KSampler&#8221; in the search field to create our first node. The KSampler is the centrepiece of every workflow: it combines our Stable Diffusion model with our prompts and our idea of the image and its dimensions, and generates an image for us. We&#8217;ll go into its settings in more detail in a moment, but first let&#8217;s wire up the node tree. ComfyUI offers context-sensitive suggestions as to which node can be plugged into which input.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/19375079-c470-49ab-9e7a-cc61f0618a3a.jpg&#038;w=3840&#038;q=100"  alt="The KSampler" ><figcaption class="wp-element-caption">The KSampler</figcaption></figure>





<p class="wp-block-paragraph">Let&#8217;s now create a checkpoint loader node by left-clicking on the &#8220;model&#8221; input and dragging the mouse into the grey area and releasing it. A small menu opens in which we get suggestions as to what we could put in here. We take the checkpoint loader Simple. Here you can now load the model of your choice under CKPT name, provided you have saved it correctly as described above.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/26825400-bd44-4b75-8b2b-24519f5f1611.jpg&#038;w=3840&#038;q=100"  alt="Checkpointloader Node" ><figcaption class="wp-element-caption">Checkpointloader Node</figcaption></figure>





<p class="wp-block-paragraph">Next, we want to create the text boxes for the positive and negative prompts. To do this, we drag the mouse from the CLIP output of the checkpoint loader into the empty space.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/05bd252b-c817-486a-b845-d720ad1196d7.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<p class="wp-block-paragraph">Here we select &#8220;CLIP Text Encode&#8221;. We do this twice, connecting one &#8220;conditioning&#8221; output to the &#8220;positive&#8221; input of the KSampler and the other &#8220;conditioning&#8221; output to the &#8220;negative&#8221; input. Next, we need a latent image.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/e5427482-bb7d-4294-a529-efee2b779ea3.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<p class="wp-block-paragraph">What matters for us is that the latent image determines the resolution of the image we want to generate. (Remember: the native resolution of 1.5 models is 512×512, that of XL models 1024×1024.) We set this resolution here. We now drag from the &#8220;latent_image&#8221; input of the KSampler into empty space, release, and select &#8220;Empty Latent Image&#8221;. Then we drag from the KSampler&#8217;s &#8220;LATENT&#8221; output into the empty space and select &#8220;VAE Decode&#8221;.</p>
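<p class="wp-block-paragraph">As a rule of thumb, the VAE used by Stable Diffusion works in a latent space downsampled by a factor of 8 in width and height, with 4 channels &#8211; so a 512×512 image corresponds to a 64×64 latent. A minimal sketch of that arithmetic (the function name is my own, for illustration):</p>

```python
# Sketch: latent tensor shape for Stable Diffusion style VAEs,
# which downsample width and height by 8 and use 4 latent channels.
def latent_shape(width: int, height: int, batch: int = 1) -> tuple:
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return (batch, 4, height // 8, width // 8)

print(latent_shape(512, 512))    # SD 1.5 native -> (1, 4, 64, 64)
print(latent_shape(1024, 1024))  # SDXL native  -> (1, 4, 128, 128)
```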





<p class="wp-block-paragraph">The VAE Decode node converts the KSampler&#8217;s latent data back into visible pixels. We connect its &#8220;vae&#8221; input to the VAE output of the checkpoint loader. Finally, we drag one more node out of the &#8220;IMAGE&#8221; output, this time a Save Image node. We have now created our first working node tree. Let&#8217;s look at the most important settings.</p>
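<p class="wp-block-paragraph">For the curious: the same tree can be written down in ComfyUI&#8217;s API JSON format (the format you get via &#8220;Save (API Format)&#8221; with dev mode enabled). Here is a sketch as a Python dict &#8211; the node ids, checkpoint file name and prompt texts are placeholders of mine:</p>

```python
import json

# Sketch of the node tree above in ComfyUI's API format: each key is a
# node id, and links are written as [source_node_id, output_index].
def build_workflow(positive, negative, seed=0, steps=20, cfg=7.0):
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder name
        "2": {"class_type": "CLIPTextEncode",                 # positive prompt
              "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",                 # negative prompt
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": seed, "steps": steps, "cfg": cfg,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "village"}},
    }

payload = json.dumps({"prompt": build_workflow("anime village", "blurry")})
```

<p class="wp-block-paragraph">Posting such a payload to the local ComfyUI server&#8217;s /prompt endpoint queues the same picture the GUI would produce &#8211; handy once you want to script things.</p>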





<h6 id="the-most-important-settings-positive-and-negative-prompts" class="wp-block-heading">The most important settings &#8211; positive and negative prompts</h6>





<p class="wp-block-paragraph">First the prompts: the text box of the &#8220;positive&#8221; node is filled with what you want to see in your image. If you have loaded an XL model, this box is much easier to fill, because you can describe what you want in natural English. In the negative prompt you write what you do not want. If a generated image is too ugly and you have not written a negative prompt, that could be the reason. So you write &#8220;Beatrix von Storch&#8221; in the negative prompt, for example, and ComfyUI now also knows it should avoid anything associated with that term.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/1fe2b8a1-9483-4c83-bfb5-d3ef577e02ed.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/78e460cf-4267-4a6e-a33d-d797bb2f54b8.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="seed" class="wp-block-heading">Seed</h6>





<p class="wp-block-paragraph">The &#8220;Seed&#8221; setting is used to define a starting point for the random generator, which enables the reproducibility of results. Using the same seed number should theoretically generate the same image. For variations on each run, this value should be changed or set to &#8220;randomise&#8221;.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/94b41aab-54e4-4da5-b76a-e99f814a9f69.jpg&#038;w=3840&#038;q=100"  alt="The KSampler Seed" ><figcaption class="wp-element-caption">The KSampler Seed</figcaption></figure>





<p class="wp-block-paragraph">The &#8220;control_after_generate&#8221; parameter determines what happens after an image is generated. If set to &#8220;randomise&#8221;, a new random value is taken after each generation, which ensures diversity in the results. This is useful for creating a wide range of variations.</p>





<h6 id="steps" class="wp-block-heading">Steps</h6>





<p class="wp-block-paragraph">This defines the number of steps the sampling process runs through. More steps can produce more detailed results, but also increase generation time. A medium number of steps is often a good compromise between richness of detail and speed.</p>





<h6 id="classifier-free-guidance-scale" class="wp-block-heading">Classifier-free guidance scale</h6>





<p class="wp-block-paragraph">The cfg value influences how closely the model follows the text descriptions. A higher value can lead to more accurate but possibly less creative results. Experiment with different values to find the best compromise between accuracy and creativity.</p>





<h6 id="sampler-name" class="wp-block-heading">Sampler Name</h6>





<p class="wp-block-paragraph">sampler_name specifies the sampling algorithm to be used. Different samplers emphasise different properties in the generated image; &#8220;euler&#8221; is a solid default, offering a good balance between speed and quality.</p>





<h6 id="scheduler" class="wp-block-heading">Scheduler</h6>





<p class="wp-block-paragraph">This controls the schedule according to which the sampling steps are performed. &#8220;normal&#8221; is the standard run; other options can influence quality or speed.</p>





<h6 id="denoise" class="wp-block-heading">Denoise</h6>





<p class="wp-block-paragraph">Leave this value at 1 if you are working with text input only and no input images. Yes, images can also be used as input &#8211; we&#8217;ll come to that in a moment! The denoise value becomes relevant when, in addition to our prompt, we steer the generation with an input image: the closer the value is to 1, the less the generated image resembles the input image.</p>





<p class="wp-block-paragraph">If you are not sure what to do, leave the settings as they are in the picture. You may want to play with the sampler and switch between Euler and the others. Now we have our first node tree and know how to use it &#8211; so we are prepared for the conversation with the art director.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/438951c6-b3a2-42b5-9cfe-a185ea02fc8b.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="visual-ideas-through-comfyui" class="wp-block-heading">Visual ideas through ComfyUI</h6>





<p class="wp-block-paragraph">Let&#8217;s type into the positive prompt: &#8220;anime water village houses on rock pillars.&#8221; Then we press &#8220;Queue Prompt&#8221; in the sidebar on the right and the programme starts to generate an image for us. When the Stable Diffusion Model is activated for the first time, loading the model can take one to two minutes &#8211; but it remains loaded until the PC is restarted and then works &#8220;instantly&#8221;. In the menu bar, we can also switch on Autoqueue and click on Change. This means that if we now press Queue prompt, Stable Diffusion will generate one image after the other. With these settings and a halfway decent graphics card, the generation of an image should not take longer than 2 seconds. The programme will now spit out a large number of images very quickly &#8211; on the basis of which you can evaluate with your art director what he actually wants. (Here I show a folder with 200 images that were generated within 2 minutes)</p>





<h6 id="introduction-to-the-colourful-3d-scene" class="wp-block-heading">Introduction to the colourful 3D scene</h6>





<p class="wp-block-paragraph">So, let&#8217;s assume that we have now agreed on a rough idea with the art director. Next, we want to block the scene, but in colour. To do this, we need 3D models of houses, cliffs with houses on them, boats and maybe a few people. I have provided you with the node tree for this on the DP page. Download it and simply drag it into your ComfyUI interface. (is.gd/comfyUIsetup_plugin) It will automatically recognise it and offer you exactly my workflow &#8211; if you want to participate or need a starting point. Your ComfyUI will recognise that plug-ins are required, which you probably haven&#8217;t installed yet, and offer to install them. You agree to this and after a few minutes you are ready to go. And if we now go through the various tools, we have a common basis.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/d0f412cc-6368-4d5f-9a98-1d7919119277.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="from-doodling-to-3d-model" class="wp-block-heading">From doodling to 3D model</h6>





<p class="wp-block-paragraph">This is what the node tree looks like. So that ComfyUI knows we want a house, we write as the positive prompt: &#8220;beautiful studio Ghibli Hero House with a lot of details&#8221;. Now we doodle a very rudimentary &#8220;house of St Nicholas&#8221; so that ComfyUI knows the perspective and format in which to generate our house. The image then goes into a VAE node and serves as the latent image. The denoise value is very important here: if you set it close to 0, say 0.1, ComfyUI will spit the image out pretty much exactly as you drew it. I recommend a value of around 0.8, so that your 3D world doesn&#8217;t consist solely of two-dimensional stick houses.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/d35c6bdc-c21d-43b9-9f47-0275f06a27b4.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/e8ba0f94-648c-4c22-8c5c-bc5aa25daadf.jpg&#038;w=3840&#038;q=100"  alt="The node tree for our little house &#8211; and the Painter node with the input. Yes, painting is not my strong point. But it was quick!" ><figcaption class="wp-element-caption">The node tree for our little house &#8211; and the Painter node with the input. Yes, painting is not my strong point. But it was quick!</figcaption></figure>
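<p class="wp-block-paragraph">In API terms the change from a text-only tree is small: a Load Image plus VAE Encode replace the empty latent, and the KSampler&#8217;s denoise drops below 1. A self-contained sketch &#8211; the node ids, file name and the minimal stand-in nodes are placeholders of mine:</p>

```python
# Minimal stand-ins for the relevant nodes of a text-to-image workflow.
base = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "5": {"class_type": "KSampler",
          "inputs": {"latent_image": ["4", 0], "denoise": 1.0}},
}

def make_img2img(workflow, image_name="doodle.png", denoise=0.8):
    """Rewire a text-to-image graph into image-to-image: encode an input
    image to latent space and lower the KSampler's denoise value."""
    workflow["10"] = {"class_type": "LoadImage",
                      "inputs": {"image": image_name}}
    workflow["11"] = {"class_type": "VAEEncode",
                      "inputs": {"pixels": ["10", 0], "vae": ["1", 2]}}
    workflow["5"]["inputs"]["latent_image"] = ["11", 0]  # doodle steers result
    workflow["5"]["inputs"]["denoise"] = denoise         # 0.8 as recommended
    return workflow

wf = make_img2img(base)
print(wf["5"]["inputs"]["denoise"])  # 0.8
```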





<h6 id="the-path-to-the-detailed-3d-object" class="wp-block-heading">The path to the detailed 3D object</h6>





<p class="wp-block-paragraph">The node tree goes further and scales up our image so that we get a better resolution than 512×512. To create a 3D object from it, we use the TripoSR plug-in. First we let the AI mask out the background of the image using the RemoveBackground node (the AI is quite clever, so it often knows exactly what we want without us touching the settings), then we feed the mask and image into the TripoSR sampler, which spits out a 3D model. You will find it in the ComfyUI output folder. Remember, the model is not textured; the colour is stored in a vertex colour attribute, so the level of detail of the colours depends on the geometry resolution. You can of course use this workflow for all other objects too &#8211; I made all the assets for my rendering this way.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/bd35aeca-0141-482d-a028-a676e0aac302.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/d8b3dc90-9d27-45ac-8578-3a585bbf8bbf.jpg&#038;w=3840&#038;q=100"  alt="This is what the input scribbling looked like for the ships." ><figcaption class="wp-element-caption">This is what the input scribbling looked like for the ships.</figcaption></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/a1061fe3-855e-4af5-88a8-1e2b034807a0.jpg&#038;w=3840&#038;q=100"  alt="And these were the results" ><figcaption class="wp-element-caption">And these were the results</figcaption></figure>





<h6 id="off-to-blender-selecting-and-creating-low-poly-models" class="wp-block-heading">Off to Blender! Selecting and creating low-poly models</h6>





<p class="wp-block-paragraph">You start by selecting the variants you want to load into Blender. I recommend first creating a few dozen low-poly models at a very low geometry resolution of 128 using the auto queue function. This will give you some models that work very well and represent what you want, but also some that barely work at all. That doesn&#8217;t matter: because this technique generates one model per second, with hundreds of objects it&#8217;s no trouble to delete what we don&#8217;t want.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/4ce5e4d6-0482-4ebf-bfac-7500ae924d5c.jpg&#038;w=3840&#038;q=100"  alt="The shaders" ><figcaption class="wp-element-caption">The shaders</figcaption></figure>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/ed5f9561-24b2-447c-a973-b65503e5fda0.jpg&#038;w=3840&#038;q=100"  alt="And a few more additional features" ><figcaption class="wp-element-caption">And a few more additional features</figcaption></figure>





<h6 id="importing-into-blender" class="wp-block-heading">Importing into Blender</h6>





<p class="wp-block-paragraph">As soon as you have a few models you like and want to import into Blender or another 3D programme, search for the corresponding image in the output folder and drag it into your ComfyUI interface: the node tree used to create an image is stored in its metadata. In ComfyUI you can then regenerate the selected model at a geometry resolution of 512 and load it into Blender as an OBJ file. In the Shader Editor, create a material for it and use the &#8220;Color Attribute&#8221; node as the colour input to give the model its colour back in Blender. For a quick colour layout, this is completely sufficient.</p>





<h6 id="colour-resolution-and-texturing" class="wp-block-heading">Colour resolution and texturing</h6>





<p class="wp-block-paragraph">For those of you who want a texture resolution that corresponds to the input image, I have written a small Blender plug-in (link in the description). With this plug-in you can nicely project the texture from the high-resolution input image onto your 3D model with one click, which in some cases is far more impressive than the vertex colour.</p>





<h6 id="model-factory-and-distribution" class="wp-block-heading">Model factory and distribution</h6>





<p class="wp-block-paragraph">Now to the Blender model factory: let&#8217;s generate 100 models of a house via auto queue, select them all at once and load them into Blender. With my tool you can distribute them with one click and assign the vertex colours at the touch of a button. You can then quickly render these houses in the viewport and send the result to your art director, who can draw circles around the little houses he likes. The same procedure works for cliffs, boats, trees and so on. This gives you atmospheric images of how the scene should be assembled, plus models you can place in that scene &#8211; and with a little imagination you can quickly lay out the scene according to your wishes and those of the art director.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/f480fcea-86a9-4243-bc72-017ac38c8aa3.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="conclusion-art-director-to-artist" class="wp-block-heading">Conclusion: Art Director to Artist</h6>





<p class="wp-block-paragraph">ComfyUI can help you overcome communication barriers within a pipeline. We are moving away from grey-shaded bricks towards an initial layout that is not only more colourful than plain grey blocks &#8211; the proxy 3D models can be a real inspiration for the final look of a product. Especially in the initial phase, you can offer the customer a palette of possibilities so that they know much earlier in which direction the look of their product should go. Nasty surprises in the second half of a project, where the customer wants to throw out the look entirely, can be avoided this way. Well, most of the time.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/84c42324-cf8d-4cf3-9c29-5e5e172fd38b.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<p class="wp-block-paragraph">And as you can see, even though we are dealing with a powerful tool here, it still takes a real CG artist at the keys &#8211; an intern would quickly be out of their depth. And anyone who thinks the customer will soon just type everything into a prompt themselves forgets that they would first need to know what they actually want. So our jobs are safe, at least in the medium term.</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/3067e454-bd00-4783-9a23-93d0422244ef.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<h6 id="what-happens-next" class="wp-block-heading">What happens next?</h6>





<p class="wp-block-paragraph">Well, this was the first article in a series that will continue over the next few issues &#8211; there are simply too many ways in which Comfy UI can help an artist in everyday life. Future articles will cover PBR materials at the touch of a button, quartered render times, stylised rendering and one or two other delicacies. There&#8217;s a lot to look forward to!</p>





<figure class="wp-block-image"><img  decoding="async"  src="https://images.creativebase.com/_next/image?url=https://s3.eu-central-1.amazonaws.com/zone.busch.store.image/01a31d23-8215-49bd-a03c-9f5a945b97c0.jpg&#038;w=3840&#038;q=100"  alt="" ></figure>





<p class="wp-block-paragraph">Arne Palluck is a 3D generalist and designs 3D animations for German TV programmes such as &#8220;TerraX&#8221; on ZDF or &#8220;PM Wissen&#8221; on Servus TV. He is also a 3D layout artist for large VFX studios, where he takes care of the camera and layout for feature films. He brags to his daughter&#8217;s friends that he has already worked on Star Wars.</p><p>The post <a href="https://digitalproduction.com/2024/06/09/comfy-ui-ai-for-artists/">Comfy UI – AI for artists!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/arnepalluck/">Arne Palluck</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/image-31.webp?fit=1920%2C1080&#038;quality=72&#038;ssl=1" length="71830" type="image/jpg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/image-31.webp?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2024/09/image-31.webp?fit=1200%2C675&#038;quality=72&#038;ssl=1" width="1200" height="675" />
<post-id xmlns="com-wordpress:feed-additions:1">144475</post-id>	</item>
		<item>
		<title>Latest version of the story development tool released!</title>
		<link>https://digitalproduction.com/2023/01/19/neueste-version-des-story-entwicklungs-tools-erschienen/</link>
		
		<dc:creator><![CDATA[Patrick Poti]]></dc:creator>
		<pubDate>Thu, 19 Jan 2023 14:09:08 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Storyboard]]></category>
		<category><![CDATA[Storyboarding]]></category>
		<category><![CDATA[Storytelling]]></category>
		<category><![CDATA[Update]]></category>
		<guid isPermaLink="false">https://www.digitalproduction.com/?p=114092</guid>

					<description><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/01/Neueste-Version-des-Story-Entwicklungs-Tools-erschienen_Banner.jpg?fit=540%2C314&quality=80&ssl=1" width="540" height="314" title="" alt="" /></div><div><p>Flix 6.5 has been released. The story development software from The Foundry now also supports Apple Silicon processors - and the VFX Reference Platform 2022.</p>
<p>The post <a href="https://digitalproduction.com/2023/01/19/neueste-version-des-story-entwicklungs-tools-erschienen/">Latest version of the story development tool released!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/patrick-poti/">Patrick Poti</a>. </p></div>]]></description>
										<content:encoded><![CDATA[<div style="margin: 5px 5% 10px 5%;"><img src="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/01/Neueste-Version-des-Story-Entwicklungs-Tools-erschienen_Banner.jpg?fit=540%2C314&quality=80&ssl=1" width="540" height="314" title="" alt="" /></div><div>
<p><strong>In nuce:</strong> Software developer The Foundry (Nuke, Katana, among others) has released Flix 6.5, according to an announcement published yesterday, 18 January, on <strong><a href="https://postperspective.com/foundry-flix-6-5-increased-flexibility-pipeline-customization/#utm_source=rss&#038;utm_medium=rss">postperspective.com</a></strong>. Flix is a software for story development.</p>
<p><strong>In toto:</strong> The Foundry claims to offer its customers more flexibility and customised workflows with the release of version 6.5.</p>
<p><strong>Flexible Group and User Management | What&#8217;s New in Flix 6.5</strong><br />
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/qdkuRWzrxmk?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe></p>
<p><strong>Flix 6.5 | Features Overview</strong><br />
<iframe class="youtube-player" width="1200" height="675" src="https://www.youtube.com/embed/0T6lQyJtAwU?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe></p>
<ul>
<li><strong>Support for the VFX Reference Platform 2022</strong> (as with The Foundry&#8217;s release of Katana 6.0; Digital Production reported on <strong><a href="https://www.digitalproduction.com/2022/12/23/the-foundry-veroeffentlicht-katana-6-0/">23.12.2022</a></strong>)</li>
<li><strong>Apple Silicon processors:</strong> The Flix client now supports them natively.</li>
<li><strong>New authorisation system:</strong> Based on groups and roles, it is intended to give teams more control over story content and to let admin users access information, and share it with other team members, more securely.</li>
<li><strong>Extended contact sheet functionality:</strong> This is intended to streamline reviews and feedback; users also gain access to various templates that can be customised to fit their pipeline.</li>
<li><strong>New versioning system:</strong> According to the article on postperspective.com, this should allow users to integrate Flix at any point in a film production.</li>
<li><strong>New linking system for media:</strong> Panels can be linked automatically within Storyboard Pro, for example.</li>
<li><strong>Support for managed source files in Storyboard Pro:</strong> .sbpz projects can now be backed up to the server and stored as source files; according to the announcement on postperspective.com, this eliminates the need to remember file names or storage locations.</li>
<li><strong>Enhanced extensibility:</strong> This is achieved through simplified access to the programming interfaces and comprehensive documentation. In addition, a new webhooks system enables users to automate repetitive tasks via user-definable, event-based triggers.</li>
</ul>
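<p>The webhook mechanism mentioned in the last point can be pictured as an event dispatcher: a webhook delivers a payload naming an event, and user-defined handlers registered for that event run automatically. The following minimal Python sketch only illustrates that general pattern; the event name <code>panel.published</code> and the payload fields are purely hypothetical and are not Flix&#8217;s documented webhook API.</p>

```python
import json

# Registry mapping event names to user-defined handler functions.
HANDLERS = {}

def on(event_name):
    """Decorator: register a handler for a named event."""
    def register(fn):
        HANDLERS.setdefault(event_name, []).append(fn)
        return fn
    return register

def dispatch(raw_body: str):
    """Parse a webhook payload and run every handler registered
    for its event type; returns the handlers' results."""
    payload = json.loads(raw_body)
    event = payload.get("event", "unknown")
    return [fn(payload) for fn in HANDLERS.get(event, [])]

# Hypothetical event and task -- not Flix's real schema.
@on("panel.published")
def export_thumbnail(payload):
    # A repetitive task one might automate on this trigger.
    return f"exporting thumbnail for panel {payload['panel_id']}"
```

<p>Calling <code>dispatch('{"event": "panel.published", "panel_id": 42}')</code> would then run every handler registered for that event; payloads with no registered handlers simply produce no work.</p>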
<p><strong>Click further:</strong> To the official announcement on <strong><a href="https://www.foundry.com/news-and-awards/flix-6-5-release-with-increased-pipeline-flexibility-and-customization">foundry.com</a></strong>.</p><p>The post <a href="https://digitalproduction.com/2023/01/19/neueste-version-des-story-entwicklungs-tools-erschienen/">Latest version of the story development tool released!</a> first appeared on <a href="https://digitalproduction.com">DIGITAL PRODUCTION</a> and was written by <a href="https://digitalproduction.com/author/patrick-poti/">Patrick Poti</a>. </p></div>]]></content:encoded>
					
		
		
		<enclosure url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/01/Neueste-Version-des-Story-Entwicklungs-Tools-erschienen_Banner.jpg?fit=540%2C314&#038;quality=80&#038;ssl=1" length="36720" type="image/jpeg" />
<media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/01/Neueste-Version-des-Story-Entwicklungs-Tools-erschienen_Banner.jpg?fit=540%2C314&#038;quality=80&#038;ssl=1" width="540" height="314" medium="image" type="image/jpeg">
	<media:copyright>DIGITAL PRODUCTION</media:copyright>
	<media:title></media:title>
	<media:description type="html"><![CDATA[]]></media:description>
</media:content>
<media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://i0.wp.com/digitalproduction.com/wp-content/uploads/2023/01/Neueste-Version-des-Story-Entwicklungs-Tools-erschienen_Banner.jpg?fit=540%2C314&#038;quality=80&#038;ssl=1" width="540" height="314" />
<post-id xmlns="com-wordpress:feed-additions:1">114092</post-id>	</item>
	</channel>
</rss>
