The Lost Bus is a 2025 survival-drama directed by Paul Greengrass for Blumhouse Productions in association with Comet Pictures and Apple Original Films. The film is based on the non-fiction book Paradise: One Town’s Struggle to Survive an American Wildfire by journalist Lizzie Johnson. Set against the 2018 Camp Fire in Paradise, California, it follows a school-bus driver and a teacher who fight to guide 22 children to safety through an encroaching inferno. Combining Greengrass’s documentary-style direction with large-scale visual effects and environmental reconstruction by RISE FX, the film depicts one of the deadliest wildfires in recent history with stark realism.
Supervising the inferno: Oliver Schulz (IMDB | LinkedIn) is a senior Visual Effects Supervisor at RISE FX, the Berlin-based VFX studio. Over more than a decade at RISE he has supervised visual effects on major international productions including Fantastic Beasts: The Secrets of Dumbledore, Blue Beetle and Megalopolis, among many others.
His background spans concept art, digital matte painting and 3D environments, skills that helped him take the creative lead on blockbuster-scale, environment- and FX-heavy shows. In this interview he reveals how he and his team at RISE tackled the challenge of recreating a burning landscape for The Lost Bus, combining procedural geography, wind-driven vegetation, deep-rendered volumetrics and machine-learning techniques for depth integration to bring the inferno to life on screen.
DP: How did you get onto the Lost Bus?
Oliver Schulz: I came on board after wrapping up on Megalopolis and jumped straight into the very first meeting with Charlie Noble and Gavin Round, the production VFX Supervisor and VFX Producer. The project had already been awarded at that time, so we went straight into talking about the sequences and the scope of RISE's portion of the work.
Luckily, or rather tragically, this was a real event, so in terms of look there was plenty of reference and documentation of that day. Our first meeting was spent looking through a lot of real-world footage from all available sources. Charlie had been prepping reference reels from the very beginning, so we could hone in on specific references for each portion of the work because he had them for all of it!
DP: Roughly how big was the RISE team on The Lost Bus, and how long did you spend from first build to final comp?
Oliver Schulz: We started with a very small core team in May 2024 and delivered the last shots at the beginning of 2025. I think around 50-60 people worked on the show in total, ramping up and down based on specific project needs such as temp deliveries.

DP: The antagonist of the movie is the geography of a very particular area, and the fact that it is on fire – how did you make sure that it was recognizably that specific part of the world?
Oliver Schulz: We started with real-world data derived from elevation models, which gave us a pretty good grounding in reality. We also got lidar scans for very specific locations like the Pulga bridges, which was invaluable as that is usually something you don't get from publicly available sources. We spent quite a bit of time building a solid foundation for all key locations, which meant everything had a geometric base, right down to the very last mountain you see on the horizon.
Our Lidar Supervisor David Salamon was instrumental in setting up this base. He used map imagery to give a rough base color to all of those individual geometries, which later served as a rough guide in layout and surfacing for the distribution of materials and assets. One has to keep in mind that most of the data was post-2018, so vegetation, for instance, had to be recreated from photographic references shot before the fire. We tried to stay as true as possible to real-world geography, but later on things of course had to be changed for storytelling reasons.
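To make the idea concrete, here is a minimal sketch, assuming the elevation data is already loaded as a 2D array of heights. It only illustrates the principle of turning an elevation model into a geometric base and is not RISE's actual tooling.

```python
# Minimal sketch (not RISE's pipeline): turning a digital elevation model
# into a vertex grid that can be meshed in a DCC. Assumes the DEM is already
# loaded as a 2D numpy array of heights in metres.
import numpy as np

def dem_to_points(heights: np.ndarray, cell_size: float) -> np.ndarray:
    """Return an (N, 3) array of world-space points from a heightfield."""
    rows, cols = heights.shape
    xs, zs = np.meshgrid(np.arange(cols) * cell_size,
                         np.arange(rows) * cell_size)
    # y-up convention: elevation becomes the Y component
    return np.stack([xs.ravel(), heights.ravel(), zs.ravel()], axis=-1)

# Example with a synthetic 4x4 DEM sampled every 30 m
dem = np.random.rand(4, 4) * 100.0
points = dem_to_points(dem, cell_size=30.0)
print(points.shape)  # (16, 3)
```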

DP: When merging New Mexico plates into your California canyon builds, how did you maintain scale and geological continuity?
Oliver Schulz: Production didn't have access to the original locations, or in the case of some very sketchy roads chose not to shoot there for safety reasons, so they scouted across the US and picked some New Mexico locations as stand-ins for a few of our sequences. The most prominent was the Pulga road for sure. In the film, the first responding firefighters trying to get to the origin of the Camp Fire first catch sight of it from the top of the Pulga highway bridge crossing the Feather River Canyon.
Without any better option they decide their best shot is to try to reach the fire by following a very narrow road along the slope of the canyon. All shots on the Pulga bridge were shot on a studio backlot and feature full CG environments, including the foreground bridge. The narrow path, however, was all shot at the New Mexico location, with two big fire engines driving a slightly wider road.
In production that meant we replaced most of the visible environment, for a couple of reasons. First, of course, it needed the right road width and the correct canyon in the background. Second, we needed very windy vegetation everywhere. Third, even in cases where all of that worked in camera, which was pretty rare, we still needed to put FX elements into every shot: dust, debris, smoke and later embers as well. With those guidelines in place, probably 90% of the shots became full CG exteriors, keeping only small bits of photography for the fire engines and some road pieces.
Once all of those went into layout, we made sure the shots kept a sensible progression, meaning the firefighters travel along the road in cut order. The topography of the shooting location, though, was quite different from the story's progression along the Pulga road, and so was the framing in camera when pointing at the fire from the fire-engine interiors.
That of course meant that, as good and real as our base was, it had to be heavily augmented to make sense with the storytelling and framing choices. Most shots feature the correct background canyon, but the midground is totally made up to allow a good view of the fire's origin. All of this had to remain understandable even with very frenetic handheld camera moves.

DP: Let’s talk vegetation: How much botanical creative freedom did you have?
Oliver Schulz: Vegetation was a big part of the environment and one of the reasons for the rapid spread of the fire. All VFX vendors had to tackle it in one way or another, which meant everyone contributed to the research into which plant goes where. The foundation was once again the research and material collection from Charlie and his team.
We focused on the most common species found in this region of California and made sure the level of dryness and the distribution made sense. So in that sense there was not much freedom here, as everyone tried to make this as real as possible. For the build we used the most common ground, which is SpeedTree, with some augmentation done in Houdini. Some of the assets were also shared by other vendors and just needed ingestion and rigging for FX.
DP: You mentioned building a hierarchical “ecosystem” in Houdini. How modular was this system, and how much hand-authoring did artists still need to do per shot?
Oliver Schulz: This was something we invested a bit of time in at the very beginning, overseen by CG Supervisor David Schulz, Layout Lead Mareike Loges and Senior Layout Artist Björn Markgraf. The core idea is nothing new: hierarchical just means you start from the biggest elements in your kit and then go smaller and smaller based on the previous distribution of elements. The first step is to either scatter or hand-place the big trees, for example, which leaves you with a certain distribution.
Based on this, the system places smaller entities like younger trees, seedlings and smaller shrubs and bushes around or between the big trees. This distribution is driven by simple rules like distance or terrain steepness. In the case of the Pulga road we divided it into two categories, mountains and roads, both with similar procedures. We would always start with the rough blocking geometry matching either scan data, elevation data or sometimes something just made up. From there we would generate the base coverage of rock cliffs, which would mostly hold out trees in those areas. Following this we created the trees and bigger vegetation, which would determine the ground coverage of rocks versus more pebbly ground.
Roads were pretty similar but less complex, as they mostly feature small stones. Here again we used some manually created maps to drive the distribution of small versus bigger pebbles, which mostly accumulate at the sides, for example.

The toolset itself worked pretty well, and as it was applicable as a template we could have a fully laid-out shot in a day. Shot-specific adjustments were applied to almost every shot though, mostly for continuity, visibility or art-direction purposes.
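The hierarchical approach Schulz describes can be sketched in a few lines; the rules, numbers and the slope lookup below are invented for illustration and are not the RISE toolset.

```python
# Illustrative sketch of the hierarchical idea described above (not the actual
# RISE toolset): scatter the biggest elements first, then place smaller ones
# using simple rules such as distance to existing trees and terrain steepness.
import numpy as np

rng = np.random.default_rng(7)

def slope(p):
    # Placeholder steepness lookup; in production this would sample the terrain.
    return abs(np.sin(p[0] * 0.1))            # 0 = flat, 1 = very steep

# 1) Big trees: random scatter over a 100 x 100 m patch
trees = rng.uniform(0.0, 100.0, size=(40, 2))

# 2) Shrubs: denser scatter, kept only if far enough from a tree and not too steep
candidates = rng.uniform(0.0, 100.0, size=(400, 2))
min_dist, max_slope = 3.0, 0.6
keep = []
for c in candidates:
    if np.min(np.linalg.norm(trees - c, axis=1)) < min_dist:
        continue                               # too close to a big tree
    if slope(c) > max_slope:
        continue                               # too steep for this species
    keep.append(c)
shrubs = np.array(keep)
print(len(trees), "trees,", len(shrubs), "shrubs")
```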
DP: Vegetation, environment, and FX were all dependent on one another. How did you keep versioning sane between departments?
Oliver Schulz: That was a big topic indeed, and it is only possible with two things: a rigorous approval system and a good pipeline that helps you track those approved layouts. We rely on our USD pipeline to do exactly that for us; it makes it somewhat easy (I'm sure layout and production will hit me for this) to track department versioning. For each layout update we'd always get automatic QC renders that run through our in-house "Slapstick" system, our automatic comp engine.
When layout made a specific change, it would be published either on a shot level or on a sequence/environment level. This triggers a QC render from the shot camera of the affected shots. Once the rendering is done, a post job combines it with the prepped plate in Nuke and runs another render job that gives you the layout reviewable, which is then checked and can be approved and pushed into the pipeline from RV. That layout then becomes available to the FX department, which would run all the needed simulations and hand off another QC reviewable for approval. Without those systems in place it would have been a nightmare to stay on top of all those versions!
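The publish-to-reviewable chain might look roughly like the following sketch; every object and function name here is hypothetical, purely to show the order of operations Schulz describes, not RISE's pipeline API.

```python
# Hypothetical, heavily simplified illustration of the chain described above:
# a layout publish triggers a QC render per affected shot, with an auto-comp
# ("Slapstick"-style) job chained on as a post-process. All names are invented.
def on_layout_publish(publish, farm):
    for shot in publish.affected_shots():
        qc = farm.submit_render(
            scene=publish.usd_stage,
            camera=shot.shotcam,
            preset="layout_qc",
        )
        # Chain the auto-comp job: combine the QC render with the prepped
        # plate and write out a reviewable that can be approved in RV.
        farm.submit_postjob(
            depends_on=qc,
            task="autocomp_reviewable",
            inputs={"render": qc.output, "plate": shot.prepped_plate},
        )
```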

DP: You said the layout department drove wind direction and strength instead of FX. How did that change your creative workflow?
Oliver Schulz: Yes, that's true, and it was one of the early conversations we had internally to determine the approach all environment builds would share. It was a practical decision based on two factors: we would simulate all vegetation at the asset level at different wind speeds for efficiency, and we wanted to keep iteration loops to a minimum.
This meant I wanted to look at layout versions with moving vegetation, because the strong directional wind made it necessary to consider that already while laying out trees. As the direction is clearly visible, you can't rely on spinning a tree freely 360 degrees in Y to create variation, since the direction is "baked in", so you need to see it moving in order to judge whether an environment looks good!
The second reason is that FX needed to take care of vegetation simulation only once, and once approved, never needed to come back to it. This system worked really well and was accessible to the layout artists down to the single blade of grass, meaning you could really art-direct where and how much specific things should move.
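In spirit, the layout side only has to choose between pre-baked wind variants, something like this hedged sketch (asset naming, cache format and speeds are invented):

```python
# Sketch of the idea (assumed implementation detail, not RISE's code): each
# vegetation asset is pre-simulated at a few wind speeds, and layout simply
# picks the nearest cached variant per instance instead of re-simulating.
CACHED_WIND_SPEEDS = [2.0, 6.0, 12.0, 20.0]    # m/s variants baked per asset

def pick_wind_cache(asset_name: str, layout_wind_speed: float) -> str:
    nearest = min(CACHED_WIND_SPEEDS, key=lambda s: abs(s - layout_wind_speed))
    return f"{asset_name}_wind{int(nearest):02d}.bgeo.sc"

print(pick_wind_cache("ponderosa_a", 14.5))    # ponderosa_a_wind12.bgeo.sc
```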
DP: You divided smoke into “hero” and “residual” categories. How did you manage density and readability without losing visual clarity?
Oliver Schulz: These two categories were simply based on the fact that we had to deal with smoke in almost every shot. The ever-present residual smoke needed to inherit a direction, needed to be art-directable and also needed to render as fast as possible. It's pretty much the equivalent of atmospheric perspective in a wildfire scenario. Our FX Supervisor Akin Göcmenli came up with a system of instanced, pre-simulated caches that could sometimes consist of thousands of individual containers.
We started by doing smoke simulations with a constant wind direction and speed that had a hidden emission source and diffused fairly quickly. That gave us a very soft falloff towards the edges of the simulation grid, which made these simulations perfect for overlapping so they read as one big, single mass of smoke. On top of that, thanks to those properties, it was quite easy to remove single containers and punch holes into the wall of smoke for visibility. We also invested a bit of time in developing shaders and render efficiencies to cut down the notoriously long volumetric render times for this element.

Hero smoke was the category that either had a visible emission source in frame or simply carried a hero storytelling element. These were usually shot or sequence simulations, as they were mostly much denser and most of the time also much closer to camera. We also spent a good amount of time matching shading and simulation to real-world references. The secret to readability also lies in relentless QC-ing of outputs, to make sure that once you kick off the expensive lighting renders you are as certain as possible that all elements are going to work.
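The residual-smoke instancing Schulz describes could be sketched like this; the grid, jitter values and cache names are invented, but the principle of overlapping soft-edged containers and dropping some for visibility is the one he outlines.

```python
# Rough sketch of the instancing idea (not the actual setup): a grid of
# pre-simulated smoke containers with jitter and random time offsets, where
# individual containers can be dropped to punch visibility holes.
import numpy as np

rng = np.random.default_rng(3)

def smoke_instances(nx, nz, spacing, holes=()):
    instances = []
    for i in range(nx):
        for j in range(nz):
            if (i, j) in holes:                # punch a hole for visibility
                continue
            pos = np.array([i * spacing, 0.0, j * spacing])
            pos[[0, 2]] += rng.uniform(-0.3, 0.3, 2) * spacing   # jitter
            instances.append({
                "position": pos,
                "cache": f"residual_smoke_v{rng.integers(1, 9):03d}.vdb",
                "frame_offset": int(rng.integers(0, 200)),
            })
    return instances

wall = smoke_instances(8, 3, spacing=25.0, holes={(4, 1)})
print(len(wall), "containers")
```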
DP: The ember work looks incredibly detailed. How did you simulate believable motion in strong winds without visible repetition or looping patterns?
Oliver Schulz: Embers were a big part of the equation, so by now you might guess… yes, we spent a bit of time in asset prep to build some solid foundations. The first thing is of course the driving factor for all FX aspects: the wind.
FX developed hero wind forces that we used to simulate all elements with. A good amount of chaos and variance is key to avoid readable patterns in any simulation. Another factor is collision: embers behave a certain way when they collide, and that's what we tried to replicate. The ground also plays a big role, especially with the heavier ember clumps that slide over it.
Reality is unbeaten when it comes to little quirks and anomalies, especially for something as complex as this. Since no one can have a ground as detailed as the real world, we also sometimes used collision geometry with slightly more displacement in order to get more detailed collisions.
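A toy illustration of layering variance on top of a shared hero wind, so no two embers trace the same path; the values are arbitrary and this is not the production setup.

```python
# Simplified sketch of adding per-ember variance to a shared hero wind.
import numpy as np

rng = np.random.default_rng(11)

def ember_velocity(base_wind, t, seed_per_ember):
    # Gusting: modulate wind strength over time, offset per ember
    gust = 1.0 + 0.4 * np.sin(t * 2.3 + seed_per_ember * 12.9)
    # Turbulent kick: small random acceleration unique to this sample
    kick = rng.normal(0.0, 0.5, size=3)
    return np.asarray(base_wind) * gust + kick

print(ember_velocity([8.0, 1.0, 0.0], t=1.2, seed_per_ember=0.37))
```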
DP: Lighting and volumetrics are natural enemies. How did you maintain physically plausible lighting through that much smoke and fire?
Oliver Schulz: That was one of the biggest questions going into this project, considering that what was shot on set sometimes had very little to do with what ended up on screen, especially in terms of atmospherics. The best base for something to look real is to match the real thing. We did so in our asset phase and made sure our shaders and light rigs were physically plausible, especially the ones used only to develop assets.
We would match grey balls and reference Macbeth charts to make sure scene lighting was correct in terms of light balancing. From there we developed shaders for all aspects. One of the most common issues I see is that volume and surface renders aren't look-deved in conjunction. What you then end up having to do is grade the surface and volume renders differently, which very quickly leads to unrealistic results because there is no ground truth you can come back to.

We tried to make sure all our shaders worked with each other so we had exactly this common base. When you're dealing with dozens of light sources and those issues on top, that's definitely a position you don't want to be in when time is running out… All volumetrics have very different properties, and one of the biggest is how they scatter light. Going from back to front scattering can take a volume from being ultra-bright to consuming all the lighting energy and going pitch black. So once you've matched the real thing, use those tools wisely to deviate from there and support the story.
We tried to always start with a balancing pass, usually still done in lighting. This goes to comp as the foundation to do all the fine-tuning with. There was still a lot of tuning left for comp, and we also needed to break reality more than once to make sure that what you wanted to read in a frame remained readable once tons of smoke and fire went in front of it. Sometimes we had to go as far as using the deep data to pull things in and out of the smoke to make them visible. Still, the most valuable tool you have is the artist's eye to determine the sweet spot between good and real.
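The back-to-front scattering behaviour Schulz mentions is typically governed by a phase function; a common choice is Henyey-Greenstein, shown below purely to illustrate how the anisotropy parameter swings a volume between bright forward scattering and dark back scattering. The production shaders are not reproduced here.

```python
# Henyey-Greenstein phase function: probability density of scattering at angle
# theta for anisotropy g (g > 0 forward scattering, g < 0 back scattering).
import math

def henyey_greenstein(cos_theta: float, g: float) -> float:
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# Forward scattering (g = 0.8) vs back scattering (g = -0.8) for light
# continuing straight through the smoke towards camera (cos_theta = 1):
print(henyey_greenstein(1.0, 0.8), henyey_greenstein(1.0, -0.8))
```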
DP: You mentioned deep rendering bottlenecks, like OIIO running out of patience with too many AoVs. What exactly went wrong first?
Oliver Schulz: Haha, yes, that was one of the issues that came up when switching to full deep for our renders. It means you have each component of each light group rendered with deep data so it can be put back together in comp. That resulted in a lot of channels, which apparently were too much for OIIO to handle. Thank god that was fixed, otherwise I wouldn't be writing this story now 😉

DP: Rendering holdouts took up to an hour per frame. Did you develop any automation or optimisation to make deep rendering less painful?
Oliver Schulz: Indeed, render times for deep holdouts were quite painful and there was no real way to speed them up. With so many volumetric elements you need to deep-hold-out everything against everything to make sure it's accurate. If you multiply that by the number of separately rendered elements, by the number of light AOVs and by the number of components, you end up with a staggering number of renders.
Plus, in the end you need to denoise all frames, so the best solution was to plan delivery dates as well as possible to leave time for all those thousands of pre-renders to run on the farm. Still, our Compositing Supervisor Oliver Hohn and Lead Nicolas Burgers had some longer evenings making sure all renders were there the next morning to be picked up by the compositing artists.
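A quick back-of-the-envelope calculation shows why the render count explodes; the figures are invented, only the multiplication mirrors what Schulz describes.

```python
# Back-of-the-envelope illustration of the holdout render count (numbers
# invented for the example):
elements   = 12   # separate volumetric/FX elements in a shot
light_aovs = 8    # per-light AOVs
components = 4    # e.g. direct/indirect diffuse, scatter, emission
renders_per_shot = elements * light_aovs * components
print(renders_per_shot)   # 384 deep renders before denoising even starts
```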
DP: You used machine-learning depth generators to create deep data from plates. What tools powered that, and how reliable were the results?
Oliver Schulz: We started testing tools quite early in anticipation of very challenging compositing work. DepthAnything v2 was what we ended up using as a default pre-render pass. The results were a mixed bag considering the wide range of plates we worked with, although it proved valuable to have. Comp remapped the relative values of the depth passes to absolute values from deep data, with the help of lidar scans or renders, and was able to achieve some good integration, especially with more wispy types of smoke.
For denser smoke and more accurate holdouts, especially for actors, we still needed to rely on a lot of manual roto for good integration. The AI passes proved pretty useful for fast temp work though, as you get something going in no time. The issues were mostly the lack of good temporal stability and precision.
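One simple way to anchor a relative ML depth pass to metric distances, assuming an affine relationship and a couple of reference samples taken from lidar or CG renders; the actual comp setup may well have been more sophisticated.

```python
# Remap a relative ML depth pass to absolute depth using known reference
# samples (illustrative only; assumes an affine relationship).
import numpy as np

def remap_depth(relative: np.ndarray, samples):
    """samples: [(relative_value, metric_depth), ...] with at least two entries."""
    rel = np.array([s[0] for s in samples])
    met = np.array([s[1] for s in samples])
    scale, offset = np.polyfit(rel, met, 1)    # least-squares affine fit
    return relative * scale + offset

ml_pass = np.array([[0.1, 0.4], [0.7, 0.95]])
print(remap_depth(ml_pass, samples=[(0.1, 5.0), (0.95, 120.0)]))
```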
DP: Before deploying new tech like ML depth generators or procedural ecosystems, how do you test them safely inside production?
Oliver Schulz: We implemented those during production directly on our project infrastructure, so they were developed, tested and used simultaneously.
DP: Were there any spectacular ML depth map failures, like smoke reading as solid or background cliffs collapsing?
Oliver Schulz: Depth popping or lost shapes were the most common ones. But as none of these passes were used without correction in comp I’m sure I haven’t seen all of them!
DP: With so many volumetric layers, how did compositors manage complexity without drowning in passes?
Oliver Schulz: We have standard workflows for loading CG renders into Nuke which provide a basic level of organization. However, the more elements you have, the bigger the comp scripts, and we had some big ones for sure!
DP: Fire colouration is tricky. Did you use any spectral rendering or rely purely on LUTs to match on-set lighting and heat distortion?
Oliver Schulz: That is very true… Luckily, production tried to shoot everything with practical fire, which provided a good level of reference in camera. If you then try to render as physically plausibly as possible and have something in frame you can match exposure to, you are already halfway there. We didn't use any spectral rendering here and rendered everything through Houdini's Karma in RGB.
DP: You switched to full motion-blur sampling for embers instead of faked streaks. How much did that impact render time, and was it worth it?
Oliver Schulz: Oh, that was worth every minute of render time… Real motion blur for an element which is mainly visible as motion blur is a good investment. Plus, the render times weren't actually that bad and only took a couple of minutes, as you aren't dealing with expensive shading on top. The biggest benefit is getting nice, curved and very interesting blurs, especially with collisions. The trick is to only invest time where it's needed and render other elements with less costly settings. Deep compositing allows for that, as you are not bound to any holdouts and can combine differently rendered motion blur without any problems.
DP: How crucial was RiseFlow for distributing simulations and maintaining consistency across all sequences?
Oliver Schulz: We started implementing RiseFlow at the very beginning once we had our initial workflow for distributing elements figured out. The development was done by our Head of Pipeline Paul Schweizer and the implementation on the show was spearheaded by Jonas Sorgenfrei.
It actually is a very versatile framework that we use for a variety of tasks here at RISE. It's a modular, node-based system that can take arbitrary inputs and execute them in a chained workflow. FX built templates for various scenarios with exposed variables like wind direction, speed, inputs for collision geometry and so on. These could then be varied per shot and sent to the farm for execution. Once all those sims were done, QC renders were submitted to the farm and, when completed, auto-comped in Slapstick.
That meant one artist could make changes to a big number of shots by adjusting the template and then re-simming and rendering them overnight. All render elements were deep-comped with our deep plate workflow and reviewed the next morning. This allowed for rapid adjustments and turnarounds, which was a crucial aspect of this fast-paced production.
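Conceptually, the per-shot template overrides work like the snippet below; shot names, parameters and the farm interface are all invented stand-ins, not RiseFlow's real API.

```python
# Invented illustration of the template idea: a shared FX template with
# exposed variables, overridden per shot and submitted to the farm in a loop.
FX_TEMPLATE = {
    "wind_direction": (1.0, 0.0, 0.2),
    "wind_speed": 12.0,
    "collision_geo": "/env/pulga_road/collision.usd",
}

SHOT_OVERRIDES = {
    "fir_0420": {"wind_speed": 18.0},
    "fir_0450": {"wind_direction": (0.8, 0.0, 0.6)},
}

def submit_all(farm):
    for shot, overrides in SHOT_OVERRIDES.items():
        params = {**FX_TEMPLATE, **overrides}
        sim = farm.submit(task="fx_sim", shot=shot, params=params)
        qc = farm.submit(task="qc_render", shot=shot, depends_on=sim)
        farm.submit(task="autocomp_reviewable", shot=shot, depends_on=qc)
```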
DP: How did RiseFlow and Slapstick communicate between departments for reviews and dailies?
Oliver Schulz: RiseFlow and Slapstick are really two different things. The point where they communicate is that RiseFlow might trigger a farm job where Slapstick is hooked in as a post-process that gets triggered after the render completes. Slapstick itself is a modular, node-based system implemented in Nuke that allows a generalized template to be created. Its inputs can take, for instance, all the general elements comp might use to layer a shot, like main plates, rotos, color corrections, lens distortions and so on, and comp them together. We use Slapstick in all departments to create automatic reviewables, for asset builds like turntables with reference images, lighting slaps, FX slaps and so on.
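As a toy example of what such an auto-comp template does inside Nuke (this is not Slapstick's code, just the general shape of the idea):

```python
# Toy auto-comp template, to be run inside Nuke: read the prepped plate and
# the CG render, merge them, and render a reviewable.
import nuke

def build_reviewable(plate_path, render_path, out_path, first, last):
    plate = nuke.nodes.Read(file=plate_path)
    cg = nuke.nodes.Read(file=render_path)
    over = nuke.nodes.Merge2(operation="over")
    over.setInput(0, plate)   # B input: plate
    over.setInput(1, cg)      # A input: CG render
    write = nuke.nodes.Write(file=out_path)
    write.setInput(0, over)
    nuke.execute(write, first, last)
```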
DP: You’ve called The Lost Bus the toughest matchmove job you’ve ever seen. How did you solve the handheld, wet, low-light camera challenge?
Oliver Schulz: That was a tough one indeed. Solving it really just comes down to the excellence of the individual artists who created those matchmoves, so there is no magic recipe for getting through so many challenging ones…
DP: Greengrass loves long, continuous takes. How did you manage to iterate and render efficiently on such heavy, unbroken shots?
Oliver Schulz: I guess it really comes down to choosing your battles wisely… Invest in a good foundation early on and make sure to be as precise as possible in the prep phase. Once the show is running and you are in full delivery mode, there is no time to go back and redevelop anything.
Optimizing renders as much as possible, and then relying on everything that was set up in the beginning, is key so you don't have to think about accuracy anymore when you are trying to finish the shots. We did this and it really paid off; even with a couple of long shots with lots of elements to render, we never had to worry about a render not finishing in time.
There were some challenging shots for all departments involved, but again the prep phase paid off and we managed to deliver everything on time. It's really a situation in which the production team led by Michelle Cullen and Production Manager Androniki Nikolaou outdid themselves, planning and scheduling every milestone in production to make sure we had what we needed to finish shots in time. Of course that also means adjusting and revising the schedule each and every day based on client comments and changes… It's a tough job to make sure the whole production runs like a well-oiled machine!
DP: Deep compositing only works if all layers align perfectly in space. Did you use diagnostic tools or pure visual QC to verify deep accuracy?
Oliver Schulz: The good thing about deep is that it's pretty accurate as long as the sampling increments in depth are small enough for a given element. It's really a game of keeping error thresholds low enough that you don't notice them. The balance is precision versus file size. Surface renders aren't really an issue, as you are only dealing with the front and back sides of hard-surface objects.
The fun starts with volumetric elements, and that is where you need to tweak the settings a bit to make sure you don't end up with 5 GB per frame in volumetric renders. Frames could still grow to well over 1 GB on bigger shots with all elements included, so we needed to do some rough calculations beforehand to make sure we weren't running out of allocated server space.
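A rough capacity estimate of the kind Schulz mentions might look like this, with invented figures:

```python
# Rough storage estimate for deep renders (all figures invented):
frames_per_shot = 400
gb_per_frame    = 1.2        # deep frame size with all elements included
shots_in_flight = 25
total_tb = frames_per_shot * gb_per_frame * shots_in_flight / 1024.0
print(f"~{total_tb:.1f} TB of deep renders on the server at once")
```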
DP: How did you maintain consistency for fire behaviour across sequences? Was there a single reference look, or did it evolve shot by shot?
Oliver Schulz: This was obviously one of the big topics, as the quality of the fire not only needed to remain consistent but also had to serve the story in how it behaves. When you look at fires in reality, they all have very different qualities depending on a ton of external factors: what is burning, where it burns, the actual heat it produces, the influence of the wind and so on. So yes, it's crucial to pick a reference and not try to incorporate them all. The initial tactic we used was to create asset-based fires with all components.
The two types that production defined as assets were "spotfires" and "forestfires": pretty generalized descriptions, divided up mostly by scale. So we took those two types into asset development and created a little scene with them, the forest fire in the background and the spot fires in the foreground. This scene was actually the same one we used to look-dev all assets in, so we had a common ground for all assets, and the FX ones were no different.
We picked a general reference we felt worked well for each category and supplemented it with references that production had shot on set. The shot elements, though, were mostly run on gas, so they wouldn't really emit any smoke, but they were a good general reference in terms of breakup and edge quality. They also came in native resolution, whereas most of the actual references were cell-phone captures of much poorer quality.
With all those references in place we started matching the fires, again at different wind speeds. We also tried to implement all the little details, especially on the bigger forest fire, like flame bursts on dry wood, falling burning pieces of wood and so on. Once the fire was in place, we hooked it up with all the secondary elements like smoke and embers. We already had pretty robust setups developed for each of them individually, so we could build on a solid foundation using those as a base.
Once this little scene was approved by production to go into shots, we split the individual components out as assets again. These had all elements attached, like smoke emission, ember emission and lots of different masks for heat distortion, and were ready to be dropped into shots.
Using this technique we had a very solid foundation of very similar-looking and similarly behaving fires. Of course, for hero shots we would need to re-sim those, but with the setups in place and our template system it was mostly straightforward. There are of course shots that need to tell a certain story, like a fire coming right at you towards camera. Solving a problem like reading the perspective of self-illuminating matter coming straight at camera is a different beast that you can't prep for! It just takes a lot of creativity and trial and error to get right…
DP: What’s the single biggest creative takeaway from The Lost Bus you’d carry into your next show?
Oliver Schulz: Don’t try to put out all the fires at once….
DP: Which shot makes you proudest or gives you flashbacks?
Oliver Schulz: Oh, there are so many good ones really. Honestly, when I was watching all the shots in a row I was so happy about the overall level of quality the team achieved in every aspect. It's hard to pick single shots, but the ember-cam full-CG shots looked amazing on the big screen and were pretty spectacular… Getting them to the state we delivered them in was quite a journey though…
DP: Finally, if you had to redo The Lost Bus from scratch, what would you rebuild first? Vegetation tools, compositing templates, or your caffeine reserves?
Oliver Schulz: Myself 🙂
