Former Meta engineer Daniel Habib has presented a new approach to 3D viewing that runs directly on everyday devices. His project, called True3D, uses head tracking via a front-facing camera to deliver motion parallax on flat screens. The result is a depth effect that requires neither stereoscopic glasses nor a headset.
The system estimates the viewer’s head position in six degrees of freedom, then reprojects the video accordingly. This makes the display behave like a “window” into the scene: shifting one’s head shifts the perspective of the content.
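True3D's implementation is not public, but the underlying trick, head-coupled perspective, is well understood: the virtual camera sits at the viewer's eye and looks through an asymmetric (off-axis) frustum whose near plane corresponds to the physical screen. A minimal TypeScript sketch under those assumptions follows; the coordinate conventions, screen dimensions, and function names are illustrative, not True3D's code.

```ts
// Head-coupled ("window") perspective: an off-axis frustum whose apex sits at the
// viewer's eye, with the physical screen acting as the "window" into the scene.
// The screen is assumed centred at the origin in the z = 0 plane; the head position
// is measured in metres relative to the screen centre, with z > 0 toward the viewer.

type Vec3 = { x: number; y: number; z: number };

/** Column-major 4x4 off-axis projection matrix (OpenGL convention). */
function windowProjection(
  head: Vec3,            // tracked head position; head.z must be positive
  screenWidth: number,   // physical display width, metres
  screenHeight: number,  // physical display height, metres
  near = 0.05,
  far = 100
): Float32Array {
  // Distances from the eye to the screen edges, scaled onto the near plane.
  const scale = near / head.z;
  const left   = (-screenWidth / 2 - head.x) * scale;
  const right  = ( screenWidth / 2 - head.x) * scale;
  const bottom = (-screenHeight / 2 - head.y) * scale;
  const top    = ( screenHeight / 2 - head.y) * scale;

  // Standard glFrustum-style asymmetric projection.
  const m = new Float32Array(16);
  m[0]  = (2 * near) / (right - left);
  m[5]  = (2 * near) / (top - bottom);
  m[8]  = (right + left) / (right - left);
  m[9]  = (top + bottom) / (top - bottom);
  m[10] = -(far + near) / (far - near);
  m[11] = -1;
  m[14] = -(2 * far * near) / (far - near);
  return m;
}

// The view matrix is then a plain translation by the negated head position, so
// moving your head left reveals more of the scene's right side, like a real window.
```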
No Glasses, No Joysticks
Traditional 3D formats either collapse into flat video or ask users to manually “steer” the camera via drag-and-pan gestures. Habib argues that this is at odds with how most people consume media: viewers generally prefer passive, lean-back experiences. True3D removes that friction. The viewer never interacts with the content directly; instead, the system continuously adjusts perspective based on natural head movement. The result is a convincing impression of depth from parallax, perspective, and occlusion, without stereoscopy.
Window Mode: The Old Idea Made Practical
Habib calls this approach Window Mode. The method was first demonstrated 17 years ago by Johnny Lee, who used a Nintendo Wii Remote to track head position. What makes True3D notable is that modern facial landmark detection, iris tracking, and mobile GPUs now make the concept usable on consumer hardware.
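Habib does not detail the estimator, but a common, lightweight way to recover approximate head distance from a single camera is the pinhole relation between a known real-world eye spacing and its measured size in pixels; iris tracking works the same way, using the near-constant human iris diameter. A rough sketch with assumed constants:

```ts
// Rough monocular distance estimate from eye landmarks (pinhole camera model).
// Assumes an average adult interpupillary distance; per-user calibration or
// iris-diameter tracking tightens the estimate.

const AVG_IPD_M = 0.063; // average interpupillary distance in metres (assumption)

interface Point2D { x: number; y: number } // pixel coordinates of detected pupils

function estimateHeadDistance(
  leftPupil: Point2D,
  rightPupil: Point2D,
  focalLengthPx: number // camera focal length in pixels, from calibration metadata
): number {
  const ipdPx = Math.hypot(rightPupil.x - leftPupil.x, rightPupil.y - leftPupil.y);
  // Similar triangles: realSize / distance = pixelSize / focalLength
  return (AVG_IPD_M * focalLengthPx) / ipdPx;
}
```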
Low latency is critical. If the reprojection lags behind head movement, the illusion breaks and the scene feels unstable. True3D applies temporal smoothing to minimise jitter, rejects outlier data points, and processes face data on-device to avoid privacy issues.
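The article does not specify which filter True3D uses. One simple combination that matches the description, a jump-rejection gate followed by an exponential moving average, might look like the sketch below; the weights and thresholds are assumptions, and production trackers often reach for a One Euro filter instead.

```ts
// Per-axis smoothing of the tracked head position: reject implausible jumps,
// then blend the accepted sample into an exponential moving average.

type Vec3 = { x: number; y: number; z: number };

class HeadFilter {
  private smoothed: Vec3 | null = null;

  constructor(
    private alpha = 0.35,    // EMA weight for the newest sample (assumed value)
    private maxJumpM = 0.15  // reject samples more than 15 cm from the estimate (assumed value)
  ) {}

  update(sample: Vec3): Vec3 {
    if (this.smoothed === null) {
      this.smoothed = { ...sample };
      return this.smoothed;
    }
    const jump = Math.hypot(
      sample.x - this.smoothed.x,
      sample.y - this.smoothed.y,
      sample.z - this.smoothed.z
    );
    // Outlier rejection: a single wildly-off detection should not yank the camera.
    if (jump > this.maxJumpM) return this.smoothed;

    // Exponential moving average toward the accepted sample.
    this.smoothed = {
      x: this.smoothed.x + this.alpha * (sample.x - this.smoothed.x),
      y: this.smoothed.y + this.alpha * (sample.y - this.smoothed.y),
      z: this.smoothed.z + this.alpha * (sample.z - this.smoothed.z),
    };
    return this.smoothed;
  }
}
```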
A Working Demo, Plus Tools
A live demo is available via True3D’s lab page. The player requests access to the front-facing camera, then applies head-coupled perspective in real time. Habib reports that informal testing with nearly 100 people showed that users intuitively understood the effect without explanation.
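The camera plumbing itself is standard web API territory: a player of this kind requests the user-facing camera with getUserMedia and runs the tracker once per animation frame. A stripped-down sketch, with the tracker passed in as a placeholder rather than anything from True3D's API:

```ts
// Request the front-facing camera and run a per-frame tracking loop.
// The pose estimator is supplied by the caller; it stands in for whatever
// landmark/iris tracker is used and is not part of True3D's published API.

type HeadPose = { x: number; y: number; z: number };

async function startHeadTracking(
  estimatePose: (frame: HTMLVideoElement) => HeadPose | null,
  onPose: (pose: HeadPose) => void
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "user", width: 640, height: 480 },
    audio: false,
  });

  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const tick = () => {
    const pose = estimatePose(video); // run the face/iris tracker on the current frame
    if (pose) onPose(pose);
    requestAnimationFrame(tick);      // frames never leave the device; only the pose is used
  };
  requestAnimationFrame(tick);
}
```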
In addition to pre-made examples, users can upload their own MP4 files to generate Window Mode clips. True3D also offers an API and a player component that can be integrated into web applications, games, or renders exported from Unity or Blender.
Under the hood, the pipeline relies on volumetric representations, including voxels and Gaussian splats, to render view-dependent video efficiently. The service runs entirely on True3D’s own APIs.
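True3D has not documented its renderer, but the general structure of splat-style view-dependent rendering is straightforward: each primitive carries a 3D position and appearance, and every frame it is projected through the head-coupled camera, depth-sorted, and composited. The sketch below deliberately collapses real Gaussian splatting (anisotropic covariances, spherical-harmonic colour) to isotropic discs on a 2D canvas purely to show that loop; it is an illustration, not True3D's pipeline.

```ts
// Highly simplified view-dependent rendering: project each splat through a camera
// placed at the viewer's head, paint far splats first, and alpha-composite.

type Vec3 = { x: number; y: number; z: number };

interface Splat {
  position: Vec3;                   // world-space centre
  radius: number;                   // world-space radius, metres (isotropic simplification)
  color: [number, number, number];  // RGB, 0..255
  opacity: number;                  // 0..1
}

function renderSplats(
  ctx: CanvasRenderingContext2D,
  splats: Splat[],
  head: Vec3,      // smoothed head position driving the virtual camera
  focalPx: number  // projection scale in pixels
): void {
  const { width, height } = ctx.canvas;
  ctx.clearRect(0, 0, width, height);

  // Cull splats behind the eye and sort back to front.
  const visible = splats
    .map(s => ({ s, z: head.z - s.position.z })) // depth in front of the eye
    .filter(v => v.z > 0)
    .sort((a, b) => b.z - a.z);

  for (const { s, z } of visible) {
    // Perspective projection relative to the head position; the parallax comes
    // from the head offset changing frame to frame.
    const px = width / 2 + ((s.position.x - head.x) / z) * focalPx;
    const py = height / 2 - ((s.position.y - head.y) / z) * focalPx;
    const pr = (s.radius / z) * focalPx;

    ctx.globalAlpha = s.opacity;
    ctx.fillStyle = `rgb(${s.color[0]}, ${s.color[1]}, ${s.color[2]})`;
    ctx.beginPath();
    ctx.arc(px, py, pr, 0, Math.PI * 2);
    ctx.fill();
  }
  ctx.globalAlpha = 1;
}
```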
Tried in the Lab
We tested the live demo with True3D’s calibration targets. Tracking was surprisingly stable, and the parallax effect produced a convincing sense of depth with minimal setup. For artists in animation and compositing, such responsive head-coupled viewing could become a practical aid when working with layered scenes or depth-rich shots.
The Steamboat Willie sample demonstrates both the promise and the limitations. The shifting perspective makes the clip feel anchored in space, but it also shows that parallax needs source material to reveal: you cannot “look around” objects unless that visual data exists, which becomes evident when foreground occlusions expose missing detail. Even with those limits, the technology feels like a welcome change from 3D approaches that only push for new and more expensive display hardware. True3D instead leverages the cameras and GPUs that most devices already carry.
Context: Not VR, Not AR, Just a Screen
Habib positions True3D as a middle ground between traditional video and VR. It avoids the isolating form factor of headsets, yet delivers a sense of presence through subtle, automatic interaction. Editorial control is not ceded to interactivity: content still plays out as authored. Unlike 360-degree video or interactive VR scenes, Window Mode preserves the passive nature of screen-based viewing while layering in a perceptual trick that makes the content appear anchored in physical space.
Availability and Limitations
The demo is freely accessible in browsers that support camera access. Privacy-sensitive users should note that the system requires constant face tracking, though Habib states that processing is done on-device and discarded per frame.
Performance depends heavily on latency. Slow cameras or underpowered devices may produce a lagging, unstable effect. At press time, no benchmarks or system requirements were officially published.