a workspace built around traditional workflows
where not every action requires a mouse and a keyboard
where tech seamlessly augments the craft in a digital yet intuitive way
Overview
This work follows previous explorations of a possible workspace where technology augments traditional workflows in a seamless and natural way instead of disrupting them: a workspace where not every task requires a mouse and keyboard, and where technology adapts to the craft rather than the other way around.
The video demo can be broken down into two main parts: 
1. The real-time painting of the digital garment - an artist applies watercolour paint to a physical canvas, which is immediately projected onto an animated digital dress shown on a screen.
This is achieved using a camera that automatically frames the physical canvas with the aid of a computer vision algorithm (written as a native C++ OpenCV plugin for Unity), whose output is then applied to the dress in the Unity 3D scene based on its UV map.
2. The perspective projection onto the LED screen - as the action takes place, the perspective is corrected according to the position of the camera, giving depth and volume to what is in reality a flat surface.
A Meta Quest 2 is used to track the physical camera and “mirror” its position and orientation inside the Unity scene, where a C# script takes care of the math required to “project” the virtual camera output onto the LED screen. The video below highlights the projection by providing two separate points of view, one with correct and one with distorted perspective.
This approach is inspired by the virtual production technique of employing LED walls as backdrops for actors and physical sets, as seen behind the scenes of The Mandalorian (ILM's StageCraft), though here at a much smaller scale.
Scope of Work and Skills
This project required creating a native C++ plugin for Unity, since OpenCV, the library needed to write the computer vision algorithm that automatically frames the square canvas, isn't available in C#.
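As a rough illustration of that plugin boundary, the C# side might look like the sketch below; the plugin name, the exported function FindCanvasQuad, and its signature are hypothetical placeholders, not the project's actual API.

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch of the managed side of a native OpenCV plugin boundary.
// "CanvasVisionPlugin" and FindCanvasQuad are hypothetical names.
public class CanvasTracker : MonoBehaviour
{
    // Native function compiled from C++/OpenCV; expected to locate the
    // square canvas in a camera frame and write its four corner
    // positions (four x,y pixel pairs) into `corners`.
    [DllImport("CanvasVisionPlugin")]
    private static extern int FindCanvasQuad(
        IntPtr pixels, int width, int height, [Out] float[] corners);

    private readonly float[] corners = new float[8];

    public bool TryGetCanvasCorners(Texture2D frame, Vector2[] result)
    {
        // Pin the pixel buffer so the native code can read it directly.
        Color32[] pixels = frame.GetPixels32();
        GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        try
        {
            int found = FindCanvasQuad(handle.AddrOfPinnedObject(),
                                       frame.width, frame.height, corners);
            if (found == 0) return false;
            for (int i = 0; i < 4; i++)
                result[i] = new Vector2(corners[2 * i], corners[2 * i + 1]);
            return true;
        }
        finally { handle.Free(); }
    }
}
```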
The dress was modelled and animated in Blender using its cloth simulation solver then later imported in Unity as an Alembic file containing cached geometry data per each frame of the animation.
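To give an idea of how that cached animation is driven at runtime, here is a minimal playback sketch assuming Unity's official Alembic package (com.unity.formats.alembic); property names may differ between package versions.

```csharp
using UnityEngine;
using UnityEngine.Formats.Alembic.Importer;

// Loops the cached cloth animation imported from Blender as Alembic.
// Assumes Unity's official Alembic package; this is one possible way
// to drive the cache, not necessarily how the project does it.
[RequireComponent(typeof(AlembicStreamPlayer))]
public class DressPlayback : MonoBehaviour
{
    private AlembicStreamPlayer player;

    void Start() => player = GetComponent<AlembicStreamPlayer>();

    void Update()
    {
        // Advance through the cached frames and wrap around at the end.
        player.CurrentTime =
            Mathf.Repeat(player.CurrentTime + Time.deltaTime, player.Duration);
    }
}
```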
Real-time camera frustum projection was achieved by writing a C# script that retrieves the positions of the four corners of the LED screen, relative to the 3D scene, as points in the camera-space coordinate system. These four positions are used to build a system of equations, solved with a Gaussian elimination algorithm, whose output is then applied to the camera buffer as a transformation matrix. The mathematics behind this process deserves a more thorough explanation in order to be properly understood.
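In the meantime, here is a condensed, illustrative version of the idea: the four corner correspondences yield the standard eight-equation homography system, which is solved by Gaussian elimination and packed into a matrix Unity can hand to a shader. Names and conventions below are illustrative, not the project's actual code.

```csharp
using UnityEngine;

// Solves for the 2D homography mapping the camera's output quad onto
// the LED screen's corners, via Gaussian elimination on the standard
// eight-equation system built from four point correspondences.
public static class HomographySolver
{
    // src: four corners of the camera output (e.g. viewport corners);
    // dst: the LED screen's four corners in the same 2D space.
    public static Matrix4x4 Solve(Vector2[] src, Vector2[] dst)
    {
        // Each point pair contributes two rows of the 8x9 augmented system.
        var a = new float[8, 9];
        for (int i = 0; i < 4; i++)
        {
            float x = src[i].x, y = src[i].y;
            float u = dst[i].x, v = dst[i].y;
            int r = 2 * i;
            a[r, 0] = x; a[r, 1] = y; a[r, 2] = 1;
            a[r, 6] = -u * x; a[r, 7] = -u * y; a[r, 8] = u;
            a[r + 1, 3] = x; a[r + 1, 4] = y; a[r + 1, 5] = 1;
            a[r + 1, 6] = -v * x; a[r + 1, 7] = -v * y; a[r + 1, 8] = v;
        }

        // Gaussian elimination with partial pivoting.
        for (int col = 0; col < 8; col++)
        {
            int pivot = col;
            for (int row = col + 1; row < 8; row++)
                if (Mathf.Abs(a[row, col]) > Mathf.Abs(a[pivot, col]))
                    pivot = row;
            for (int k = 0; k < 9; k++)
                (a[col, k], a[pivot, k]) = (a[pivot, k], a[col, k]);
            for (int row = col + 1; row < 8; row++)
            {
                float f = a[row, col] / a[col, col];
                for (int k = col; k < 9; k++) a[row, k] -= f * a[col, k];
            }
        }

        // Back substitution yields the eight homography coefficients
        // (the ninth is fixed to 1).
        var h = new float[8];
        for (int row = 7; row >= 0; row--)
        {
            float sum = a[row, 8];
            for (int k = row + 1; k < 8; k++) sum -= a[row, k] * h[k];
            h[row] = sum / a[row, row];
        }

        // Pack into a Matrix4x4 so it can be applied to the camera's
        // output buffer as a projective 2D transform (w' in the fourth
        // row provides the perspective divide).
        var m = Matrix4x4.identity;
        m.m00 = h[0]; m.m01 = h[1]; m.m03 = h[2];
        m.m10 = h[3]; m.m11 = h[4]; m.m13 = h[5];
        m.m30 = h[6]; m.m31 = h[7]; m.m33 = 1;
        return m;
    }
}
```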
Finally, the position of the virtual camera is driven by the real-world position of a Meta Quest 2 VR headset, to which an iPhone 7 is attached to record the final video.
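One way to wire this up is sketched below using Unity's generic XR input API; the phone-mount offset is a made-up placeholder that would need to be measured on the real rig.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Mirrors the headset pose onto the virtual camera each frame. This
// uses Unity's generic XR input API as one possible approach; the
// offset value below is a placeholder, not a measured calibration.
public class TrackedCamera : MonoBehaviour
{
    // Approximate position of the phone's lens relative to the headset.
    public Vector3 phoneOffset = new Vector3(0f, 0.05f, 0.08f);

    void Update()
    {
        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 pos) &&
            head.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rot))
        {
            // Apply the headset pose, shifted by the phone's mounting
            // offset, so the virtual camera matches the recording camera.
            transform.SetPositionAndRotation(pos + rot * phoneOffset, rot);
        }
    }
}
```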
Physical Setup and Equipment
Virtual Production Home Setup
Camera Rig consisting of a Meta Quest 2 and an iPhone - Front
Camera Rig consisting of a Meta Quest 2 and an iPhone - Back