
The Cube

In 2016 the US men's basketball team and Nike teamed up to create an interactive basketball camp for kids. The task was to create a cinematic experience: as players charged the basket, a custom real-time bullet-time rig captured a video of them floating in the air.

  • Technology
  • Node
  • Express
  • FFMPEG
  • BlackMagic SDK

Overview

The experience lived inside a 30ft x 30ft LED cube in an aircraft hangar right next to LAX in LA. Nike converted the hangar into a camp where kids could come learn from the pros and the US men's basketball team could play some promo games.

Users would walk into the cube and the room would come alive as Kinects tracked their movement through the space. Working with Justin Gitlin on the wall graphics and Jasper Gray from Futuristic Films on the camera rig setup, I was responsible for building the recording software that captured, edited, and produced the final clip of users suspended in time as they charged the basket.

Tech

The experience consisted of a number of pieces of technology working together to deliver something that felt seamless and magical to users. When a user entered the cube, a Kinect sensor would pick them up and the walls of the room would come to life, responding as they moved left and right. When the user crossed a specific distance threshold to the hoop, an event fired that kicked off the recording process.
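The trigger logic can be sketched as a small state machine that fires exactly once per approach and re-arms when the user backs away. This is a hypothetical reconstruction, not the production code; names and distances like `HOOP_THRESHOLD_M` are illustrative.

```javascript
// Hypothetical sketch of the threshold trigger: the Kinect reports the
// user's distance to the hoop every frame; when it drops below the
// threshold we fire the record event exactly once per approach.
const HOOP_THRESHOLD_M = 2.5; // illustrative value, not the real tuning

function createTrigger(onRecord) {
  let armed = true;
  return function update(distanceToHoopMeters) {
    if (armed && distanceToHoopMeters < HOOP_THRESHOLD_M) {
      armed = false; // don't re-fire mid-run
      onRecord();
    } else if (distanceToHoopMeters > HOOP_THRESHOLD_M + 1.0) {
      armed = true; // re-arm once the user backs away from the hoop
    }
  };
}

// Usage: feed it a stream of Kinect depth samples
let fired = 0;
const update = createTrigger(() => { fired += 1; });
[5.0, 4.0, 3.0, 2.4, 2.0].forEach(update); // crossing fires exactly once
```

Debouncing like this matters in an installation setting, where sensor noise near the threshold would otherwise start multiple overlapping recordings.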

There were three Windows machines, each with four DeckLink Duo cards to handle four BlackMagic cameras. They were networked together via a local websocket server that would broadcast an event to start recording. Each camera recorded one second of video and, when complete, sent it to a central machine for processing. That machine stitched the videos together (12 in total) and used an exact timecode, calculated from how fast the user was moving, to freeze them in time. The result was then stitched together with buffer footage for the beginning and end of the video.
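The final stitch, ordering one clip per camera between intro and outro buffers, maps naturally onto ffmpeg's concat demuxer. Below is a minimal sketch of generating that concat list; the file names (`intro.mp4`, `cam01_freeze.mp4`, …) are assumptions for illustration, not the project's actual naming.

```javascript
// Hypothetical sketch of assembling the ffmpeg concat list: the freeze
// effect is an ordered sweep through one clip per camera (12 total),
// bookended by pre-rendered intro/outro buffers.
const CAMERA_COUNT = 12;

function buildConcatList(cameraCount) {
  const entries = ["file 'intro.mp4'"];
  for (let i = 1; i <= cameraCount; i++) {
    // cam01_freeze.mp4 .. cam12_freeze.mp4: each camera's extracted
    // freeze frame, already rendered out as a short clip
    entries.push(`file 'cam${String(i).padStart(2, '0')}_freeze.mp4'`);
  }
  entries.push("file 'outro.mp4'");
  return entries.join('\n');
}

const concatList = buildConcatList(CAMERA_COUNT);
// This list would then be written to disk and fed to ffmpeg, e.g.:
//   ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4
```

Using the concat demuxer with `-c copy` avoids re-encoding the individual clips, which is one way to keep a render pipeline like this inside a tight time budget.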

The whole render process took about 25 seconds, so by the time users exited the room they could be handed an iPad with a preview of their video, which they could then have emailed or texted to them.

Freezing Time

To freeze time, the cameras needed to be configured to share the same focal point. We had a total of 12 cameras set up on a rig that sloped from 3ft high down to 18 inches. That slope, along with the shared central focal point, gave the capture an even more dynamic effect.

When the cameras were activated, each one recorded one second of footage, which was compressed and sent to the main render machine. Based on the speed the user was moving when they crossed the threshold, the same frame was pulled from every capture and used as the freeze frame for the final video.
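Picking that frame can be sketched as a simple kinematics estimate: given the user's speed at the trigger line and the distance from that line to the cameras' focal point, estimate when they reach the focal point and convert that offset into a frame index within the one-second capture. The constants here are illustrative assumptions, not the production calibration.

```javascript
// Hypothetical freeze-frame selection. Assumed constants:
const FPS = 60;                 // capture frame rate (assumption)
const CLIP_FRAMES = FPS * 1;    // each camera records exactly 1 second
const TRIGGER_TO_FOCAL_M = 1.2; // trigger line -> focal point (assumption)

function freezeFrameIndex(speedMetersPerSec) {
  // Time for the user to travel from the trigger line to the focal point
  const secondsToFocal = TRIGGER_TO_FOCAL_M / speedMetersPerSec;
  const frame = Math.round(secondsToFocal * FPS);
  // Clamp into the clip so a slow-moving user still gets a valid frame
  return Math.min(Math.max(frame, 0), CLIP_FRAMES - 1);
}
```

Because every camera starts recording on the same networked event, one index computed this way addresses the same instant in all twelve captures, which is what makes the stitched sweep read as frozen time.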

Jasper Gray & Matt Fajohn setting up the camera rig.

But wait, there’s more… 👀

I’ve done an in-depth writeup of the technical side of how this project came to life on Medium; you can read more here.