Weeks 1/2
Proposal
Obsidian was presented as a large-scale, narrative-based projection mapping experience that would envelop a room. We pictured it as an interactive movie with the crowd as participants.

Premise
→ Large scale projection mapping experience
→ Narrative-based
→ Movie with crowd as participants
→ Crowd sensing through Kinect or Leap Motion
→ Experience changes with motion of the crowd and the audio input

Stretch Goals
→ Real-time audio
→ Autonomous VJ’ing through AI

Tech
→ Projectors
→ Good sound system
→ Kinect or Leap Motion
→ Blender
→ Unity + shader code
→ Max/MSP (to be determined)

Resources/references
→ instagram.com/back_down_to_mars
→ radiumsoftware.tumblr.com/post/175304870744
→ radiumsoftware.tumblr.com/post/171008805859
→ docs.depthkit.tv/docs
→ www.youtube.com/watch?v=S8j0gwzY4ns
Contemporary Reference
Triennial / TeamLab
Triennial is an interactive piece developed by the collective TeamLab. It features a particle system navigating a flow field that is distorted in real time by users walking over it. Through this, the installation creates a beautiful whirlwind-like projection that illuminates an entire room lined with mirrors.

This piece acts as a solid reference since it implements infrared motion tracking technology with top-down projection mapping. It situates the user as an object that bends the flow of wind particles. We find it to be highly relevant for what we would like to achieve in terms of infrared-based interaction and projection-driven design.
Contemporary Reference
The Movement of Air / Adrien M & Claire B
The Movement of Air is a performance piece for three dancers blending choreography, visuals and music to reveal the invisible movement of air. It consists of various scenes with unique interactions while retaining a cohesive aesthetic. This is the goal for our project as well.

It also exemplifies the surreal, dreamlike experience we would like to evoke. The piece has no explicit narrative; it is instead a story told in visual language. We have a similar intent with Obsidian.
Contemporary Reference
The Liminals / Jeremy Shaw
Jeremy Shaw’s The Liminals explores its subject matter through its medium; it deliberately blurs exactly when it was made in order to put more focus on why it was made. The sudden shift in production medium from black-and-white film to datamoshed HD footage communicates Shaw’s intentions purely through visual language.
Contemporary Reference
Lu Gym Wall / Lu Interactive Playground
Lu’s interactive wall is a consumer-friendly, ready-to-buy immersive projection with lighting and sound, including a series of games.

With this product, we were interested in how distance data can be used and calibrated with projections to create large-scale interactive environments.
Historical Reference
Three Camera Participation TV / Nam June Paik
Three Camera Participation TV (1969/2001) is an interactive installation that relies on viewer participation, using three cameras to project visitors' forms into the space and onto a TV as a series of hazy silhouettes. This is in line with Obsidian, as the projection will not exist in the same capacity when no one is in the piece. A person who enters the piece's view is seen as a "willing participant", and their depth information is what initializes the sequence.

There is also a play between the analog TV screens, cameras and warped projections, which we would like to explore through visual feedback on scaled projections and analog TVs. We would also like to explore the distinctions and links between technology, sensorial data and the body. Narrative-wise, the piece provides very little context, relying almost solely on the viewer's unique perceptions and experiences. This is our intent with Obsidian as well.


Production
In the Space
→ We made a detailed asset and equipment list
→ We were introduced to the black box space
→ Valerie produced a technical drawing of the setup
→ Codrin performed depth tests with the Kinect
→ Codrin began prototyping with Kinect
→ Ali began prototyping analog visuals
→ Ali is currently working on audio sequences
→ We reserved all of the equipment
Production
Interaction
Codrin
Software architecture
Kinect SDK:
→ BodySourceManager.cs
→ BodySourceView.cs (Tracks the body/bones)

Core Scripts:
→ WorldManager.cs (Manages scene transitions/conditions; sketched after this list)
→ PlayerController.cs (Manages the Kinect controls/interaction)
→ LoadData.cs (Loads the data/shifts it with new participants)
→ PointCloudDataManager.cs (Handles the depth texture)

Post-processing scripts:
→ DataMoshController.cs (The grain effect)

Projections Scripts:
→ VideoOnTerminal.cs (Main wall projection/Reference from Charles Doucet)
→ FloorProjection.cs (Floor projection)

Camera Scripts:
→ CameraForward.cs
→ OrbitControls.cs
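
The scripts above split responsibilities across managers; as a rough illustration of how WorldManager.cs might sequence the installation, here is a minimal C# sketch (the phase names and method are assumptions for illustration, not the project's actual API):

using UnityEngine;

// Minimal sketch of WorldManager-style phase sequencing. The Phase names
// and AdvancePhase() hook are assumptions, not the real implementation.
public class WorldManagerSketch : MonoBehaviour
{
    public enum Phase { WaitingForHands, PointCloudInteraction, Archive }

    public Phase CurrentPhase { get; private set; } = Phase.WaitingForHands;

    // Called by interaction scripts (e.g. a player controller) once the
    // current phase's condition is met, moving the experience forward.
    public void AdvancePhase()
    {
        if (CurrentPhase != Phase.Archive)
            CurrentPhase = (Phase)((int)CurrentPhase + 1);
    }
}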

Development log
Interaction:
As is, the installation features interactive components in only the first two phases: when the participants have to raise their hands, and when they interact with their projected point clouds. For the point cloud interaction, I fetched the hand distances and also calculated the angle of the arms. The hand distance was linked to the camera orbit controls and allowed the participant to zoom in/out as they extended/contracted their arms, while the angle of the arms was mapped to the amount of entropy in the shader, which made the point cloud projection more or less distorted.
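
As a hedged sketch of these two mappings, assuming the Kinect joint positions are already available as Vector3s (the class name, constants and ranges below are illustrative, not the actual PlayerController code):

using UnityEngine;

// Illustrative mapping of Kinect joints to the two interactions described
// above: hand distance -> orbit zoom, arm angle -> shader entropy.
public class InteractionMappingSketch : MonoBehaviour
{
    [SerializeField] float minZoom = 1f, maxZoom = 5f;
    [SerializeField] float maxHandSpan = 1.6f; // metres, roughly an arm span

    // Extending the arms increases hand distance, zooming the camera out.
    public float HandSpanToZoom(Vector3 leftHand, Vector3 rightHand)
    {
        float t = Mathf.Clamp01(Vector3.Distance(leftHand, rightHand) / maxHandSpan);
        return Mathf.Lerp(minZoom, maxZoom, t);
    }

    // Angle between the arm (shoulder -> hand) and straight down, mapped
    // to a 0..1 entropy value fed to the point cloud shader.
    public float ArmAngleToEntropy(Vector3 shoulder, Vector3 hand)
    {
        float angle = Vector3.Angle(Vector3.down, hand - shoulder); // 0..180 degrees
        return Mathf.InverseLerp(0f, 180f, angle);
    }
}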

Shader work:
For the shader that went into the point cloud, I used a shader template by Atsushi Izumihara, which I modified heavily to create my desired effect. Izumihara created an efficient way to display depth data in 3D world space as opposed to camera space. This simplified a lot of things, as I could just code a fragment shader in ShaderLab. For the fragment shader, I added color gradients, an entropy modifier and a sin() modifier. The entropy modifier distorts the vertices to get rid of the sharp-edge look, while the sin() modifier cuts the depth data at a z-axis threshold and distorts the data in a wavy pattern once the participant steps beyond that threshold.
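
The actual shader is Izumihara's template plus heavy modifications, so rather than guess at it, here is a CPU-side C# sketch of the math the two modifiers perform per vertex (the hashing constants and parameters are assumptions):

using UnityEngine;

// CPU-side illustration of the two modifiers described above; in the
// installation this logic lives in the point cloud shader itself.
public static class PointDistortSketch
{
    // Entropy modifier: repeatable pseudo-random jitter that softens the
    // sharp edges of the depth data. Kept to a single hash for simplicity.
    public static Vector3 ApplyEntropy(Vector3 p, float entropy)
    {
        float n = Mathf.Sin(Vector3.Dot(p, new Vector3(12.9898f, 78.233f, 37.719f))) * 43758.5453f;
        float r = n - Mathf.Floor(n); // fractional part, in [0, 1)
        return p + (r - 0.5f) * entropy * Vector3.one;
    }

    // sin() modifier: past a z-axis threshold, warp the data in a wavy pattern.
    public static Vector3 ApplySinCut(Vector3 p, float zThreshold, float amplitude, float frequency)
    {
        if (p.z <= zThreshold) return p;
        p.y += Mathf.Sin(p.z * frequency) * amplitude;
        return p;
    }
}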

Managing data:
Another main component of the installation is the capturing, sorting and saving of data. The participant's data is captured as soon as they put their hands together at the beginning. The data is then shifted through an array of similar point clouds, each with a unique hash (a name tag generated dynamically) output by a function I wrote. The output is "o_" (short for "obsidian_") followed by a set of 10 random characters (lower case / upper case / numbers). Every time a participant is saved, the stored data is randomized in an effort to create a unique experience each time Obsidian is initialized. The purpose of the randomization is twofold, as it also prompts the user to look for their scan. After the experience, the data is saved locally on disk and uploaded directly onto the web version of our installation for people to see.
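
The tag format itself is fully specified above, so a sketch of the generator is straightforward (the class name is hypothetical, and the real function may differ in details):

using System.Text;

// Generates a dynamic name tag: "o_" followed by 10 random characters
// drawn from lower case, upper case and digits, e.g. "o_aB3kZ9qW1x".
public static class ObsidianTag
{
    const string Chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    static readonly System.Random Rng = new System.Random();

    public static string Generate()
    {
        var sb = new StringBuilder("o_");
        for (int i = 0; i < 10; i++)
            sb.Append(Chars[Rng.Next(Chars.Length)]);
        return sb.ToString();
    }
}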

Production
Sound
Ali
Ali continued the sound design, building sounds with a modular synth and sourcing noise from authentic analog tech, with some of the digital sounds transferred to audio tape to add texture to the tracks. The process is a continual back and forth between analog and digital, as analog recordings were then digitally modified to emphasize certain parts over others (e.g. the high-frequency parts of noise samples were amplified to make sure the reactive audio in Unity picks them up).
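
On the Unity side, reactive audio of this kind is typically read from the playing source's spectrum; a minimal sketch of picking up those amplified high frequencies (the bin range and smoothing factor are assumptions, not the project's actual values):

using UnityEngine;

// Samples the spectrum of the attached AudioSource and tracks energy in
// the upper bins, which the tape-amplified high frequencies feed.
[RequireComponent(typeof(AudioSource))]
public class HighFreqReactivitySketch : MonoBehaviour
{
    readonly float[] spectrum = new float[512];
    public float HighBandLevel { get; private set; }

    void Update()
    {
        GetComponent<AudioSource>().GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Sum the top quarter of the bins as a crude high-frequency band.
        float sum = 0f;
        for (int i = spectrum.Length * 3 / 4; i < spectrum.Length; i++)
            sum += spectrum[i];

        // Smooth between frames so the driven visuals don't flicker.
        HighBandLevel = Mathf.Lerp(HighBandLevel, sum, 0.3f);
    }
}
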
Production
Pre-rendered visuals
Val
After gauging the general projection size in the black box, Valerie continued building off her blockouts to create a semi-final wall and floor animation. She also created mockups for the visual design of the final human archive, to be built in Unity, and downloaded and edited a royalty-free model and animation from Mixamo to add to the animation. Valerie also set up a webpage to host a reproduction of the results of the experiment online, [click here to view].
Production
Pre-rendered visuals
Ali
Ali handled the analog post-processing of Valerie's animations. He recorded the animations onto VHS and ran them through a VHS player multiple times, creating different glitched variations for each pre-comp.
Production
Model / pre-rendered visuals
Val
To create the animation prompting the user to lift their hands, Valerie downloaded and edited a model and animation from Mixamo. Using a jumping-jack animation as a base, she tweaked the geometry and weight painting of the model. Textures were removed and replaced with a solid toon-shaded metallic orange. Post-processing was completed in After Effects using a displacement effect.
Production
In the Space
The team set up Obsidian in the final space under the guidance of a technician, Jody. Valerie concentrated on project management, having organized materials and reservations throughout the semester; during setup, she focused on placing the projections correctly. Codrin set up the computer and the Kinect with the two projectors, while Ali worked out the audio mixer and continued updating the soundscapes.