Lost In Interpretation

Fall 2019 | Grad Design I
in collaboration with Jingyi Wang


Project Brief 



(excerpt)
As designers, we consider a person’s psychological, physical, and emotional relation to the external. We help author the world around the body through objects, garments, graphics, interfaces, systems, spaces... We move it, bend it, and shape it through our work. Motion capture is a technique for recording movement; a body’s position through time. Most commonly used for animating digital characters in movies and video games, motion capture hardware and software have been designed in favor of the performative body — ogres, soldiers, princesses, and footballers. As motion capture technologies are increasingly appropriated by industries outside of entertainment, how might these biases towards action, horror, fantasy, and sports find their way into unforeseen places?

Students will be required to deliver a ‘performance-capture’, which can be demoed live or previously recorded. This performance-capture should communicate a formal summary of your research, argument, and hypothesis. Students can design additional multimedia — objects, garments, graphics, interfaces, systems, spaces — to communicate their proposal in an exhibition/presentation to an audience.



Project Description


“Lost in Interpretation” is a video installation showcasing a collection of abstracted spatial designs built upon several layers of interpretation, representing the beginning of a collaborative effort between AI and designers.

As a response to the industry’s continual pursuit of hyper-realistic motion capture, we see the loss of resolution in AI-interpreted motion capture as a design opportunity. Rather than using the precision-focused technology of motion capture suits, we attempt to use motion capture created by Radical AI, which uses machine learning to interpret recorded motions.

We treat these outputs as design materials rather than production assets: removed from their original contexts, they are reconstructed into new spatial contexts based on our own interpretations. This invites potential misinterpretation; in the process, the original motions take on new meanings.
 
Final output: an animation of 7 environments
designed based on motion capture sequences

Concept


My collaborator and I decided to extend some of the work we did in the assignments leading up to this final project. For example, we drew inspiration from the assignment in which we cooked recipes while wearing the Perception Neuron motion capture suit, then redesigned a kitchen layout based on our observations and experiences.

We also built on ideas from the motion capture ‘swap meet’ held early in the studio: everyone in the class generated motion capture files and uploaded them to a shared Google Drive folder, naming them only with numbers, so that we each had to figure out what the motions were.
 
We came up with this workflow:

  1. We each separately created three motion capture sequences, each consisting of 4-5 motion capture files.

  2. We swapped our motion capture sequences without providing any information or context about where the motion captures came from, or what they were.

  3. For each motion capture sequence, we designed an environment, treating the motion capture files as design material.





Motion capture sequence created by Jingyi,
which I used to design an environment 



Environment created by Jingyi, based on 
motion sequence that I generated
                


(slideshow) Several of the designed environments that we generated,
based on motion capture sequences we exchanged with each other