Combining our insights from the research and prototyping stages, we present AdaptiveWatch: a future-focused TV-viewing experience that we envision as a toggleable feature integrated across all TV streaming platforms.
AdaptiveWatch utilizes sensors users have already purchased, such as smartwatches, cameras, and smart speakers, to infer a viewer's emotion, attention, and personal viewing habits. Deep learning models then use that context to synthetically adapt the TV content in real time, dynamically adjusting it to meet the viewer's emotional, cognitive, and personal viewing needs. These adaptations take the form of real-time stylistic changes to the plotline, weather, music, and color grading, tailoring the content to the viewer's emotional state and viewing preferences to enhance the TV viewing experience.
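To make the idea concrete, here is a minimal sketch of the kind of sensing-to-adaptation loop described above. Everything in it, including the ContextReading fields, the emotion labels, and the emotion-to-style mapping, is a hypothetical illustration rather than AdaptiveWatch's actual models or API.

```python
# Hypothetical sketch of AdaptiveWatch's adaptation loop: sensor-derived
# context in, a stylistic treatment for the next scene out. The labels and
# mappings below are illustrative assumptions, not the real system.
from dataclasses import dataclass


@dataclass
class ContextReading:
    emotion: str      # e.g. inferred from camera / smartwatch signals
    attention: float  # 0.0 (fully distracted) to 1.0 (fully engaged)


# Map an inferred emotional state to a stylistic treatment of the content.
ADAPTATIONS = {
    "sad":     {"plot": "comedy", "weather": "sunny",  "music": "upbeat"},
    "bored":   {"plot": "horror", "weather": "stormy", "music": "tense"},
    "neutral": {"plot": "base",   "weather": "base",   "music": "base"},
}


def pick_adaptation(reading: ContextReading) -> dict:
    """Choose the content treatment for the next scene from viewer context."""
    if reading.attention < 0.3:
        # A distracted viewer keeps the unmodified base narrative.
        return ADAPTATIONS["neutral"]
    return ADAPTATIONS.get(reading.emotion, ADAPTATIONS["neutral"])


print(pick_adaptation(ContextReading(emotion="sad", attention=0.8)))
# -> {'plot': 'comedy', 'weather': 'sunny', 'music': 'upbeat'}
```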
For our high-fidelity prototype of the AdaptiveWatch experience, we put a spin on the classic European fairy tale Little Red Riding Hood. The narrative is one we all know: a young girl goes to visit her grandmother in the woods, a wolf follows her and eats both Little Red and her grandmother, and a hunter rescues them. This is the base narrative we used as the foundation of our prototype's plotline. Alongside it, we created three alternate narratives with themes of sadness, horror, and comedy to simulate on-the-fly plot adaptation based on users' emotional inputs.
Aside from the plot-line adaptation, we also incorporated stylistic changes, including weather, background music, and color grading, to put these variables to the test and determine which changes users prefer to experience during a viewing session.
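One way to picture the prototype's structure is as a small set of independent variables: four narrative cuts plus separately swappable style tracks. The option names below are illustrative assumptions; the actual prototype was a hand-edited video, not code.

```python
# Hypothetical organization of the prototype's content variants. Each
# variable can be swapped independently so it can be tested on its own.
NARRATIVES = ["base", "sadness", "horror", "comedy"]  # four plotline variants
STYLE_TRACKS = {
    "weather":       ["clear", "rain", "fog"],
    "music":         ["neutral", "somber", "tense", "playful"],
    "color_grading": ["natural", "desaturated", "cold", "warm"],
}


def build_variant(plot: str, weather: str, music: str, grading: str) -> dict:
    """Assemble one playable configuration from the independent variables."""
    if plot not in NARRATIVES:
        raise ValueError(f"unknown narrative: {plot}")
    for track, choice in (("weather", weather), ("music", music),
                          ("color_grading", grading)):
        if choice not in STYLE_TRACKS[track]:
            raise ValueError(f"unknown {track} option: {choice}")
    return {"plot": plot, "weather": weather,
            "music": music, "color_grading": grading}


# e.g. the horror cut with a matching stylistic treatment:
print(build_variant("horror", "fog", "tense", "cold"))
```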
While the goal of context-aware content is for the narrative to adapt automatically based on the user's emotional responses, we learned from early prototyping that users have mixed feelings about automated viewing experiences. It's an experience they want to choose to have, not one imposed on them by default.
Integration into existing streaming platforms and the ability to toggle the feature on and off are therefore imperative. AdaptiveWatch is a feature, a supplement, a way to enhance your viewing, but it's not the rule.
Our team used a variety of brainstorming activities, research methods, and synthesis and design methodologies to narrow our scope and get to the root of the problem we aimed to solve, including pretotyping, contextual inquiries, diary studies, affinity diagramming, experience mapping, and needfinding exercises. Our research set out to answer three core questions:
How do users consume digital content now?
How do users create digital content now?
What do people value about the TV viewing experience?
Through needfinding research, we analyzed people's ideas in a future-thinking imagination game to identify their potential hopes, fears, and expectations for synthetic content and TV-related technology. We found that people have distinct reasons for wanting to consume content.
Read more about our research here.
Although this project concluded with a prototype geared towards the solo viewing experience, there is much more in the world of content viewing left to explore. What about the group viewing experience? Or viewing content on-the-go in a mobile setting? There are as many ways to view content as there are viewers, which means a system of adaptive content has vast potential beyond the solo viewing context.
I really enjoyed examining the content viewing experience and envisioning a new kind of future for television in the digital age with the rest of my team. This project helped me develop my leadership and project management skills, while also giving me a taste of what it's like to work within an ambiguous, future-thinking space. I'm very proud to have had this opportunity to work with my team and with our client.
Divya Mohan
Phipson Lee
Maggie Chen
Aaron Lee
Gabriela Suazo