Nazmus Saquib, Rubaiat Habib, Li-Yi Wei, Wilmot Li (CHI 2019 paper)
Augmented and mixed-reality technologies enable us to enhance and extend our perception of reality by incorporating virtual graphics into real-world scenes. One simple but powerful way to augment a scene is to blend dynamic graphics with live-action footage of real people performing. This technique has been used as a special effect in music videos, scientific documentaries, and instructional materials, where the graphics are typically added in the post-processing stage.
As live-streaming becomes an increasingly powerful cultural phenomenon, this work explores how to enhance such real-time presentations with interactive graphics, creating a powerful new storytelling environment. Traditionally, crafting such an interactive and expressive performance has required technical programming or highly specialized tools tailored for experts.
Our approach is different, and could open up this kind of presentation to a much wider range of people. Our system leverages the rich gestural (from direct manipulation to abstract communication) and postural language of humans to interact with graphical elements. By simplifying the mapping between gestures, postures, and their corresponding output effects, our UI enables users to create customized, rich interactions with graphical elements.
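The core idea of mapping body input to graphical output can be sketched roughly as a dispatch table from recognized gesture labels to user-authored effects. This is a minimal illustrative sketch, not the paper's implementation; the gesture labels, effect names, and body-tracking data format here are all hypothetical.

```python
# Minimal sketch (not the authors' implementation) of a gesture-to-effect
# mapping table, the kind of binding the paper's UI lets users author.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Effect:
    """A graphical effect bound to a body gesture or posture."""
    name: str
    # Takes tracked body data (hypothetical format) and returns a render command.
    action: Callable[[dict], str]

# Hypothetical gesture labels and effects, for illustration only.
mappings: Dict[str, Effect] = {
    "open_palm": Effect("emit_particles",
                        lambda body: f"particles at {body['right_hand']}"),
    "arms_crossed": Effect("hide_overlay",
                           lambda body: "overlay hidden"),
}

def on_gesture(label: str, body: dict) -> str:
    """Dispatch a recognized gesture to its user-authored effect."""
    effect = mappings.get(label)
    return effect.action(body) if effect else "no-op"

print(on_gesture("open_palm", {"right_hand": (320, 240)}))
```

In a live performance, the `on_gesture` call would be driven by a body-tracking pipeline running on the camera feed; the mapping table is what the end user edits through the UI instead of writing code.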
News Coverage:
Impact: To date, an eclectic group of users has used the prototype to ideate and create demos. One patent has been filed, and a live-streaming Adobe product based on this work is in development.
Supplementary Examples: A wide range of users have used the system to create an impressive array of examples, including a cooking instruction video, an astronomy research presentation, an interior design walkthrough, and a meditation tutorial.