(they/them/theirs)
Stanford University
human-computer interaction, design, creativity support tools
Jingyi Li is a PhD candidate in computer science at Stanford University. Their research in human-computer interaction investigates ways computation can better support expressive creative practices and empower artists. Jingyi is passionate about supporting underrepresented students in the academy through teaching, mentorship, and organizing. They hold a BS in Electrical Engineering & Computer Science from UC Berkeley, and their PhD has been funded by the NSF GRFP, a Stanford DARE (Diversifying Academia, Recruiting Excellence) fellowship, and the Brown Institute for Media Innovation. http://jingyi.me
Empowering visual artists by extending computation with manual craft
This talk examines the limitations artists face when using software tools to create digital art, and proposes new environments and tools to address these challenges. Through qualitative empirical research, I found that black-box forms of software automation prevent professional artists from using and modifying them as intended, due to mismatched expectations about input data and a lack of fine-grained aesthetic control over the final outcome. Some artists thus learned to code, not only to create their own artifacts but also to gain power and technical legitimacy in their communities. However, artists are used to manipulating materials visually and concretely, so programming, which relies on manipulating symbolic abstractions, can pose a challenge. To bring the process of programming closer to an artist's traditional, and skillful, workflow, I developed an interactive programming environment called Demystified Dynamic Brushes (DDB). DDB visualizes code and numerical information directly on the artist's ongoing artwork, allows them to jump to a program state by touching the corresponding part of the artwork, and privileges manual interaction by looping pen stylus inputs so that artists can debug their programs at the pace, and with the actions, of drawing.
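To make the looping idea concrete, here is a minimal sketch, not the actual DDB implementation and with all names hypothetical, of replaying recorded stylus inputs through a simple brush rule so that its state can be inspected sample by sample, at the pace of drawing:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StylusSample:
    """One recorded pen input: position plus pressure."""
    x: float
    y: float
    pressure: float

@dataclass
class BrushState:
    """Program state a debugger could surface next to the artwork."""
    step: int
    stroke_width: float

def replay(samples: List[StylusSample],
           brush_rule: Callable[[StylusSample], float]) -> List[BrushState]:
    """Loop over recorded stylus inputs, evaluating the brush program
    at each sample so its state can be inspected stroke by stroke."""
    states = []
    for step, sample in enumerate(samples):
        states.append(BrushState(step=step, stroke_width=brush_rule(sample)))
    return states

if __name__ == "__main__":
    # A toy brush rule: stroke width scales with pen pressure.
    recorded = [StylusSample(10, 20, 0.3), StylusSample(12, 22, 0.8)]
    for state in replay(recorded, lambda s: 1.0 + 9.0 * s.pressure):
        print(state)
```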
Computation can also be a powerful tool for creating constraints and variations in artwork. For example, 2D mix-and-match character creation systems like Bitmoji are popular for creating virtual avatars. While these systems allow end users to pick from a wide array of clothing, hairstyles, and accessories, they often constrain pose and body shape because accessories live on separate layers, independent of the body. To address this problem, I developed an algorithm that automates the rigging of character clothing to bodies, allowing for more flexible and customizable 2D character illustrations and thus more representative avatars. Though the rigs are automatically generated, users may manually inspect and edit them. Through evaluations of these two systems, my research exposes otherwise hidden computational abstractions as transparent materials that artists can visually interpret in a process that accommodates their non-linear workflows. Going forward, I am interested in how media authoring tools can explicitly support the psychological empowerment of artists, both personally and in their communities.
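As a rough illustration of the attachment idea only, not the published algorithm, and with all names and data assumed for the example, a clothing layer can be loosely bound to nearby body points so that it follows the body when the pose or shape changes:

```python
import numpy as np

def rig_layer_to_body(layer_pts: np.ndarray,
                      body_pts: np.ndarray,
                      body_pts_posed: np.ndarray,
                      k: int = 3) -> np.ndarray:
    """Attach each clothing-layer point to its k nearest body points and
    move it by their average displacement when the body is re-posed."""
    # Pairwise distances between layer points and rest-pose body points.
    d = np.linalg.norm(layer_pts[:, None, :] - body_pts[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]         # indices of k closest body points
    disp = body_pts_posed - body_pts               # how each body point moved
    return layer_pts + disp[nearest].mean(axis=1)  # carry the layer along

if __name__ == "__main__":
    body = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    posed = body + np.array([0.5, 0.0])            # body shifted to the right
    sleeve = np.array([[0.2, 0.1], [0.8, 0.1]])
    print(rig_layer_to_body(sleeve, body, posed, k=2))
```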