
Northeastern at ACM CHI 2025
The most prestigious human-computer interaction conference in the world takes place in Yokohama, Japan, from April 26 to May 1, when a record number of Khoury College faculty and student researchers, along with their collaborators in the College of Arts, Media and Design (CAMD), the Bouvé College of Health Sciences, the College of Science, and the College of Professional Studies, will present more than 30 papers, late-breaking works, panels, special interest groups, and other works and events.
For a full view of Khoury-affiliated works and when they’re being showcased, see the schedule below (all times local).
2025 Honorable Mentions
In 2025, three Khoury-affiliated works received Honorable Mention awards at ACM CHI:
Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge, in which researchers — including Khoury Assistant Professor Michael Ann DeVito — describe the importance and benefits of epistemic autonomy, the surprisingly novel principle that researchers should respect the rights of people who’ve experienced marginalization to govern knowledge about themselves. They demonstrate the principle firsthand with two of the authors, both trans women, sharing nuanced insights grounded in their own epistemic autonomy. They also discuss the harms that occur when researchers try to solve complex problems without listening to the people those problems affect.
The Many Tendrils of the Octopus Map, a study of the history of the octopus map and the ways the visual metaphor of an octopus can encourage conspiratorial interpretation, whose authors include Khoury Associate Research Professor Michael Correll.
Why Can’t Black Women Just Be?: Black Femme Content Creators Navigating Algorithmic Monoliths, in which three Khoury researchers (DeVito, Assistant Professor Alexandra To, and PhD student Gianna Williams), as well as recent CAMD graduate Natalie Chen, interviewed 11 Black femme content creators to find out how they experience social media content moderation, what they do to resist it, and what folk theories they have about TikTok’s algorithm.
Additional highlights from Khoury researchers featured at ACM CHI 2025
Schedule of Khoury-affiliated works
For additional information on programming and scheduling, check the CHI 2025 website.
Khoury College researchers are indicated in bold; interdisciplinary works with Khoury are indicated by “+”; other Northeastern affiliations (such as CAMD, CoE, CoS, or Bouvé) are indicated in parentheses. Khoury author bios are linked where available.
Saturday, April 26
New Frontiers of Human-centered Explainable AI (HCXAI): Participatory Civic AI, Benchmarking LLMs and Hallucinations for XAI, and Responsible AI Audits
Explainable AI (XAI) is more than just “opening” the black box — who opens it matters just as much as, if not more than, how it is opened. Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. In our fifth CHI workshop on Human-Centered XAI (HCXAI), we shift our focus to new, emerging frontiers of explainability: (1) participatory approaches toward explainability in civic AI applications; (2) addressing hallucinations in LLMs using explainability benchmarks; (3) connecting HCXAI research with Responsible AI practices, algorithmic auditing, and public policy; and (4) improving representation of XAI issues from the Global South. We have built a strong community of HCXAI researchers through our workshop series, whose work has made important conceptual, methodological, and technical impacts on the field. In this installment, we will push the frontiers of work in HCXAI with an emphasis on operationalizing perspectives sociotechnically.
Type: Workshops (closed attendance)
Authors: Upol Ehsan, Elizabeth A. Watkins, Philipp Wintersberger, Carina Manger, Nina Hubig, Saiph Savage, Justin B. Weisz, Andreas Riener
Sunday, April 27
Technology Mediated Caregiving for Older Adults Aging in Place
The caregiving environment for an older adult aging in place includes a network of caregivers working with the older adult to support their needs and maintain independence. As older adults experience cognitive and functional changes, their caregiving network expands to include spouses or siblings (who are often older adults themselves), children, friends, neighbors and community members — each bringing unique values, expectations, and goals. In this network of care, technology-enabled support offers the potential to mediate care responsibilities, such as coordinating activities and assisting with everyday tasks. However, designing these systems requires addressing value tensions among caregivers, cultural norms around aging, participatory research practices, and the balance between autonomy and safety for older adults in later life. This workshop brings together researchers and practitioners to discuss (1) opportunities and challenges for designing technological systems for caregiving for older adults; (2) longitudinal interactions with these systems as older adults progress through stages of functional and cognitive changes; (3) potential for such systems to support caregivers while centering older adults’ privacy and autonomy needs; and (4) the influence of cultural norms on caregiving and technology use.
Type: Workshops (closed attendance)
Authors: Elizabeth D. Mynatt, Masatomo Kobayashi, Alisha Pradhan, Niharika Mathur, John Vines, Katie Seaborn, Erin Buehler, Jenny Waycott, John Rudnik, Tamara Zubatiy, Agata Rozga
Monday, April 28
Personalized interaction is highly valued by parents in their story-reading activities with children. While AI-empowered story-reading tools are increasingly used, their ability to support personalized interaction with children remains limited. Recent advances in large language models (LLMs) show promise in facilitating personalized interactions, but little is known about how to effectively and appropriately use LLMs to enhance children’s personalized story-reading experiences. This work explores this question through a design-based study. Drawing on a formative study, we designed and developed StoryMate, an LLM-empowered personalized interactive story-reading tool for children, and then evaluated it in an empirical study with children, parents, and education experts. Our participants valued the personalized features in StoryMate, and also highlighted the need to support personalized content, guiding mechanisms, reading context variations, and interactive interfaces. Based on these findings, we propose a series of design recommendations for better using LLMs to empower children’s personalized story reading and interaction.
Time: 11:10–11:22 a.m.
Type: Papers
Authors: Jiaju Chen, Minglong Tang, Yuxuan Lu, Bingsheng Yao, Elissa Fan, Xiaojuan Ma, Ying Xu, Dakuo Wang (+CAMD), Yuling Sun, Liang He
The Many Tendrils of the Octopus Map
Honorable mention
Conspiratorial thinking can connect many distinct or distant ills to a central cause. This belief has visual form in the octopus map: a map where a central force (for instance a nation, an ideology, or an ethnicity) is depicted as a literal or figurative octopus, with extending tendrils. In this paper, we explore how octopus maps function as visual arguments through an analysis of historical examples as well as through a crowd-sourced study of how the underlying data and the use of visual metaphors contribute to specific negative or conspiratorial interpretations. We find that many features of the data or visual style can lead to “octopus-like” thinking in visualizations, even without the use of an explicit octopus motif. We conclude with a call for a deeper analysis of visual rhetoric, and an acknowledgment of the potential for the design of data visualizations to contribute to harmful or conspiratorial thinking.
Time: 12:10–12:22 p.m.
Type: Papers
Authors: Eduardo Puerta, Shani Claire Spivak, Michael Correll
Animals’ Entanglement with Technology: A Scoping Review
Animals living alongside humans are navigating a world increasingly filled with technology, yet little is known about how they interface with these systems, whether designed for, with, or around them. Anchored in HCI and ranging across diverse fields, this scoping review analyzes nearly 800 research works to explore the diverse realities of animal-technology research, examining the who, what, why, and how of animal-technology entanglements. Our analysis revealed 11 research objectives and eight types of technologies across six animal contexts. By categorizing the literature based on authors’ aims and intended beneficiaries, we highlight trends, gaps, and ethical considerations. We find that most systems involve animals with limited potential for direct engagement or sense-making. We propose a framework to understand animals as users versus subjects of interactive systems, focusing on feedback, empirical testing, and projected animal benefits. Our findings offer a foundation to understand current and future animal technology research and the diversity of animal user experience.
Time: 4:44–4:56 p.m.
Type: Papers
Authors: Rébecca Kleinberger (+CAMD), Lena Ashooh, Keavan Farsad, Ilyena Hirskyj-Douglas
Live-Streaming-Based Dual-Teacher Classes for Equitable Education: Insights and Challenges From Local Teachers’ Perspective in Disadvantaged Areas
Educational inequalities in disadvantaged areas have long been a global concern. While Information and Communication Technologies (ICTs) have shown great potential in addressing this issue, the unique challenges in disadvantaged areas often hinder the practical effectiveness of such technologies. This paper examines live-streaming-based dual-teacher classes (LSDC) through a qualitative study in disadvantaged regions of China. Our findings indicate that, although LSDC offers students in these regions access to high-quality educational resources, its practical implementation is fraught with challenges. Specifically, we foreground the pivotal role of local teachers in mitigating these challenges. Through a series of situated efforts, local teachers contextualize high-quality lectures to the local classroom environment, ensuring the expected educational outcomes. Based on our findings, we argue that greater recognition and support for the situational practices of local teachers is essential for fostering a more equitable, sustainable, and scalable technology-driven educational model in disadvantaged areas.
Time: 5:08–5:20 p.m.
Type: Papers
Authors: Yuling Sun, Jiaju Chen, Xiaomu Zhou (+College of Professional Studies), Xiaojuan Ma, Bingsheng Yao, Kai Zhang, Liang He, Dakuo Wang (+CAMD)
Learning therapeutic counseling involves significant role-play experience with mock patients, with current manual training methods providing only intermittent granular feedback. We seek to accelerate and optimize counselor training by providing frequent, detailed feedback to trainees as they interact with a simulated patient. Our first application domain involves training motivational interviewing skills for counselors. Motivational interviewing is a collaborative counseling style in which patients are guided to talk about changing their behavior, with empathetic counseling an essential ingredient. We developed and evaluated an LLM-powered training system that features a simulated patient and visualizations of turn-by-turn performance feedback tailored to the needs of counselors learning motivational interviewing. We conducted an evaluation study with professional and student counselors, demonstrating high usability and satisfaction with the system. We present design implications for the development of automated systems that train users in counseling skills and their generalizability to other types of social skills training.
Time: 5:20–5:32 p.m.
Type: Papers
Authors: Ian Steenstra, Farnaz Nouraei, Timothy Bickmore
Bridging Modeling and Domain Expertise Through Visualization: A Case Study on Bread-Making with Bayesian Networks
Decision support tools based on Bayesian Networks (BNs) can represent complex relationships within data in a simple form. We use a BN to model bread-dough behavior, reflecting the bread-making process. Such a network must be validated using domain knowledge, but domain experts often lack the statistical background necessary to understand BNs. We propose a visualization to enable domain experts without technical backgrounds to explain and critically analyze the network. Our platform uses familiar visualizations and focuses on interactive evidence propagation, view customization, and access to the underlying dataset as key features to facilitate understanding. Design workshops and a preliminary evaluation with domain experts and BN specialists showed its potential for exploring and validating the model. We report on lessons learned in creating a visualization that makes complex models accessible to domain experts, and on how its design can foster interdisciplinary dialogue between modelers and domain experts.
Time: 6–8 p.m.
Type: Late-breaking work
Authors: Omi Johnson, Melanie Munch, Kamal Kansou, Cedric Baudrit, Anastasia Bezerianos, Nadia Boukhelifa
Tuesday, April 29
Moving Towards Epistemic Autonomy: A Paradigm Shift for Centering Participant Knowledge
Honorable mention
Justice, epistemology, and marginalization are rich areas of study in HCI. And yet, we repeatedly find platforms and algorithms that push communities further into the margins. In this paper, we propose epistemic autonomy — one’s ability to govern knowledge about themselves — as a necessary HCI paradigm for working with marginalized communities. We establish epistemic autonomy by applying the transfeminine principle of autonomy to the problem of epistemic injustice. To articulate the harm of violating one’s epistemic autonomy, we present six stories from two trans women: a transfem online administrator and a transfem researcher. We then synthesize our definition of epistemic autonomy in research into a research paradigm. Finally, we present two variants of common HCI methods, autoethnography and asynchronous remote communities, that stem from these beliefs. We discuss how CHI is uniquely situated to champion this paradigm and, thereby, the epistemic autonomy of our research participants.
Time: 9:36–9:48 a.m.
Type: Papers
Authors: Leah Admani, Talia Bhatt, Michael Ann DeVito (+CAMD)
KnitA11y: Fabricating Accessible Designs with Machine Knitting
Digital knitting machines provide a fast and efficient way to create garments, but commercial knitting tools are limited to predefined templates. While many knitting design tools help users create patterns from scratch, modifying existing patterns remains challenging. This paper introduces KnitA11y, a digital machine knitting pipeline that enables users to import hand-knitting patterns, add accessibility features, and fabricate them using machine knitting. We support modifications such as holes, pockets, and straps/handles, based on common accessible functional modifications identified in a survey of Ravelry.com. KnitA11y offers an interactive design interface that allows users to visualize patterns and customize the position and shape of modifications. We demonstrate KnitA11y’s capabilities through diverse examples, including a sensory-friendly scarf with a pocket, a hat with a hole for assistive devices, a sock with a pull handle, and a mitten with a pocket for heating pads to alleviate Raynaud’s symptoms.
Time: 10:30–11:10 a.m., 3:40–4:20 p.m.
Type: Late-breaking work
Authors: Tongyan Wang, Hanwen Zhao, Yusuf Shahpurwala, Megan Hofmann, Jennifer Mankoff
Enabling opportunities for young children with disabilities to co-engage in learning activities alongside their non-disabled peers is essential for promoting equity in early childhood education. We investigate how collaborative technology can be designed to support young neurodivergent and neurotypical children in playing together. By integrating theories and methods from design, HCI, and the learning sciences, we iteratively designed, developed, and evaluated a novel tablet application called Incloodle-Classroom (Incloodle for short) that takes into account the needs of neurodiverse groups of children and the adults who support them during play. We deployed Incloodle in a kindergarten classroom of 15 neurodivergent and 16 neurotypical children over a 10-week period. Using interaction analysis, we present rich empirical understandings of how children interacted with each other, with adults, and with Incloodle. In doing so, we contribute new theoretical underpinnings to collaborative and accessible technology design, extending joint media engagement to encompass inclusivity and equity.
Time: 11:10–11:22 a.m.
Type: Journal
Authors: Kiley Sobel, Maitraye Das (+CAMD), Sara M Behbakht, Julie A. Kientz
Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots
The proliferation of LLM-based conversational agents has resulted in excessive disclosure of identifiable or sensitive information. However, existing technologies fail to offer perceptible control or account for users’ personal preferences about privacy-utility tradeoffs due to the lack of user involvement. To bridge this gap, we designed, built, and evaluated Rescriber, a browser extension that supports user-led data minimization in LLM-based conversational agents by helping users detect and sanitize personal information in their prompts. Our studies (N=12) showed that Rescriber helped users reduce unnecessary disclosure and addressed their privacy concerns. Users’ subjective perceptions of the system powered by Llama3-8B were on par with those of the system powered by GPT-4o. The comprehensiveness and consistency of the detection and sanitization emerged as essential factors affecting users’ trust and perceived protection. Our findings confirm the viability of smaller-LLM-powered, user-facing, on-device privacy controls, presenting a promising approach to address the privacy and trust challenges of AI.
Time: 11:10–11:22 a.m.
Type: Papers
Authors: Jijie Zhou, Eryue Xu, Yaoyao Wu, Tianshi Li
Personal health informatics systems have been centered around individual efforts, overlooking the role of social factors in health. Over seven years of research (n = 153), we examined how socially enabled personal informatics systems can support physical activity — a behavior critical to promoting physical and mental health. We prioritized exploring this topic with families in low-socioeconomic status (SES) neighborhoods because they face increased barriers to being active due to inequities. Through our systems development, qualitative studies, and theoretical foundation, we developed the Socio-Cognitive Framework for Personal Health Informatics systems, which shows how five socio-cognitive concepts (aspirations, data exposure, stories, belongingness, and impediments) influence self-efficacy and outcome expectations that are linked to health behavior. We then provide recommendations on how to design and evaluate such systems. We further argue that socially enabled health informatics tools can support marginalized communities in reducing health disparities through the collective efforts of families, neighbors, and peers.
Time: 11:46–11:58 a.m.
Type: Journal
Authors: Herman Saksono (+Bouvé), Andrea G. Parker
Why Can’t Black Women Just Be?: Black Femme Content Creators Navigating Algorithmic Monoliths
Honorable mention
Content creation allows many social media users to support themselves financially through creativity. The “creator economy” empowers individuals to create content (e.g., lifestyle, fitness, beauty) about their interests, hobbies, and daily life. Social media platforms in turn moderate content (e.g., banning accounts, flagging and reporting videos) to create safer online communities. However, Black women, femme, and non-binary content creators have seen their content disproportionately suppressed, limiting their success on these platforms. In this paper, we investigate Black femme content creators’ (BFCC) theories about how their identities impact both how they create content and how that content is subsequently moderated. In our findings, we share the narrow perceptions to which participants felt the algorithm constrains Black creators. We build upon Critical Technocultural Discourse studies and algorithmic folk theories attributed to Black women and non-binary content creators to explore how Black joy can be prioritized online to resist algorithmic monoliths.
Time: 11:46–11:58 a.m.
Type: Papers
Authors: Gianna Williams, Natalie Chen, Michael Ann DeVito, Alexandra To (+CAMD)
We present Persona-L, a novel approach for creating personas using Large Language Models (LLMs) and an ability-based framework, specifically designed to improve the representation of people with complex needs. Traditional methods of persona creation often fall short of accurately depicting the dynamic and diverse nature of complex needs, resulting in oversimplified or stereotypical profiles. Persona-L enables users to create and interact with personas through a chat interface. Persona-L was evaluated through interviews with UX designers (N=6), where we examined its effectiveness in reflecting the complexities of lived experiences of people with complex needs. We report findings that indicate the potential of Persona-L to increase empathy and understanding of complex needs, while also revealing the need for transparency about the data used in persona creation, the role of language and tone, and the need for a more balanced presentation of abilities alongside constraints.
Time: 12:10–12:22 p.m.
Type: Papers
Authors: Lipeipei Sun, Tianzi Qin, Anran Hu, Jiale Zhang, Shuojia Lin, Jianyan Chen, Mona Ali, Mirjana Prpa
Proactive Conversational Agents with Inner Thoughts
One of the long-standing aspirations in conversational AI is for agents to autonomously take initiative in conversations, i.e., to be proactive. This is especially challenging for multi-party conversations. Prior NLP research focused mainly on predicting the next speaker from contexts like preceding conversations. In this paper, we demonstrate the limitations of such methods and rethink what it means for AI to be proactive in multi-party, human-AI conversations. We propose that, just like humans, rather than merely reacting to turn-taking cues, a proactive AI formulates its own inner thoughts during a conversation and seeks the right moment to contribute. Through a formative study with 24 participants and inspiration from linguistics and cognitive psychology, we introduce the Inner Thoughts framework. Our framework equips AI with a continuous, covert train of thought in parallel to the overt communication process, which enables it to proactively engage by modeling its intrinsic motivation to express these thoughts. We instantiated this framework in two real-time systems: an AI playground web app and a chatbot. Through a technical evaluation and user studies with human participants, our framework significantly surpassed existing baselines on aspects like anthropomorphism, coherence, intelligence, and turn-taking appropriateness.
Time: 2:10–2:22 p.m.
Type: Papers
Authors: Xingyu Bruce Liu, Shitao Fang, Weiyan Shi (+CoE), Chien-Sheng Wu, Takeo Igarashi, Xiang ‘Anthony’ Chen
Explaining Complex ML Models to Domain Experts Using LLM & Visualization: An Exploration in the French Breadmaking Industry
Modeling a complex system from data can aid understanding and decision-making. Bayesian networks are one such method that, when accurately constructed, can support inference and help users understand the underlying system that generated the data. However, the outputs of these models are not always intuitive, especially for users who lack a statistical background. In this work, we examine how recent advancements in modern Large Language Models (LLMs) may be applied to help explain machine learning (ML) models. Following a user-centered design methodology, we collaborated with a team of ML modelers and a domain expert in the French breadmaking industry to develop a causal inference application with an integrated chat assistant. From qualitative feedback sessions with the modelers and the domain expert, we note some unique advantages but also a host of challenges in using current LLMs for model explainability.
Time: 4:44–4:56 p.m.
Type: Case studies
Authors: Briggs Twitchell, George Katsirelos, Anastasia Bezerianos, Nadia Boukhelifa
Wednesday, April 30
Examining Student and Teacher Perspectives on Undisclosed Use of Generative AI in Academic Work
With the widespread adoption of Generative Artificial Intelligence (GenAI) tools, ethical issues are being raised around the disclosure of their use in publishing, journalism, or artwork. Recent research has found that college students are increasingly using GenAI tools; however, we know less about when, why, and how they choose to hide or disclose their use of GenAI in academic work. To address this gap, we conducted an online survey (n=97) and interviews with fifteen college students followed by interviews with nine teachers who had experience with students’ undisclosed use of GenAI. Our findings elucidate the strategies students employ to hide their GenAI use and their justifications for doing so, alongside the strategies teachers follow to manage such non-disclosure. We unpack students’ non-disclosure of GenAI through the lens of cognitive dissonance and discuss practical considerations for teachers and students regarding ways to promote transparency in GenAI use in higher education.
Time: 10:12–10:24 a.m.
Type: Papers
Authors: Rudaiba Adnin, Atharva Pandkar, Bingsheng Yao, Dakuo Wang (+CAMD), Maitraye Das (+CAMD)
Lost in Translation: How Does Bilingualism Shape Reader Preferences for Annotated Charts?
Visualizations are powerful tools for conveying information but often rely on accompanying text for essential context and guidance. This study investigates the impact of annotation patterns on reader preferences and comprehension accuracy among multilingual populations, addressing a gap in visualization research. We conducted experiments with two groups fluent in English and either Tamil (n = 557) or Arabic (n = 539) across six visualization types, each varying in annotation volume and semantic content. Full-text annotations yielded the highest comprehension accuracy across all languages, while preferences diverged: English readers favored highly annotated charts, whereas Tamil/Arabic readers preferred full-text or minimally annotated versions. Semantic variations in annotations (L1–L4) did not significantly affect comprehension, demonstrating the robustness of text comprehension across languages. English annotations were generally preferred, with a tendency to think technically in English linked to greater aversion to non-English annotations, though this diminished among participants who regularly switched languages internally. Non-English annotations incorporating visual or external knowledge were less favored, particularly in titles. Our findings highlight cultural and educational factors influencing perceptions of visual information, underscoring the need for inclusive annotation practices for diverse linguistic audiences.
Time: 10:12–10:24 a.m.
Type: Papers
Authors: Anjana Arunkumar, Lace M. Padilla (+CoS), Chris Bryan
Framing Health Information: The Impact of Search Methods and Source Types on User Trust and Satisfaction in the Age of LLMs
Large language model (LLM)-based chatbots are transforming online health information search by offering interactive access to resources, but they raise concerns about inaccurate or harmful content. This study examined how different search methods (search engine, standalone chatbot, and retrieval-augmented chatbot, denoted chatbot+) and source credibility (reputable health websites vs. social media) influence user trust and satisfaction. Key findings include: (a) trust trended higher for chatbots than for search engine results, regardless of source credibility; (b) satisfaction was highest with the standalone chatbot, followed by chatbot+ and the search engine; (c) source type had minimal impact unless sources were compared side by side. Interestingly, in interviews where participants could compare the methods directly, several preferred search engines due to familiarity and response diversity. However, they valued chatbots for their concise, time-saving answers. This study highlights the critical role of user interfaces in fostering trust and satisfaction, emphasizing the need for accurate, responsibly designed chatbots for health information dissemination.
Time: 10:30–11:10 a.m., 3:40–4:20 p.m.
Type: Late-breaking work
Authors: Hye Sun Yun, Timothy Bickmore
UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design
Usability testing is a fundamental yet challenging research method for user experience (UX) researchers to evaluate a web design. Recent advances in Large Language Model-simulated Agent (LLM Agent) research inspired us to design UXAgent to support UX researchers in evaluating and iterating on their usability testing study design before they conduct the real human-subject study. Our system features an LLM Agent module and a universal browser connector module so that UX researchers can automatically generate thousands of simulated users to test the target website. The system can generate UX study results in qualitative (e.g., interviewing how an agent thinks), quantitative (e.g., number of actions), and video recording formats for UX researchers to analyze. Through a heuristic user evaluation with five UX researchers, participants praised the innovation of our system but also expressed concerns about the future of UX studies with LLM Agents.
Time: 10:30–11:10 a.m., 3:40–4:20 p.m.
Type: Late-breaking work
Authors: Yuxuan Lu, Bingsheng Yao, Hansu Gu, Jing Huang, Jessie Wang, Laurence Li, Jiri Gesi, Qi He, Toby Jia-Jun Li, Dakuo Wang
In Suspense About Suspensions? The Relative Effectiveness of Suspension Durations on a Popular Social Platform
It is common for digital platforms to issue consequences for behaviors that violate Community Standards policies. However, there is limited evidence about the relative effectiveness of consequences, particularly lengths of temporary suspensions. This paper analyzes two massive field experiments (N1 = 511,304; N2 = 262,745) on Roblox that measure the impact of suspension duration on safety- and engagement-related outcomes. The experiments show that longer suspensions are more effective than shorter ones at reducing reoffense rate, the number of consequences, and the number of user reports. Further, they suggest that the effect of longer suspensions on reoffense rate wanes over time, but persists for at least 3 weeks. Finally, they demonstrate that longer suspensions are more effective for first-time violating users. These results have significant implications for theory around digitally-enforced punishments, understanding recidivism online, and the practical implementation of product changes and policy development around consequences.
Time: 11:46–11:58 a.m.
Type: Papers
Authors: Jeffrey Gleason, Alex Leavitt, Bridget Daly
Despite recent advances in cancer treatments that prolong patients’ lives, treatment-induced cardiotoxicity (i.e., the various forms of heart damage caused by cancer treatments) has emerged as one major side effect. The clinical decision-making process around cardiotoxicity is challenging: early symptoms may appear in non-clinical settings and are too subtle to be noticed until life-threatening events occur at a later stage, and clinicians already carry a high workload focused on the cancer treatment itself, with no additional effort to spare for the cardiotoxicity side effect. Our project starts with a participatory design study with 11 clinicians to understand their decision-making practices and their feedback on an initial design of an AI-based decision-support system. Based on their feedback, we then propose a multimodal AI system, CardioAI, that integrates wearables data and voice assistant data to model a patient’s cardiotoxicity risk and support clinicians’ decision-making. We conclude our paper with a small-scale heuristic evaluation with four experts and a discussion of future design considerations.
Time: 11:58 a.m.–12:10 p.m.
Type: Papers
Authors: Siyi Wu, Weidan Cao, Shihan Fu, Bingsheng Yao, Ziqi Yang, Changchang Yin, Varun Mishra (+Bouvé), Daniel Addison, Ping Zhang, Dakuo Wang (+CAMD)
Promises, Promises: Understanding Claims Made in Social Robot Consumer Experiences
Social robots are a class of emerging smart consumer electronics devices that promise sophisticated experiences featuring emotive capabilities, artificial intelligence, conversational interaction, and more. With unique risk factors like emotional attachment, little is known on how social robots communicate these promises to consumers and whether they adequately deliver upon them within their overall product experiences prior to and during user interaction. Animated by a consumer protection lens, this paper systematically investigates manufacturer claims made for four commercially available social robots, evaluating these claims against the provided user experience and consumer reviews. We find that social robots vary widely in the manner and extent to which they communicate intelligent features and the supposed benefits of these features, while consumer perspectives similarly include a wide range of perceptions on robot and AI performance, capabilities, and product frustrations. We conclude by discussing social robots’ unique characteristics and propensities for consumer risk, and consider implications for key stakeholders like regulators, developers, and researchers of social robots.
Time: 12:10–12:22 p.m.
Type: Papers
Authors: Johanna Gunawan, Sarah Elizabeth Gillespie, David Choffnes, Woodrow Hartzog, Christo Wilson
The Impact of Generative AI Coding Assistants on Developers Who Are Visually Impaired
The rapid adoption of generative AI in software development has impacted the industry, yet its effects on developers with visual impairments remain largely unexplored. To address this gap, we used an Activity Theory framework to examine how developers with visual impairments interact with AI coding assistants. For this purpose, we conducted a study where developers who are visually impaired completed a series of programming tasks using a generative AI coding assistant. We uncovered that, while participants found the AI assistant beneficial and reported significant advantages, they also highlighted accessibility challenges. Specifically, the AI coding assistant often exacerbated existing accessibility barriers and introduced new challenges. For example, it overwhelmed users with an excessive number of suggestions, leading developers who are visually impaired to express a desire for “AI timeouts.” Additionally, the generative AI coding assistant made it more difficult for developers to switch contexts between the AI-generated content and their own code. Despite these challenges, participants were optimistic about the potential of AI coding assistants to transform the coding experience for developers with visual impairments. Our findings emphasize the need to apply activity-centered design principles to generative AI assistants, ensuring they better align with user behaviors and address specific accessibility needs. This approach can enable the assistants to provide more intuitive, inclusive, and effective experiences, while also contributing to the broader goal of enhancing accessibility in software development.
Time: 2:22–2:34 p.m.
Type: Papers
Authors: Claudia Flores-Saviaga, Benjamin V. Hanrahan, Kashif Imteyaz, Steven Clarke, Saiph Savage
Feasibility and Utility of Multimodal Micro Ecological Momentary Assessment on a Smartwatch
Micro ecological momentary assessments (μEMAs) allow participants to answer a short survey quickly with a tap on a smartwatch screen or a brief speech input. The short interaction time and low cognitive burden enable researchers to collect self-reports at high frequency (once every 5–15 minutes) while maintaining participant engagement. Systems with a single input modality, however, may carry different contextual biases that could affect compliance. We combined two input modalities to create a multimodal μEMA system, allowing participants to choose between speech or touch input to self-report. To investigate system usability, we conducted a 7-day field study where we asked 20 participants to label their posture and/or physical activity once every five minutes throughout their waking day. Despite the intense prompting interval, participants responded to 72.4% of the prompts. We found participants gravitated towards different modalities based on personal preferences and contextual states, highlighting the need to consider these factors when designing context-aware multimodal μEMA systems.
Time: 2:34–2:46 p.m.
Type: Papers
Authors: Ha Le, Veronika Potter, Rithika Lakshminarayanan, Varun Mishra (+Bouvé), Stephen Intille (+Bouvé)
GenieWizard: Multimodal App Feature Discovery with Large Language Models
Multimodal interactions are more flexible, efficient, and adaptable than graphical interactions, allowing users to execute commands beyond simply tapping GUI buttons. However, the flexibility of multimodal commands makes it hard for designers to prototype and provide design specifications for developers. It is also hard for developers to anticipate what actions users may want. We present GenieWizard, a tool to aid developers in discovering potential features to implement in multimodal interfaces. GenieWizard supports user-desired command discovery early in the implementation process, streamlining the development process. GenieWizard uses an LLM to generate potential user interactions and parse these interactions into a form that can be used to discover the missing features for developers. Our evaluations showed that GenieWizard can reliably simulate user interactions and identify missing features. Also, in a study (N = 12), we demonstrated that developers using GenieWizard can identify and implement 42% of the missing features of multimodal apps compared to only 10% without GenieWizard.
Time: 4:20–4:32 p.m.
Type: Papers
Authors: Jackie (Junrui) Yang, Yingtian Shi, Chris Gu, Zhang Zheng, Anisha Jain, Tianshi Li, Monica Lam, James A. Landay
Client-Service Representatives (CSRs) are vital to organizations. Frequent interactions with disgruntled clients, however, disrupt their mental well-being. To help CSRs regulate their emotions while interacting with uncivil clients, we designed Care-Pilot, an LLM-powered assistant, and evaluated its efficacy, perception, and use. Our comparative analyses between 665 human- and Care-Pilot-generated support messages highlight Care-Pilot’s ability to adapt to and demonstrate empathy in various incivility incidents. Additionally, 143 CSRs assessed Care-Pilot’s empathy as more sincere and actionable than human messages. Finally, we interviewed 20 CSRs who interacted with Care-Pilot in a simulation exercise. They reported that Care-Pilot helped them avoid negative thinking, recenter thoughts, and humanize clients, showing potential for bridging gaps in coworker support. Yet, they also noted deployment challenges and emphasized the indispensability of shared experiences. We discuss future designs and societal implications of AI-mediated emotional labor, underscoring empathy as a critical function for AI assistants for worker mental health.
Time: 4:32–4:44 p.m.
Type: Papers
Authors: Vedant Das Swain, Qiuyue “Joy” Zhong, Jash Rajesh Parekh, Yechan Jeon, Roy Zimmerman, Mary P Czerwinski, Jina Suh, Varun Mishra (+Bouvé), Koustuv Saha, Javier Hernandez
From Locked Rooms to Open Minds: Escape Room Best Practices to Enhance Reflection in Extended Reality Learning Environments
Extended reality (XR) learning environments result in greater knowledge gains when coupled with opportunities to reflect on one’s actions and learning. However, when and how one should prompt reflection in XR learning environments (XRLEs) to effectively enhance learning, without breaking immersion, remains an open question. In this work, we argue that we can extract insights on how to design effective, immersive reflection for XRLEs from the expertise of escape room game masters (GMs) who regularly provide reflective hints and prompts in complex, immersive problem solving environments. To explore what we can learn from GMs, we conducted exploratory semi-structured interviews with 13 escape room GMs and, via iterative open coding, captured their best practices in how they provide hints and give nudges to escape room players.
Time: 4:32–4:44 p.m.
Type: Papers
Authors: Erica Kleinman (CAMD), Rana Jahani, Eileen McGivney (CAMD), Seth Cooper, Casper Harteveld (+CAMD)
Promoting Prosociality via Micro-acts of Joy: A Large-Scale Well-Being Intervention Study
Prosociality has been well-documented to positively impact mental, social, and physical well-being. However, existing studies of interventions for promoting prosociality have limitations such as small sample sizes or unclear benchmarks. To address this gap, we conducted a global-scale well-being intervention deployment study, BIGJOY, with more than 18,000 participants from 172 countries and regions. The week-long BIGJOY intervention consists of seven daily micro-acts (i.e., brief actions that require minimal effort), each adapted from validated positive psychology interventions. The analyses of large-scale intervention data reveal unique insights into the impact of well-being micro-acts across diverse populations, patterns of responses, effectiveness of specific micro-acts and their nuanced impacts across different populations, linkages between improvements in prosociality and in well-being, as well as the potential for machine learning to predict changes in prosociality. This study offers valuable insights into a set of design guidelines for future well-being and prosociality interventions. We envision our work as a stepping stone towards future large-scale prosociality interventions that foster a more unified and compassionate world.
Time: 4:44–4:56 p.m.
Type: Papers
Authors: Hitesh Goel, Yoobin Park, Jin Liou, Darwin A. Guevarra, Peggy Callahan, Jolene Smith, Bingsheng Yao, Dakuo Wang (+CAMD), Xin Liu, Daniel McDuff, Noemie Elhadad, Emiliana Simon-Thomas, Elissa Epel, Xuhai “Orson” Xu
Inaccessible and Deceptive: Examining Experiences of Deceptive Design with People Who Use Visual Accessibility Technology
Deceptive design patterns manipulate people into actions to which they would otherwise object. Despite growing research on deceptive design patterns, limited research examines their interplay with accessibility and visual accessibility technology (e.g., screen readers, screen magnification, braille displays). We present an interview and diary study with 16 people who use visual accessibility technology to better understand experiences with accessibility and deceptive design. We report participant experiences with six deceptive design patterns, including designs that are intentionally deceptive and designs where participants describe accessibility barriers unintentionally manifesting as deceptive, together with direct and indirect consequences of deceptive patterns. We discuss intent versus impact in accessibility and deceptive design, how access barriers exacerbate harms of deceptive design patterns, and impacts of deceptive design from a perspective of consequence-based accessibility. We propose that accessibility tools could help address deceptive design patterns by offering higher-level feedback to well-intentioned designers.
Time: 5:32–5:44 p.m.
Type: Papers
Authors: Aaleyah Lewis, Jesse J Martinez, Maitraye Das (+CAMD), James Fogarty
Thursday, May 1
Computational thinking (CT) is regarded as a fundamental twenty-first century skill and has been implemented in many early childhood education curricula. Yet, the needs of neurodivergent children have remained largely overlooked in the extensive research and technologies built to foster CT among children. To address this, we investigated how to support neurodiverse (i.e., groups involving neurodivergent and neurotypical) preschoolers aged 3–5 in learning CT concepts. Grounded in interviews with six teachers, we deployed an age-appropriate, programmable robot called KIBO in two preschool classrooms involving 12 neurodivergent and 17 neurotypical children for eight weeks. Using interaction analysis, we illustrate how neurodivergent children found enjoyment in assembling KIBO and learned to code with it while engaging in cooperative and competitive play with neurotypical peers and adults. Through this, we discuss the accessible adaptations needed to enhance CT among neurodivergent preschoolers and ways to reimagine technology-mediated social play for them.
Time: 9:12–9:24 a.m.
Type: Papers
Authors: Maitraye Das (+CAMD), Megan Tran, Amanda Chi-han Ong, Julie A. Kientz, Heather Feldner