Date: Jan 25th, 2024 12:30 PM
Speaker: Alex Leitch, Co-Director of the MS in Human-Computer Interaction, University of Maryland
Celia Chen, 2nd year PhD student in the Information Studies program, University of Maryland
Location: HBK 2105 and Zoom
Watch Here
Abstract:
Novel HCI devices are prone to planned obsolescence, which sometimes causes clever ideas and great sensor packages to be trashed before being thoroughly explored. This is a particular problem in closed-source hardware designed with strongly opinionated interfaces. The Myo armband by Thalmic Labs packed high-grade EMG sensors into a successful, compact wearable, before being discontinued in 2018. This talk covers how we repurposed a Myo armband to take advantage of its subtle muscle tracking to activate a pneumatic sculpture made from materials that are similarly regarded as junk in the making. This creative hacking approach is a promising way to thwart planned obsolescence, which is especially important when it comes to HCI and accessibility devices.
By interfacing a Myo to a Raspberry Pi 3B+, we enabled forearm muscles to trigger air valves and animate assemblies of latex, bamboo, and PLA. These forms were then programmed to maintain peristalsis, only to dramatically deflate and flop about in response to custom gesture control. Though the interactions aim more for surprise and delight than technical polish, this comedy of errors examines the latent expressiveness of both the obsolete Myo hardware and everyday trash. It also allowed us to explore which software systems, exactly, would be required to take further advantage of the Myo system in an open-hardware environment. By finding fresh ways to work with what’s on our shelves, we hope to squeeze more value from devices otherwise destined for landfills.
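The gesture-to-valve control described above could be sketched roughly as follows. This is a minimal illustration, not the speakers' actual code: the gesture names, the valve interface, and the mapping are all hypothetical, with the Raspberry Pi GPIO hardware stubbed out so the dispatch logic stands alone.

```python
# Hypothetical sketch of mapping recognized Myo gestures to pneumatic valves.
# The real system pairs a Myo armband with a Raspberry Pi 3B+ over a serial
# link; here the hardware layer is a stub so the control logic is runnable.

class ValveBank:
    """Stand-in for a set of GPIO-driven air valves."""
    def __init__(self, count):
        self.state = [False] * count  # False = closed, True = open

    def set(self, index, is_open):
        self.state[index] = is_open

# Gesture labels are illustrative, not the Myo SDK's actual identifiers.
GESTURE_ACTIONS = {
    "fist": lambda v: [v.set(i, True) for i in range(len(v.state))],
    "fingers_spread": lambda v: [v.set(i, False) for i in range(len(v.state))],
    "wave_in": lambda v: v.set(0, True),  # pulse one segment of the sculpture
}

def handle_gesture(name, valves):
    """Dispatch a recognized gesture to its valve action; ignore unknowns."""
    action = GESTURE_ACTIONS.get(name)
    if action:
        action(valves)

valves = ValveBank(3)
handle_gesture("fist", valves)            # inflate: all valves open
handle_gesture("fingers_spread", valves)  # deflate: the dramatic flop
```

In a deployment, `handle_gesture` would be wired to the Myo's classification callback and `ValveBank.set` would toggle GPIO pins driving the solenoids; both bindings are omitted here.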
Bios: Alex Leitch: Alex Leitch investigates human-computer interaction through a blend of critical scholarship, hands-on pedagogy, and interactive installation art. They currently serve as Co-Director of the MS in Human-Computer Interaction at the University of Maryland, where they have taught courses on programming, interaction design, and digital fabrication since 2019. Alex’s installations invite public engagement while probing the embedded values in sociotechnical systems. As an interaction designer, they analyze issues like gender representation in engineering spaces, legibility in code, and truth in algorithms. Currently pursuing a PhD in Information Studies, their research examines the labour consequences of browser infrastructure underpinning today’s dominant digital interfaces. Their praxis fuses empirical studies with speculative artifacts that reimagine society’s relationship to emerging tech. For this project, Alex made the sculpture and debugged key elements of the software to ensure the serial port communication worked properly.
Celia Chen:
Celia Chen is a 2nd year PhD student in the Information Studies program at the University of Maryland. They hold BS and MS degrees in Cognitive and Psychological Data Science from Rensselaer Polytechnic Institute, where they worked with the RPIrates on computational text analysis of political tweets and creating predictive models using WHO data to estimate early COVID-19 infection spread under the advisement of Dr. James Hendler. Concurrently advised by Dr. Alicia Walf, they wrote protocols for using Fitbits and other biometric sensors for human subjects research, gaining experience with wearable sensors and physiological data. Currently advised by Dr. Jen Golbeck, their personal research explores user identity construction and language use in online spaces. For this project, Celia handled the coding to enable communication between the Myo armband, Raspberry Pi, and pneumatic robot, drawing on their background in cognitive science and human sensor input.
Date: Feb 1st, 2024 12:30 PM
Talk Title: Becoming Teammates: Designing Assistive, Collaborative Machines
Speaker: Chien-Ming Huang, John C. Malone Assistant Professor, Department of Computer Science, Johns Hopkins University
Location: HBK 2105 and Zoom
Watch Here
Abstract: The growing power in computing and AI promises a near-term future of human-machine teamwork. In this talk, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines aimed to assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance, 2) methods for detecting, and broadly managing, errors in collaboration, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive, collaborative machines to support older adults aging in place.
Bio: Chien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier venues in HRI, HCI, and robotics including Science Robotics, HRI, CHI, and CSCW. His research has received media coverage from MIT Technology Review, Tech Insider, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
Date: Feb 8th, 2024 12:30 PM
Location: HBK 2105 and Zoom
This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!
How do lightning talks work? Typically, people give a 4-5 minute "presentation" -- this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or "ask" that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.
Date: Feb 15th, 2024 12:30 PM
Speaker: Jaeyeon Lee, Assistant Professor, Computer Science and Engineering, UNIST
Location: HBK 2105 and Zoom
Abstract: Computers became small yet powerful enough to be worn and provide information to the user in their daily life. However, interacting with those computers is still challenging, primarily due to their small and rigid form factors. This is problematic since one of the significant reasons for wearing computers is to access information from anywhere comfortably. This talk introduces studies enriching expressivity and natural interactions on small computers using the human sense of touch. It explores how wearable tactile displays can offer enhanced efficiency, comfort, and ease of use. Specifically, it compares design options for a tactile display on the backside of a smartwatch in terms of information transfer. Incorporating multiple distinct tactile sensations can increase the capacity for conveying information. Furthermore, non-contact tactile displays present an opportunity to enhance the wearability of these devices and deliver intuitive spatiotemporal patterns on the face. Finally, the findings from these studies have broader implications for future computing environments, including ultra-thin skin interfaces and technologies such as VR and AR.
Bio: Jaeyeon Lee is an Assistant Professor in Computer Science and Engineering at UNIST (Ulsan National Institute of Science and Technology). She earned her B.Eng. in Control Engineering from KwangWoon University, M.S. in Electrical Engineering, and Ph.D. in Computer Science from KAIST. Her research in Human-Computer Interaction focuses on physical user interfaces enabling rich and intuitive haptic interaction on future computers. Her research work has been published in leading venues in the field of HCI, including ACM CHI and ACM UIST. She has served on the Steering Committee, Organizing Committee, and Program Committee of HCI and Haptics research communities. She is a recipient of EECS Rising Stars in Korea, GradUS Global Scholarship, and NAVER Ph.D. Fellowship.
Date: Feb 22nd, 2024 12:30 PM
Speaker: Veronica Rivera, Embedded Ethics Postdoctoral Scholar, Stanford University
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Algorithms increasingly mediate interactions that cross the digital-physical divide, creating both online and offline safety risks. In this talk, I will share my work on understanding safety in algorithmically-mediated offline introductions (AMOIs). In AMOIs, digital platforms use algorithms to match strangers for offline meetups (e.g., online dating, gig work). Thus, harm in AMOIs transcends digital boundaries into the physical world, raising questions about how to measure harm and who bears responsibility. In my first study, I examine how women gig workers’ experiences with safety are shaped by both individual risk factors and platform design. In my second study, I systematize harms and protective behaviors across gig workers and online daters and measure the prevalence of different harms and behaviors. Ultimately, my work shows that users who engage in seemingly disparate kinds of AMOIs actually share many safety concerns and protective behaviors.
Bio: Veronica Rivera is an Embedded Ethics Postdoctoral Scholar at Stanford University where she works with the Empirical Security Research Group, the Institute for Human-Centered AI, and the Center for Ethics in Society. Her research lies at the intersection of HCI and security. She studies the digital safety needs and challenges of marginalized and vulnerable populations. She has a PhD in computational media from the University of California, Santa Cruz and a BS in computer science and math from Harvey Mudd College. She was previously a visitor at the Max Planck Institute for Software Systems and at the Center for Privacy and Security of Marginalized and Vulnerable Populations at the University of Florida.
Date: Feb 29th, 2024 12:30 PM
Speaker: Yvette Wohn, Associate professor of Informatics, New Jersey Institute of Technology, Director of the Social Interaction Lab
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Online harassment is a problem that we still have been unable to solve in the social media age of Web 2.0. Moreover, as we move deeper into Web 3.0, which includes 3D virtual worlds, moderation moves beyond content to include behavioral components such as embodied interactions.
While much of the research in computing focuses on how to deal with bad content through technological advancement, this talk presents research from the past few years that focuses on the social complexities involved when communities, rather than companies, try to self-moderate.
Bio: Dr. Wohn (she/her) is an associate professor of Informatics at New Jersey Institute of Technology and director of the Social Interaction Lab (socialinteractionlab.com). Her research is in the area of Human-Computer Interaction (HCI), where she studies the characteristics and consequences of social interactions in online environments such as virtual worlds and social media. Her main projects examine 1) moderation, online harassment, and the creation/maintenance of online safe spaces and 2) social exchange in digital economies.
Date: Mar 7th, 2024 12:30 PM
Speaker: Albert Park, Assistant Professor in the Department of Software and Information Systems, College of Computing and Informatics, University of North Carolina-Charlotte
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Today, I want to discuss how we can leverage the vast amount of data from social media to gain insights into mental health and community engagement. I will start by exploring the impact of online depression communities. While initial concerns focused on the potential for negative emotion to spread, research reveals a surprising trend: members often experience positive changes in their emotional language use and language impairment over time. This suggests that these communities can hold unexpected benefits for mental well-being.
Building on this understanding, I’ll introduce a study examining how to encourage active participation in online health communities. We delve into the concept of homophily, which describes our natural tendency to connect with those who are similar to us; here, we look at language patterns. Our findings across diverse online communities show that shared vocabulary significantly predicts future interaction among members. This holds valuable implications for fostering deeper engagement and meaningful peer support by harnessing the power of shared language.
Bio: I am Albert Park, currently an Assistant Professor in the Department of Software and Information Systems within the College of Computing and Informatics at the University of North Carolina-Charlotte. I was a National Institutes of Health-National Library of Medicine Post-Doctoral Fellow at the University of Utah. I hold bachelor’s and master’s degrees in Computer Science from Virginia Tech and a Ph.D. in Biomedical and Health Informatics (2015) from the University of Washington. My research focuses on the analysis of social interactions and social networks using modern data analysis and the development of novel computational approaches to study social interactions and relationships in the context of health.
Date: Mar 14th, 2024 12:30 PM
Speaker: Stephanie Valencia Valencia, PhD, Assistant professor, College of Information Studies, University of Maryland
Location: HBK 2105 and Zoom
Abstract: Agency and communication are essential to our personal development; we advance our individual goals by communicating them. Nonetheless, agency is not a fixed property. Many individuals who use speech-generating devices to communicate encounter social constraints and barriers that reduce their agency in conversation, including how much they can say, how they can say it, and when they can say it. In this BBL talk, I will argue that using agency as a design framework can help us generate accessible communication experiences and center the perspectives of people with disabilities in the design process of new technology. Through empirical studies and co-design with people with disabilities, I explore how different technology materials can support their agency in conversation. In doing so, I will present accessible design methods as well as new design guidelines for augmented communication using automated transcription, physical artefacts, and AI-based generative language tools.
Bio: Stephanie Valencia, PhD, is an assistant professor at the College of Information Studies at the University of Maryland. Dr. Valencia is a Human-Computer Interaction researcher who builds accessible technologies that are grounded in behavioral theory, co-designed with people with disabilities, and deployed to users for impact. Her research focuses on designing for accessibility and conversational agency when using assistive technologies such as augmentative and alternative communication (AAC) devices that support communication for users with motor and speech disabilities. Dr. Valencia uses participatory design to explore how different design materials such as AI and non-anthropomorphic robots can be used to create agency-increasing AAC systems and builds and deploys these systems to evaluate their impact. Dr. Valencia received her PhD and MS in Human-Computer Interaction at Carnegie Mellon University and a BS in Biomedical Engineering from EIA and CES university in Colombia. She has been awarded a Postgraduate Fellowship at the Yale School of Medicine, the MIT Technology Review Top 35 Innovators Under 35 Award in Latin America, and the Ada Lovelace fellowship from the Open Source Hardware Association.
Date: Mar 28th, 2024 12:30 PM
Speaker: Muhammad Adamu, Senior Research Associate, Imagination Lancaster Digital Good SIG, Lancaster University, UK
Location: HBK 2105 and Zoom
Watch Here!
Abstract: What is this thing AI? Is it the possible mimicry of the technical, social, or cultural intelligence of the human or the actuality of superintelligence? But wait, which Human, the Hegelian Man-as-Human or the Wynterian Beyond MAN, towards the Human? We don’t know! This thing AI that was presented to us during the short-lived summers and long cold winters will solve the “common sense” problem, i.e., model human knowledge of the everyday, what Heideggerian phenomenology calls “Being-in-the-World”.
In this talk, I will introduce a particular dimension of the Heideggerian critique of AI, i.e., enframing [Gestell] and standing reserve [Bestand]. In particular, I will adopt the concept of standing reserve to articulate a particular relation of the African citizen – a user, a client, a producer, or a labourer – within a largely Eurocentric AI landscape, and attempt to demonstrate how the existing institutional conception of the African as an objectifiable subject that can be resourced for capital will inform (and reform) the African orientation of the future of AI. In short, I will argue that the African – just as Kalluri and colleagues’ (2023) “Surveillance AI pipeline” paper has demonstrated that humans are conceived as entities under the umbrella terms of “objects” or “regions of interest” in computer vision research – is historically and continuously co-opted as standing reserve for the total mobilization of technocratic ideals: to be catalogued, computed, and used as a resource that is disposable and replaceable.
Bio: Muhammad Adamu is a Senior Research Associate for Imagination Lancaster Digital Good SIG at Lancaster University, UK. Muhammad is strongly associated with the “African perspective” in Human-computer interaction, and more recently the social futures of artificial intelligence. His current interdisciplinary research focuses on establishing the themes of “Good AI societies” and “AI for Good” in Africa and has been funded by the Tertiary Education Trust Fund (TETFUND) and Petroleum Technology Development Fund (PTDF), Nigeria and the UKRI Research England.
Date: Apr 4th, 2024 12:30 PM
Speaker: Mako Hill, Social scientist and Technologist
Location: HBK 2105 and Zoom
Watch Here
Abstract: After increasing rapidly over seven years, the number of active contributors to English Wikipedia peaked in 2007 and has been in decline since. A body of evidence will be presented that suggests English Wikipedia’s pattern of growth and decline appears to be a general feature of “peer production”—the model of collaborative production that has produced millions of wikis, free/open source software projects, websites like OpenStreetMap, and more.
It will be argued that this pattern of growth, maturity, and decline is caused not by newcomers who have stopped showing up, but by communities that have become less open to the newcomers who do arrive. A theoretical model and a range of empirical evidence will be provided that suggest why this surprising dynamic may be a rational approach to the shifting governance challenges faced by digital knowledge commons.
Bio: Benjamin Mako Hill is a social scientist and technologist. In both roles, he works to understand the social dynamics that shape online communities. His work focuses on communities engaged in the peer production of digital public goods—like Wikipedia and Linux. He is an Associate Professor in the Department of Communication at the University of Washington and a founding member of the Community Data Science Collective. He is also a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. He has also been an activist, developer, contributor, and leader in the free and open source software and free culture movements for more than two decades as part of the Debian, Ubuntu, and Wikimedia projects. During the 2023-2024 academic year, he is a Fellow at the Center for Information Technology Policy at Princeton University.
Date: Apr 11th, 2024 12:30 PM
Speaker: Divya Ramesh, PhD candidate, Computer Science and Engineering, University of Michigan, Ann Arbor
Location: HBK 2105 and Zoom
Abstract: Algorithmic accountability ensures that the design, development, and use of AI systems have public approval and trust. However, barring a few success stories, accountability as a governance mechanism to ensure public trust in AI remains elusive. Recent work suggests that accountability mechanisms may be effective only in a handful of democratic, rich, and Western contexts. Algorithmic accountability also relies on the support of a critical public, watchdog journalism, and strong institutions; these conditions are not universally available. How does algorithmic accountability mediate in contexts where one or more of these preconditions do not hold? How do vulnerable users of AI systems perceive their relations to algorithmic accountability in such contexts?
In this talk, I will present two examples from my research that give a glimpse into how AI-based decisions govern financially vulnerable communities in India and the US; and how such communities perceive their relations to algorithmic accountability. Placing the two case studies in conversation with one another does two things: 1) nudges us to examine the political nature of algorithmic accountability, and 2) raises questions about the extent to which our current accountability proposals address the needs of vulnerable communities around the world. I will conclude by re-centering the politics of technical questions in human-centered AI, and proposing alternative governance approaches that could better address the needs of vulnerable communities and rebuild their trust in AI.
Bio: Divya Ramesh is a PhD candidate in Computer Science and Engineering at the University of Michigan, Ann Arbor, where she examines the social, ethical, and design implications of AI in high stakes decision-making domains. Her dissertation situates empirical work on financially vulnerable communities in India and the US within theories of human-computer interaction and science and technology studies to inform design and policy for Responsible AI.
Divya’s work has appeared in both HCI and AI venues such as ACM CHI, FAccT, DIS, TOCHI, and AAMAS. She received a Best Paper nomination and the Pragnesh Jay Modi Best Student Paper award at AAMAS 2020. Divya has also contributed to public conversations via media outlets such as CNBC TV18 and TechCrunch, one of which helped shape policy outcomes for Google in 2023. Divya was recognized as an inaugural Quad Fellow by the governments of Australia, India, Japan, and the US in 2023. She was also recently recognized as a Barbour Scholar by the University of Michigan for 2024-25. In a previous life, Divya was a computer vision engineer at a startup called CloudSight, where she architected the company’s first human-AI interaction pipeline for analyzing visual content in real time.
Date: Apr 18th, 2024 12:30 PM
Speaker: Alex Wen, Postdoctoral researcher, Computer Science Department, Virginia Tech
Location: HBK 2105 and Zoom
Abstract: This talk will explore the implicit struggles teenagers face in using computing technologies, focusing on the challenges presented by e-learning tools and social virtual reality (VR) applications. It uncovers that teenagers frequently conceal their emotional distress arising from learning challenges, exacerbated by the accessibility issues of e-learning tool designs. Furthermore, within social VR settings, teenagers demonstrate overconfidence in their protective strategies and a misplaced sense of safety, issues arising from inadequate design of interaction safety measures. These findings highlight the unique needs of teenagers, distinct from those of adults, shaped by their specific social situations and evolving mental models. Through a sociological lens, I aim to deepen the understanding of these needs and identify solutions that truly serve teenagers. My goal is to design digital environments that are both accessible and safe for all young users, catering specifically to the nuanced demands of teenagers.
Bio: Dr. Zikai Alex Wen is a researcher in human-centered computing committed to improving how young users (i.e., children, teenagers, and special education students) engage with AI agents. His research hones in on two critical challenges: (1) safeguarding usable privacy and security in AI interactions, and (2) dismantling barriers to AI accessibility for learners with neurodevelopmental disabilities. By focusing on these pivotal areas, he aims to create more engaging, inclusive, and safe AI agents that cater to the unique needs of our younger generation. His research has been published at prestigious CS conferences such as ACM CHI, ACM ASSETS, ACM CCS, and IEEE S&P.
He is a postdoctoral researcher in the Computer Science Department at Virginia Tech, working with Prof. Yaxing Yao. He is fortunate to have worked as a postdoc at the HKUST Visualization Lab. He received his Ph.D. degree in computer science from Cornell University. Before that, he received his joint First-class Honours bachelor’s degree in computer science from the University of Strathclyde, Glasgow, U.K., and BUCT, Beijing, China.
Date: Apr 25th, 2024 12:30 PM
Speaker: Morgan Klaus Scheuerman, Postdoctoral Associate, Information Science, University of Colorado Boulder
Abstract: Computer vision technologies have been increasingly scrutinized in recent years for their propensity to cause harm. Broadly, the harms of computer vision focus on demographic biases (favoring one group over another) and categorical injustices (through erasure, stereotyping, or problematic labels). Prior work has focused on both uncovering these harms and mitigating them, through, for example, better dataset collection practices and guidelines for more contextual data labeling. There is opportunity to further understand how human identity is embedded into computer vision not only across these artifacts, but also across the network of human workers who shape computer vision systems. Further, given computer vision is designed by humans, there is ample opportunity to understand how human positionality influences the outcomes of computer vision systems. In this talk, I present work on how identity is implemented in computer vision, from how identity is represented in models and datasets to how different worker positionalities influence the development process. Specifically, I showcase how representations of gender and race in computer vision are exclusionary, and represent problematic histories present in colonialist worldviews. I also highlight how traditional tech workers enact a positional power over data workers in the global south. Through these findings, I demonstrate how identity in computer vision moves from something more open, contextual, and exploratory to a completely closed, binary and prescriptive classification.
Bio: Morgan Klaus Scheuerman is a Postdoctoral Associate in Information Science at University of Colorado Boulder and a 2021 MSR Research Fellow. His research focuses on the intersection of technical infrastructure and marginalized identities. In particular, he examines how gender and race characteristics are embedded into algorithmic infrastructures and how those permeations influence the entire system. His work has received multiple best paper awards and honorable mentions at CHI and CSCW. He earned his MS degree in Human-Centered Computing from University of Maryland Baltimore County and his BA in Communication & Media Studies (Minor Gender & Sexuality Studies) from Goucher College.
Date: May 2nd, 2024 12:30 PM
Speaker: Merrie Morris, Director for Human-AI Interaction Research, Google DeepMind
Location: HBK 2105 and Zoom
Abstract: We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI).
In this talk, I will define what I mean (and what I do NOT mean) by “AGI.” I will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods for envisioning novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end-user? How do we bridge the “gulf of evaluation” when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side-effects on society when released into the wild?
I will close by reflecting on the relationship between HCI and AI research. Typically, HCI and other sociotechnical domains are not considered as core to the ML research community as areas like model building. However, I argue that research on Human-AI Interaction and the societal impacts of AI is vital and central to this moment in computing history. HCI must not become a “second class citizen” to AI, but rather be recognized as fundamental to ensuring the path to AGI and beyond is a beneficial one.
Bio: Meredith Ringel Morris is Director for Human-AI Interaction Research at Google DeepMind. Prior to joining DeepMind, she was Director of the People + AI Research team in Google Research’s Responsible AI division. She also previously served as Research Area Manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research. In addition to her industry role, Dr. Morris has a faculty appointment at the University of Washington, where she is an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and also in The Information School. Dr. Morris has been recognized as a Fellow of the ACM and as a member of the ACM SIGCHI Academy for her contributions to Human-Computer Interaction research. She earned her Sc.B. in computer science from Brown University and her M.S. and Ph.D. in computer science from Stanford University. More details on her research and publications are available at http://merrie.info.
Date: May 9th, 2024 12:30 PM
Talk Title: Exploring immersive meetings in the Metaverse: A conceptual model and first empirical insights
Speaker: Marvin Grabowski, PhD Candidate at University of Hamburg, Germany
Location: HBK 2105 and Zoom
Abstract: New technological developments open up new possibilities for the way teams can work together virtually. In particular, immersive extended reality (XR) meetings enable groups to represent, view, and interact with each other in a shared three-dimensional (3D) space. XR meetings take place in the highly publicized “metaverse”, defined as a multi-user interaction space that merges the virtual world with the real world (e.g., Dwivedi et al., 2022). By wearing a headset that blocks off perception of their current physical environment, group members become immersed in a shared virtual environment (i.e., the metaverse). Users generate realistic embodied avatars that are qualitatively different from two-dimensional (2D) video interactions, such as Zoom (e.g., Hennig-Thurau et al., 2023). We developed a conceptual framework of 3D immersive XR group meetings that integrates technological design characteristics, subjective attendee experiences, mediating mechanisms, and meeting outcomes. I am going to present our preliminary findings on meeting outcomes and individual XR experiences (i.e., group interaction characteristics, avatar perception, simulator sickness, and task load). Following the talk, you are cordially invited to discuss the opportunities and challenges of the metaverse as a platform for enabling immersive learning scenarios and conducting workplace meetings in the future.
Bio: My research as a PhD Candidate at the University of Hamburg, Germany, highlights the future of workplace meetings. At the interface of Industrial & Organizational Psychology and Human-Computer Interaction, the immersive experience afforded by VR headsets opens up new interdisciplinary perspectives. In particular, I am interested in the mechanisms underlying fruitful interactions in immersive meetings in the metaverse. Furthermore, I am interested in the success factors of hybrid meetings, with the goal of gaining new insights into how the framework of New Work can be applied in practice. Drawing on national and international academic positions, I am happy to build bridges between organizational needs and scientific findings. In addition, I am a speaker on career guidance and professional orientation after high school and published the book “Early Life Crisis”.
Date: Aug 29th, 2024 12:30 PM
Talk Title: Welcome Back Event!
Location: HBK-2105 only
Join us in welcoming everyone back to the HCIL for the 2024 fall semester. Come chat with friends and enjoy some pizza!
Note: this event will only be in person.
Date: Sep 5th, 2024 12:30 PM
Talk Title: CHI Writing and Reflecting
Location: HBK 2105
Description: With the CHI deadline looming, we’ll use this week’s brown bag time slot for folks to take a break from writing to relax (a little), enjoy some pizza with colleagues, and get ready for the final push. So if you’re on campus, stop by HBK2105 to get a slice and chat with other HCIL members.
Date: Sep 12th, 2024 12:30 PM
Location: HBK 2105
It's the CHI deadline!
As many of our members will be putting finishing touches on their CHI 2025 submissions, we won't have a speaker today. Instead, stop by the lab (HBK-2105) to take a breather, grab a snack, and chat with your HCIL colleagues.
Date: Sep 19th, 2024 12:30 PM
Talk Title: Cooperative Inquiry: When Children and Adults Design Together
Speaker: beth bonsignore, Associate Research Professor; Director, BA in Tech & Info Design; Director, KidsTeam
Location: HBK 2105 and Zoom
Abstract: The goal of Participatory Design is to include as many people (users) as possible in all stages of the technology design process. Initially, it was unclear whether children could be actively involved in participatory design in any role beyond "end user" or "tester." In 1998, KidsTeam was launched at UMD’s Human-Computer Interaction Lab to explore practical and ethical questions about co-design between children and adults. This research resulted in Cooperative Inquiry, a design-based research approach that is now in use internationally across academia and industry. Its participatory design practices and techniques have been incorporated into HCI curricula and integrated into design-based research in the Learning Sciences, with impacts on industry practice. KidsTeam has also expanded its reach, demonstrating its replicability, utility, and generalizability as similar intergenerational co-design capabilities have been created in university/K-12, not-for-profit, and industry settings.
More recently, new horizons for intergenerational co-design have opened up. For example, the Cooperative Inquiry design framework has become foundational in emerging critical design and computational empowerment programs. This raises interesting research questions about the role of youth in these new efforts. In this talk, Beth will provide a brief overview of KidsTeam at UMD: how it started, how it's going, and how it might best meet these new challenges.
Bio: Elizabeth (“beth”) Bonsignore is an associate research professor at UMD’s College of Information and Human-Computer Interaction Lab (HCIL). Her research explores the design of interactive play and social experiences that promote new media literacies and arts-integrated science learning. She co-designs and advocates with youth, families, and local communities with the goal of empowering youth historically underrepresented in STEM to advance in these fields. Her recent collaborations with amazing graduate students have explored the challenges (and conundrum) of making participatory design as inclusive as possible through assets-based design and funds of identity.
Date: Sep 26th, 2024 12:30 PM
Talk Title: Student Lightning Talks
Location: HBK 2105 and Zoom
Description:
This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!
How do lightning talks work?
Typically, people give a 4-5 minute "presentation" -- this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or "ask" that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.
Date: Oct 3rd, 2024 12:30 PM
Talk Title: From Haptic Illusions to Beyond Real Interactions in Virtual Reality
Speaker: Parastoo Abtahi, Assistant Professor of Computer Science, Princeton University
Location: HBK 2105 and Zoom
Watch Here
Abstract: Advances in audiovisual rendering have led to the commercialization of virtual reality (VR) hardware; however, haptic technology has not kept up with these advances. While haptic devices aim to bridge this gap by simulating the sensation of touch, many hardware limitations make realistic touch interactions in VR challenging. In my research, I explore how by understanding human perception, we can design VR interactions that not only overcome the current limitations of VR hardware but also extend our abilities beyond what is possible in the real world. In this talk, I will present my work on redirection illusions that leverage the limits of human perception to improve the perceived performance of encountered-type haptic devices, such as improving the position accuracy of drones, the speed of tabletop robots, and the resolution of shape displays when used for haptics in VR. I will then present a framework I have developed through the lens of sensorimotor control theory to argue for the exploration and evaluation of VR interactions that go beyond mimicking reality.
Bio: Parastoo Abtahi is an Assistant Professor of Computer Science at Princeton University, where she leads Princeton’s Situated Interactions Lab (Ψ Lab) as part of the Princeton HCI Group. Before joining Princeton, Parastoo was a visiting research scientist at Meta Reality Labs Research. She received her PhD in Computer Science from Stanford University, working with Prof. James Landay and Prof. Sean Follmer. Her research area is human-computer interaction, and she works broadly on augmented reality and spatial computing. Parastoo received her bachelor’s degree in Electrical and Computer Engineering from the University of Toronto, as part of the Engineering Science program.
Date: Oct 10th, 2024 12:30 PM
Talk Title: Social Media’s Midlife Crisis? How Public Discourse Imagines Platform Futures
Speaker: Chelsea Butkowski (left), American University &
Frances Corry (right), University of Pittsburgh
Location: HBK 2105 and Zoom
Watch Here
Abstract: Though the social media ecosystem has never been stable—with platforms constantly emerging, evolving, aging, and closing—the last few years have appeared particularly volatile. Major companies like Meta and X have undergone historic transformations, and a slew of new platforms have also emerged, including TikTok, BeReal, Threads, Bluesky, Mastodon, and others. It appears as if social media companies, the platforms they run, and the users they support, have arrived at an existential juncture. What is social media for in today’s society––and what does its future look like? Decades on, is “new media” still “new” after all? In this talk, Drs. Chelsea Butkowski and Frances Corry draw on their recent research analyzing press coverage of emerging platforms to argue that contemporary social media discourse has become fueled by cultural memory, a phenomenon that they call “nostalgic anticipation.” In other words, speculation about social media's volatile future is persistently filtered through a yearning for its past. Butkowski and Corry will discuss how this unique juncture for social media can contribute to reframing understandings of platforms in our scholarship and our everyday lives.
Bio:
Chelsea Butkowski
Chelsea Butkowski is an Assistant Professor of Communication at American University. Their research examines the relationship between media technologies and identity, including the social practices and effects of everyday social media use. Butkowski's recent work focuses on digital identity during periods of sociotechnical transition and disruption.
Frances Corry
Frances Corry is an Assistant Professor in the Department of Information Culture & Data Stewardship at the University of Pittsburgh. Her research and teaching focus on the prehistories and afterlives of data-intensive systems – from social media platforms to AI tools. Corry’s book project examines the process of social media platform closure and content deletion to ask about the future of cultural memory.
Date: Oct 17th, 2024 12:30 PM
Talk Title: Scaling Expertise via Language Models with Applications to Education
Speaker: Rose Wang, Computer Science PhD candidate, Stanford University
Location: HBK 2105 and Zoom
Watch Here
Abstract: Access to expert knowledge is essential for fostering high-quality practices across domains like education. However, many novices—such as new teachers—lack expert guidance, limiting their growth and undermining student outcomes. While language models (LMs) hold potential for scaling expertise, current methods focus on surface patterns rather than capturing latent expert reasoning. In this talk, I'll discuss how my research addresses this by (1) identifying problematic practices for intervention from noisy, large-scale interaction data, (2) developing benchmarks that measure expert quality of practices, and (3) extracting latent expert reasoning to adapt LMs for real-time educational interventions. I'll highlight how my methods have been deployed to improve K-12 education at scale, positively impacting millions of live interactions between students and educators.
Bio: Rose E. Wang is a Computer Science PhD candidate at Stanford University. She develops machine learning and natural language processing methods to tackle challenges in real-world interactions, with a focus on Education. Her work directly improves the education of under-served students through partnerships she has cultivated during her Ph.D., including Title I school districts and several education companies, impacting 200,000+ students, 1,700+ teachers, and 16,100+ tutors in millions of tutoring sessions across the U.S., UK, and India. Her work has been recognized with an NSF Graduate Research Fellowship, a CogSci Best Paper Award, a NeurIPS Cooperative AI Best Paper Award, an ICLR Oral, a Rising Star in Data Science award, a Building Educational Applications Ambassador Paper Award, and the Learning Engineering Tools Competition Award.
Date: Oct 24th, 2024 12:30 PM
Talk Title: Designs to Support Better Visual Data Communication
Speaker: Cindy Xiong, Assistant Professor, School of Interactive Computing, Georgia Institute of Technology
Location: HBK 2105 and Zoom
Watch Here
Abstract: Well-chosen data visualizations can lead to powerful and intuitive processing by a viewer, both for visual analytics and data storytelling. When badly chosen, visualizations leave important patterns opaque or misunderstood. So how can we design an effective visualization? I will share several empirical studies demonstrating that visualization design can influence viewer perception and interpretation of data, referencing methods and insights from cognitive psychology. I leverage these study results to design natural language interfaces that recommend the most effective visualization to answer user queries and help them extract the ‘right’ message from data.
I then identify two challenges in developing such an interface. First, human perception and interpretation of visualizations is riddled with biases, so we need to understand how people extract information from data. Second, natural language queries describing takeaways from visualizations can be ambiguous and thus difficult to interpret and model, so we need to investigate how people use natural language to describe a specific message. I will discuss ongoing and future efforts to address these challenges, providing concrete guidelines for visualization tools that help people more effectively explore and communicate data.
Bio: Cindy Xiong Bearfield is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology. Bridging the fields of psychology and data visualization, Professor Bearfield aims to understand the cognitive and perceptual processes that underlie visual data interpretation and communication. Her research informs the design and development of visualizations and visualization tools that elicit calibrated trust in complex data to facilitate more effective visual data analysis and communication.
She received her Ph.D. in Cognitive Psychology and her MS in Statistics from Northwestern University. Her research at the intersection of human perception, cognition, and data visualization has been recognized with an NSF CAREER award. She has received paper awards at premier psychology and data visualization venues, including ACM CHI, IEEE PacificVis, Psychonomics, and IEEE VIS. She is also one of the founding leaders of VISxVISION (visxvision.com), an initiative dedicated to increasing collaboration between visualization researchers and perceptual + cognitive psychologists.
Date: Oct 31st, 2024 12:30 PM
Talk Title: A New Model for News Engagement Depends on Human-Computer Interaction
Speaker: Dr. Ronald Yaros, Associate Professor, Philip Merrill College of Journalism, and UMD’s Digital Engagement Lab (.org)
Location: HBK 2105 and Zoom
Watch Here
Abstract: Despite the enduring importance of quality writing, reporting, and sourcing in local journalism, digital communicators have yet to fully leverage cutting-edge research from other disciplines to meet the evolving needs of today’s news consumers. Since 2005, Yaros has combined journalism with concepts from cognitive psychology, educational psychology, and human-computer interaction to develop a new model for digital engagement. The unique model combines ten user and content variables and is incorporated into a “smart story suite” so users can select their news narrative. The interface builds what Yaros calls "attention momentum" without depending on clickbait, text-heavy pages, and video. As news consumption and advertising revenues continue to decline, the model seeks to increase the probability that more users will spend more time with more news. Yaros looks forward to presenting this applied research and welcomes collaboration with his team in the digital engagement lab.
Bio: Dr. Yaros is an Associate Professor in the Philip Merrill College of Journalism and an Affiliate Associate Professor in the College of Information Science. He earned his Ph.D. at the University of Wisconsin-Madison. He then taught at the University of Utah from 2005-2008 where he completed eye-tracking research of his early model before joining Maryland in 2008. Yaros is also a Tow-Knight Disruptive Educator for Journalism Innovation and Entrepreneurship, an Apple Distinguished Educator, and recipient of one of the first campus-wide Donna B. Hamilton Excellence in Undergraduate Teaching Awards.
Date: Nov 7th, 2024 12:30 PM
Talk Title: Understand, Predict, and Enhance User Behavior in Mixed Reality
Speaker: Yukang Yan, Assistant Professor, Department of Computer Science, University of Rochester
Location: HBK 2105 and Zoom
Watch Here
Abstract: My research focuses on enhancing human-computer interaction in Mixed Reality. As the integration of digital and physical worlds through Mixed Reality expands the interaction space beyond traditional screens, it significantly changes how users perceive and interact with the world. Through user studies, I observe and model the behavioral and perceptual patterns of users as they interact with Mixed Reality. Based on the findings, I design and develop interaction techniques tailored to these behavioral changes in order to facilitate user input and information display. Additionally, I explore augmentation methods that allow users to surpass their real-world capabilities, such as embodying healthier virtual avatars or non-humanoid avatars to gain experiences not possible in reality.
Bio: I'm an Assistant Professor in the Department of Computer Science at the University of Rochester, where I serve as co-director of the ROCHCI Group and lead the BEAR Lab. I'm also a participating faculty member in the AR/VR Initiative at the University of Rochester. Prior to this, I worked as a postdoc in the Augmented Perception Lab at Carnegie Mellon University, and I received my Ph.D. from Tsinghua University. My research focuses on the intersection of Human-Computer Interaction and Mixed Reality. I publish at ACM CHI, UIST, IMWUT, and IEEE VR, with two Best Paper Honorable Mention Awards from CHI 20 and 23 and one Best Paper Nominee Award from VR 23. I served as CHI 23 Late-Breaking Work Co-Chair and UIST 24 Registration Co-Chair.
Date: Nov 14th, 2024 12:30 PM
Talk Title: Visualizing the Unseen: Perceptographer – A Pioneering AI Paradigm for Brain-Computer Interaction
Speaker: Elia Shahbazi
Location: HBK 2105 and Zoom
Abstract: Understanding the complexities of human perception is a fundamental challenge in neuroscience. We have recently developed an innovative approach called Perceptography to visualize intricate perceptual distortions resulting from localized brain stimulation in the inferotemporal (IT) cortex. Perceptography leverages machine learning to create and refine specific image distortions that are challenging for animals to distinguish from the effects of cortical stimulation. In this talk, I will present Perceptographer, a groundbreaking, customizable framework for visualizing brain-stimulation-induced perceptual events across various regions of the visual cortex. By overcoming the limitations of existing image generation models in handling complex distortions, Perceptographer opens new pathways for exploring and understanding the intricate phenomena of brain-induced perception.
Bio: Elia Shahbazi is a trailblazing computational neuroscientist whose diverse expertise spans applied and pure mathematics, software engineering, artificial intelligence, and entrepreneurial leadership. In 2018, Elia joined the NIH as a Research Scientist Fellow in the Unit of Neuron, Behavior, and Circuits. As a computational neuroscientist, he has been at the forefront of merging AI with neuroscience and bio-related sciences.
Date: Nov 21st, 2024 12:30 PM
Talk Title: Intent-AI Interaction: Elevating Human-Computer Interaction to the Intent and Conceptual Level
Speaker: Jason Ding
Location: HBK 2105 and Zoom
Watch Here
Abstract: Technological advancements are continually reshaping human-computer interaction (HCI). Although direct manipulation methods, such as clicking and dragging icons in graphical user interfaces (GUIs), remain widespread, generative AI now has the ability to understand user interfaces and autonomously perform tasks. This reduces the reliance on direct user manipulation and prompts a reimagining of HCI paradigm. In this talk, we introduce "intent-AI interaction" as a forward-looking paradigm where interactions are driven by the user's intent and conceptual reasoning rather than command-level actions. We will demonstrate this paradigm shift through three studies: human-AI co-creation of news headlines, ideation enabled by cross-domain analogies, and data exploration.
Bio: Zijian "Jason" Ding is a 4th-year PhD candidate at the University of Maryland's Human-Computer Interaction Lab. His research focuses on intent-AI interaction as a new paradigm in human-computer interaction. His work has been published in top-tier AI and HCI conferences, including EMNLP, CHI, CSCW, and UIST, with recognition such as a best paper honorable mention from ACM Creativity & Cognition. Ding's industry experience includes internships at Microsoft Research, the MIT-IBM Watson AI Lab (IBM Research), and Dataminr, where his work led to publications, first-authored patents, and real-world products.
Date: Dec 5th, 2024 12:30 PM
Talk Title: Community-Based Approaches to Building Peer Support Systems for Work
Speaker: Yasmine Kotturi, Assistant Professor of Human-Centered Computing, Information Systems, University of Maryland, Baltimore County
Location: HBK 2105 and Zoom
Watch Here
Abstract: The “future of work” promises innovation and opportunity, yet for many, it manifests as uncertainty and instability—exposing a stark divide between optimistic predictions and lived realities. In this talk, I explore the critical role of peer networks in addressing worker challenges such as isolation and skill development in digitally-mediated work. Drawing on community-based, participatory design methods, I present three peer support systems—Hirepeer, Peerdea, and Tech Help Desk—that tackle these issues by fostering trust and accountability within worker communities. These systems demonstrate how localized, community-based approaches can overcome the limitations of current approaches to building sociotechnical systems which prioritize scale over relationship building. Finally, my work highlights the importance of constructive community-academic partnerships in computing which kickstart and sustain community initiatives.
Bio: Dr. Yasmine Kotturi is an Assistant Professor of Human-Centered Computing at the University of Maryland, Baltimore County in the Information Systems Department. Her research focuses on digitally-mediated employment and entrepreneurship, examining how distributed workers leverage peer networks to navigate precarity and advance their careers. Dr. Kotturi has been recognized as an EECS Rising Star, WAIM Fellow (Work in the Age of Intelligent Machines), and Siebel Scholar. She has collaborated with nonprofits, as well as leading companies including Instagram and Etsy. Dr. Kotturi earned her Ph.D. in Human-Computer Interaction from Carnegie Mellon University and has held positions at leading research institutions such as Microsoft Research Asia and MIT’s Teaching Systems Lab. Learn more about Dr. Kotturi’s work: ykotturi.github.io and @yasminekotturi.
Date: Jan 30th, 2025 12:30 PM
Talk Title: Learning to Code with AI
Speaker: Majeed Kazemi, PhD candidate at University of Toronto
Location: HBK 2105 and Zoom
Abstract: In the evolving landscape of programming with generative AI, critical questions emerge around its impact on cognition, interaction, and learning. In this talk, I will present findings from my research on three key topics: (a) What are the implications of using AI when learning to code for the first time? Does AI enhance learning or foster over-reliance, potentially hindering outcomes? (b) How can we design novel interfaces that cognitively engage learners with AI-generated solutions—enhancing users’ ability to extend and modify code without creating friction? (c) How can we design pedagogical AI coding assistants for educational contexts? I will discuss the design of CodeAid, results from its 12-week deployment in a large class of 750 students, and perspectives from students and educators.
Bio: Majeed is a PhD candidate in Computer Science at the University of Toronto, advised by Prof. Tovi Grossman. His research in Human-Computer Interaction lies at the intersection of programming, education, and AI. As a systems researcher, his work draws from the learning sciences and interaction design to develop novel tools that address fundamental challenges surrounding interaction and cognition when integrating AI into programming. His work has been published at top-tier HCI venues such as CHI, UIST, IDC, and IUI, and his research in AI and education is among the most highly cited CHI papers of the past two years. Prior to his PhD, Majeed completed his Master's at the University of Maryland, where he worked with Prof. Jon Froehlich at the HCIL. During this time, he designed and built MakerWear–a tangible, modular electronic toolkit that enables young children to create interactive wearables–which earned a Best Paper Award at CHI.
Date: Feb 6th, 2025 12:30 PM
Talk Title: Making Data Strange in Nonprofit Organizations
Speaker: Dr. Amy Voida, Associate professor and founding faculty in the Department of Information Science, University of Colorado Boulder
Location: HBK 2105 and Zoom
Abstract: This is a talk with an alter ego. As a research talk, I explore the myriad ways in which the use of data in nonprofit organizations disrupts our expectations of what it means to design organizational information systems — defamiliarizing data or… making data strange. From needing to address the coerciveness of the nonprofit database’s primary key to requiring new approaches for identifying the manipulative uses of data by ideologically polarized nonprofits, research about this sector serves as a critical case study of information systems in a state of enormous precarity and politicization. The research talk’s alter ego is a teaching talk in which I introduce defamiliarization, a construct that transcends subdisciplines and extends from one end of the design process to the other. Despite this impressive resume, defamiliarization is rarely taught in our curriculum, so I also take this opportunity to share seven strategies for using defamiliarization in your own work. I conclude by offering a glimpse of a new course I have designed to put defamiliarization center stage.
Bio: Dr. Amy Voida is an associate professor and founding faculty in the Department of Information Science at the University of Colorado Boulder. She conducts empirical and design research in human–computer interaction and computer-supported cooperative work, with a focus on philanthropic informatics—an interdisciplinary domain she pioneered that explores the role of information and communication technologies in supporting nonprofit and other work for the public good. Dr. Voida earned her Ph.D. in Human-Centered Computing from the Georgia Institute of Technology. She also holds an M.S. in Human-Computer Interaction from Georgia Tech and a B.A.E. in Elementary Education from Arizona State University.
Date: Feb 13th, 2025 12:30 PM
Talk Title: Steps Towards an Infrastructure for Scholarly Synthesis
Speaker: Dr. Joel Chan, Assistant Professor; Assistant Director, PhD Information Studies; Associate Director, HCIL
Location: HBK 2105 and Zoom
Abstract: Sharing, reusing, and synthesizing knowledge is central to research progress. But these core functions are not well-supported by our formal scholarly publishing infrastructure: documents aren't really the right unit of analysis, so researchers resort to laborious "hacks" and workarounds to "mine" publications for what they need. Information scientists have proposed an alternative infrastructure based on the more appropriately granular model of a discourse graph of claims and evidence, along with key rhetorical relationships between them. However, despite significant technical progress on standards and platforms, the predominant infrastructure remains stubbornly document-based. What can HCI do about this? Drawing from infrastructure studies, I diagnose a critical infrastructural bottleneck that HCI can help with: the lack of local systems that integrate discourse-centric models to augment synthesis work, from which an infrastructure for synthesis can be grown. In this talk, I'll describe what we can and should build in order to grow a discourse-centric synthesis infrastructure. Drawing on 3 years of research through design and field deployment in a distributed community of hypertext notebook users, I'll sketch out a design vision of a thriving ecosystem of researchers authoring local, shareable discourse graphs to improve synthesis work, enhance primary research and research training, and augment collaborative research. I'll discuss how this design vision -- and our empirical work -- contributes steps towards a new infrastructure for synthesis, and increases HCI's capacity to advance collective intelligence and solve infrastructure-level problems.
Bio: Dr. Chan’s research and teaching explore systems that support creative knowledge work. He conceives of “systems” very broadly, from individual cognitive skills, interfaces, tools and practices, to collaborative and organizational dynamics and tools, collective intelligence and crowdsourcing, social computing, all the way to sociotechnical infrastructures within which knowledge work is done. Dr. Chan is also broadly interested in creative work across many domains, although he spends most of his time considering the disciplines of design and scientific discovery. His long-term vision is to help create a future where any person or community can design the future(s) they want to live in.
Before coming to the College of Information Studies, Dr. Chan was a Postdoctoral Research Fellow and Project Scientist in the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University. Dr. Chan received his Ph.D. in Cognitive Psychology at the University of Pittsburgh.
Date: Feb 20th, 2025 12:30 PM
Talk Title: HCAI Research in Industry
Speaker: Dr. Tiffany D. Do, Assistant Professor, Drexel University
Location: HBK 2105 and Zoom
Abstract: Tiffany Do, an Assistant Professor specializing in human-centered AI, will provide an in-depth overview of industry research, drawing on her experiences at Microsoft Research and Google Labs. This talk will explore the key distinctions between industry and academic research, offering students a comprehensive understanding of the objectives, methodologies, and opportunities unique to industry research. Attendees will gain practical insights into navigating and excelling in research careers beyond academia.
Bio: Dr. Tiffany D. Do is an Assistant Professor in Computer Science at Drexel University, specializing in Human-Centered AI, augmented reality (AR), virtual reality (VR), and virtual avatars. Her research focuses on the potential of AI to personalize experiences for individuals, placing a premium on their unique identities and perspectives. Previously, she conducted research at Microsoft Research and Google, where she focused on user experience (UX) and interactions with AI language applications, particularly large language models (LLMs) and virtual agents.
Date: Feb 27th, 2025 12:30 PM
Talk Title: Engineering Bodies and Subjectivity
Speaker: Jun Nishida, Assistant Professor, Department of Computer Science and Immersive Media Design program, University of Maryland, College Park
Location: HBK 2105 and Zoom
Abstract: While today’s tools allow us to communicate effectively with others via video and text, they leave out other critical communication channels, such as physical skills and embodied knowledge. These bodily cues are important not only for face-to-face communication but even when communicating motor skills, subjective feelings, and emotions. Unfortunately, the current paradigm of communication is rooted only in symbolic and graphical communication, leaving no space to add these additional haptic and/or somatosensory modalities.
This is precisely the research question I tackle: how can we also communicate our physical experience across users?
In this talk, I introduce how I have engineered wearable devices that allow for sharing physical experiences across users, such as between a physician and a patient, including people with neuromuscular impairments and even children. These custom-built user interfaces include exoskeletons, virtual reality systems, and interactive devices based on electrical muscle stimulation.
I then discuss how we extended this concept to support interactive activities, such as product design, through the communication of one's bodily cues.
Lastly, I discuss how we can optimize aspects of our subjectivity, such as the sense of agency, using psychophysics approaches when our bodies are modified, actuated, or shared with a computer or a human partner.
I conclude my talk by discussing how we can further explore the possibilities enabled by a user interface that communicates more than audio-visual cues and the roadmap for using this approach in new territories, such as understanding how our bodies, perceptions, and somatic interactions contribute to the formation of human embodiment, subjectivity, and behavior.
Bio: Jun Nishida is an Assistant Professor in the Department of Computer Science and Immersive Media Design program at the University of Maryland, College Park, where he leads the Embodied Dynamics Laboratory (https://emd.cs.umd.edu/). Previously, he was a postdoctoral fellow at the University of Chicago, advised by Prof. Pedro Lopes. He received his Ph.D. in Human Informatics from the University of Tsukuba, Japan in 2019. His research interests focus on developing interaction techniques and wearable interfaces where users can communicate their embodied experiences to support each other by means of wearable and human augmentation technologies, with applications in the fields of rehabilitation, education, and design. He has received the ACM UIST Best Paper Award, an ACM CHI Best Paper Honorable Mention Award, the Microsoft Research Asia Fellowship, and a Forbes 30 Under 30 Award, among others.
Date: Mar 6th, 2025 12:30 PM
Talk Title: Safe(r) Digital Intimacy: Lessons for Internet Governance & Digital Safety
Speaker: Dr. Elissa M. Redmiles, Clare Luce Boothe Assistant Professor, Computer Science Department, Georgetown University; Faculty Associate, Berkman Klein Center for Internet & Society, Harvard University.
Location: HBK 2105 and Zoom
Abstract: The creators of sexual content face a constellation of unique online risks. In this talk I will review findings from over half a decade of research I’ve conducted in Europe and the US on the use cases, threat models, and protections needed for intimate content and interactions. We will start by discussing what motivates the consensual sharing of intimate content in recreation ("sexting") and labor (particularly on OnlyFans, a platform focused on commercial sharing of intimate content). We will then turn to the threat of image-based sexual abuse, a form of sexual violence that encompasses the non-consensual creation and/or sharing of intimate content. We will discuss two forms of image-based sexual abuse: the non-consensual distribution of intimate content that was originally shared consensually and the rising use of AI to create intimate content without people’s consent. The talk will conclude with a discussion of how these issues inform broader conversations around internet governance, digital discrimination, and safety-by-design for marginalized and vulnerable groups.
Bio: Dr. Elissa M. Redmiles is the Clare Luce Boothe Assistant Professor at Georgetown University in the Computer Science Department and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She was previously a faculty member at the Max Planck Institute for Software Systems and has additionally served as a consultant and researcher at multiple institutions, including Microsoft Research, the Fred Hutchinson Cancer Center, Meta, the World Bank, the Center for Democracy and Technology, and the Partnership on AI. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. She particularly focuses on designing systems that improve safety & equity for members of marginalized communities. Dr. Redmiles has presented her research at the White House, European Commission, and the National Academies and her work has been featured in venues such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, and Forbes. She is the recipient of the 2024 ACM SIGSAC Early Career Award for exceptional contributions to the field of computer security and privacy and her research has received multiple paper recognitions at USENIX Security, ACM CCS, ACM CHI, ACM CSCW, and ACM EAAMO. She received her B.S., M.S., and Ph.D., all from the University of Maryland. Go Terps!
Date: Mar 13th, 2025 12:30 PM
Talk Title: Trans Technologies
Speaker: Oliver Haimson, Assistant Professor, University of Michigan School of Information
Location: HBK 2105 and Zoom
Abstract: In this talk, drawing from my new book Trans Technologies (MIT Press, 2025), I discuss how technology creates new possibilities for transgender people, and how trans experiences, in turn, create new possibilities for technology. Mainstream technologies often exclude or marginalize transgender users, but when trans creators take technology design into their own hands, transformative possibilities emerge. Through in-depth interviews with over 100 creators of trans technology—including apps, games, health resources, extended reality systems, and supplies designed to address challenges trans people face—I uncover what trans technology means and explore its possibilities, limitations, and future prospects. I examine the design processes that brought these technologies to life, the role of community in their creation, and how they empower trans individuals to create their own tools to navigate a world that often fails to meet trans needs. This work highlights the successes and limitations of current trans technologies, identifies gaps still to be addressed, and investigates how privilege, race, and access to resources shape which trans technologies are created, who benefits, and who may be left out. Finally, I chart new directions for design and innovation to drive meaningful social change, inviting us to rethink the relationship between technology and marginalized communities.
Bio: Oliver Haimson is an Assistant Professor at University of Michigan School of Information, author of Trans Technologies (MIT Press 2025), and a recipient of a National Science Foundation CAREER award. He conducts social computing research focused on envisioning and designing trans technologies, social media content moderation and marginalized populations, and changing identities online during life transitions.
Date: Mar 27th, 2025 12:30 PM
Talk Title: Reading, Augmented
Speaker: Andrew Head, Assistant Professor, Computer Science, University of Pennsylvania
Location: HBK 2105 and Zoom
Abstract: Have you ever read a text and failed to get much out of it? Why did that happen? There is a good chance it is because you came to a text with different context than the author expected. In this talk, I offer a vision of texts where they are always augmented to provide the necessary context. These texts explain their complex jargon. They simplify their own dense passages. They provide indexes into their best passages. And they enliven the stuffiest notations. Then, I show this vision is close to reality. It is based on a series of novel interfaces my lab and collaborators have developed. Lab studies of these interfaces have shown they improve information acquisition and change the way readers navigate texts. Their design has even influenced production reading applications. Come to this talk to examine a most common intellectual activity—reading—from a new viewpoint.
Bio: Andrew Head is an assistant professor in computer science at the University of Pennsylvania. He is co-founder and co-lead of the Penn HCI research group in human-computer interaction. His group develops novel technologies for interactive reading and reasoning. He publishes in ACM CHI, UIST, and other top venues for HCI research. To learn more about his group’s work, see his website: https://andrewhead.info.
Date: Apr 3rd, 2025 12:30 PM
Talk Title: Threat Modeling Reproductive Health Privacy
Speaker: Dr. Nora McDonald, Assistant Professor, Department of Information Science and Technology, George Mason University
Location: HBK 2105 and Zoom
Abstract: In a post-Roe landscape, reproductive privacy has become increasingly complex and high-stakes. This talk draws on mixed-methods research with healthcare providers and people who can become pregnant to examine how both groups understand and respond to evolving privacy risks. My work with colleagues found that providers’ privacy threat models often overlook new legal, digital, and contextual risks. While many are thinking critically about patient safety, their models need updating. Meanwhile, patients—deeply aware of their risks—are increasingly taking extreme privacy measures but still rely on guidance from providers. I conclude by proposing a concept that I have been evolving over the years, privacy intermediaries, as a promising framework to support people navigating these urgent, evolving threats.
Bio: Dr. Nora McDonald is an Assistant Professor in the Department of Information Science and Technology at George Mason University. She holds a PhD in Information Science from Drexel University's College of Computing and Informatics, where she focused on digital privacy and vulnerability. Her research examines the development of safe and ethical technologies, focusing on the impacts of complex surveillance systems and legal ecosystems, as well as the emerging relationships between identities, shifting norms around privacy and surveillance, and the data collected by privacy-invasive social media algorithms. This work bridges studies on reproductive privacy, teens' privacy in relation to these algorithms, and broader privacy concerns in the digital age. Positioned at the intersection of HCI, social computing, and critical computing, her work is published in leading venues such as CHI, CSCW, TOCHI, PETS, and USENIX.
Date: Apr 10th, 2025 12:30 PM
Talk Title: Advancing Digital Health: AI-Driven Interventions for Patient Care and Workflow Optimization
Speaker: Matthew Louis Mauriello, Assistant Professor, Department of Computer & Information Sciences, University of Delaware
Location: HBK 2105 and Zoom
Abstract: As digital health technologies evolve, new possibilities for enhancing healthcare access and delivery are emerging. In this talk, I will present an overview of my research at the intersection of computer science and digital health, focusing on developing intelligent digital interventions. Specifically, I will discuss therapeutic chatbot systems for patient support, predictive tools for stress and burnout detection, and the integration of large language models for data processing and summarization. These technologies can potentially transform medical systems by enabling novel patient interactions and improving data acquisition while reducing administrative burdens. I will explore key challenges in designing and deploying these systems, including usability, ethical considerations, and the potential for clinical integration. By leveraging human-centered design principles and these emerging technologies, we can develop digital interventions that enhance healthcare efficiency and patient outcomes, ultimately shaping the future of intelligent health technologies.
Bio: Dr. Matthew Louis Mauriello is an Assistant Professor in the Department of Computer & Information Sciences at the University of Delaware, where he directs the Sensify Lab. His research lies at the intersection of human-computer interaction and ubiquitous computing, focusing on digital health, personal informatics, wearables, and AI-driven interventions. His work explores developing and evaluating intelligent systems for patient support, stress and burnout detection, and workflow optimization. Dr. Mauriello has an extensive background in interdisciplinary research, leveraging advances in machine learning, social computing, and information visualization to design and assess interactive health technologies. His research has been supported by the NSF, the Maggie E. Neumann Health Sciences Research Fund, and industry partners. He earned his Ph.D. from the University of Maryland’s Department of Computer Science. He then completed a postdoctoral fellowship at Stanford University’s School of Medicine, where he worked on pervasive well-being technologies. Dr. Mauriello is also an active mentor, educator, and advocate for responsible AI and human-centered design in computing.
Date: Apr 17th, 2025 12:30 PM
Talk Title: Navigating Bias and Leveraging AI: Exploring the Dual Reality for Users with Disabilities
Speaker: Dr. Vinitha Gadiraju, Assistant Professor, Department of Computer Science, Wellesley College
Location: HBK 2105 and Zoom
Abstract: Generative AI holds immense potential to revolutionize how we work, communicate, and access information. But are we building a future that includes everyone? In this talk, we will delve into how large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward people with disabilities and other marginalized communities. As we discuss biases, we will characterize the subtle yet harmful stereotypes people with disabilities have encountered that were reinforced by LLM-based chatbots, such as inspiration porn and able-bodied saviors. In contrast, we will also examine the creative and resourceful ways people with disabilities leverage these tools, how chatbots fit into their technological ecosystem, and their desires for the next iteration of generative AI tools. Finally, we will contemplate the role of chatbots in common and high-risk use cases in the context of previous foundational research and disability justice principles.
Bio: Dr. Vinitha Gadiraju is an Assistant Professor in the Department of Computer Science at Wellesley College. She has a Ph.D. in Computer Science from the University of Colorado Boulder Department of Computer Science, where she investigated and designed accessible, collaborative educational tools for visually impaired children and their social networks (supported by the National Science Foundation Graduate Research Fellowship). Dr. Gadiraju’s lab at Wellesley College now focuses on studying how adults with disabilities interact and form relationships with Large Language Model-based chatbots and the harms and benefits that arise during these experiences. She is a 2024 Google Research Scholar Recipient and publishes in leading HCI, AI, and accessibility research venues such as CHI, FAccT, and ASSETS.
Date: Apr 24th, 2025 12:30 PM
Time: 12:30pm-1:30pm ET
Location: HBK 2105
This week we’ll do another round of our "research speed dating" experiment! If it’s anything like last Spring’s iteration, it’ll be a fun time to hear what everyone is brainstorming and working on, and to give feedback in a lightweight, informal, low-stakes setting!
Date: May 1st, 2025 12:30 PM
Talk Title: Tool-making, Accessibility, and Interactive Data Experiences
Speaker: Frank Elavsky, PhD candidate and Researcher, Human-Computer Interaction Institute, Carnegie Mellon University
Location: HBK 2105 and Zoom
Abstract: Come and join me for a first-ever prototype of my (eventual) job talk! This talk presents practical and research advancements in making interactive data experiences more accessible through a suite of tools and frameworks designed to enhance both the usability and creation of accessible, interactive data experiences. Central to this work is the rethinking of accessibility, focusing not just on the functionality of representations and visualizations but on how the tools and methodologies used to build them can shape accessible outcomes. Frank's research introduces Chartability, a heuristic framework that enables practitioners, especially those with limited accessibility expertise, to evaluate and improve data visualizations across various disabilities. Complementing this, Data Navigator offers a dynamic system that allows designers to build accessible data navigation structures, supporting a variety of input modalities and assistive technologies to ensure inclusive data exploration. The concept of Softerware is introduced to aid tool designers in creating data representation systems that empower end-users with disabilities to personalize and customize their own experiences. Finally, the cross-feelter—a blind-centered data analysis hardware prototype—is presented, showcasing a tactile input device that significantly enhances how blind users explore complex relationships in linked data interfaces. Together, these contributions emphasize the importance of tools and toolmaking in creating accessible, inclusive, and customizable data interactions.
Bio: Frank is a PhD candidate and researcher at the Human-Computer Interaction Institute at Carnegie Mellon University. His work explores the intersection of interactive data visualization, accessibility, and tooling as an intervention in the design process. Frank has collaborated with companies such as Apple's Human-Centered Machine Intelligence research group, Adobe, Microsoft, Visa, and Highcharts. Frank’s contributions focus on reimagining accessibility as an integral part of the design and tool-making process, enabling data analysts and designers to build interfaces that proactively empower people with disabilities. His work bridges the gap between technical innovation and disability-centered design, transforming traditional approaches to accessibility into dynamic social and technical interventions that enhance both data exploration and interaction.
Date: May 8th, 2025 12:30 PM
Talk Title: Human-Model Interaction in Civic Sectors
Speaker: Fumeng Yang, Assistant Professor, Department of Computer Science, University of Maryland, College Park
Location: HBK 2105 and Zoom
Abstract:
Computational models—from probabilistic forecasts to AI foundation models—are increasingly shaping decisions in public life. It is critical to ensure that individuals and groups, from the general public to domain experts and data scientists, can perceive, use, and develop these models effectively and responsibly. In this talk, I will share my work on human-model interaction in three areas: election forecasting, AI for education, and AI for decision-making. I will first present our experiments using uncertainty visualizations to build appropriate trust in probabilistic election forecasts. I will then discuss our ongoing work on understanding K–12 teachers' needs and co-designing an LLM-based classroom assessment authoring tool with them. Finally, if time permits, I will briefly introduce our survey of human-AI decision-making.
Bio:
Fumeng Yang is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park. Her research focuses on Human-Computer Interaction and Data Visualization, with an emphasis on how people interact with computational models in decision-making, education, and public communication. Her work has been published in premier venues such as CHI and VIS, and has been recognized with two Best Paper Awards and three Best Paper Honorable Mention Awards. Prior to joining UMD, she was a CCC/CRA Computing Innovation Postdoctoral Fellow at Northwestern University. She received her Ph.D. in Computer Science from Brown University.
Date: Sep 4th, 2025 12:30 PM
Talk Title: Welcome Back Event!
Location: Iribe (IRB) 4105 only
Join us in welcoming everyone back to the HCIL for the 2025 fall semester. Come chat with friends and enjoy some pizza! Note: this event will be in person only.
Date: Sep 11th, 2025 12:30 PM
It's the CHI deadline! As many of our members will be putting finishing touches on their CHI 2026 submissions, we won't have a speaker today.
Date: Sep 18th, 2025 12:30 PM
Talk Title: Rural Computing: Perspectives on Human-Centered Computing from Rural Michigan
Speaker: Jean Hardy, Assistant Professor of Media & Information, Michigan State University; Associate Director of the Quello Center for Media & Information Policy; Director of the Rural Computing Research Consortium
Location: IRB 4105 and Zoom
Abstract: This talk examines rural computing as an emerging field addressing the technological needs and structural inequalities unique to rural communities. Drawing from over a decade of conducting computing research in rural communities, I present a framework that intentionally bridges social and technical approaches through community-centered design and translational research. Using a case study of digital agriculture adoption with a community farm in a former mining community in Michigan, I demonstrate how translational computing work can democratize access to digital agricultural technologies for disadvantaged rural populations. I also share lessons from adapting design research methods after initial failures in rural deployment to better engage communities in conversations about digital infrastructure. Throughout, I illustrate how rural computing requires not just technological innovation but reimagined research approaches that center rural voices and build community capacity to address digital equity challenges.
Bio: Jean Hardy is an Assistant Professor of Media & Information at Michigan State University, where he serves as the Associate Director of the Quello Center for Media & Information Policy and as Director of the Rural Computing Research Consortium. His research employs ethnographic and design methods to investigate the complex and growing relationship between digital technology and rural economic and community development in the United States.
Date: Sep 25th, 2025 12:30 PM
Talk Title: Outlining the Borders for LLM Applications in Patient Education: Developing an Expert-in-the-Loop LLM-Powered Chatbot for Prostate Cancer Patient Education
Speaker: Yuexing Hao, IvyPlus Exchange Ph.D. Scholar, MIT; Final-year Ph.D. candidate, Cornell University
Location: IRB 4105 and Zoom
Abstract: Navigating the transition from diagnosis to treatment remains a significant challenge for many cancer patients, particularly those with limited health literacy and access to institutional resources. In this talk, I will explore how Human-Computer Interaction (HCI) principles can guide the design of Large Language Model (LLM)-based systems to support patient education in oncology. I will present the iterative development of MedEduChat, a patient-facing agent designed to provide accessible, tailored information about prostate cancer. Grounded in a needs assessment and developed through co-design with patients and clinicians, the system adopts a closed-domain, semi-structured interaction model that integrates with patients' electronic health records. Usability evaluations highlight the importance of interpretability, control, and personalization in shaping patients’ engagement with LLM agents. This work contributes to the growing area of patient-AI interaction by articulating design guidelines for building transparent, responsive, and trustworthy LLM-based healthcare applications that align with real-world patient needs.
Bio: Yuexing Hao is an IvyPlus Exchange Ph.D. Scholar at MIT and a final-year Ph.D. candidate at Cornell University. She holds Computer Science degrees from Rutgers University (B.A.) and Tufts University (M.S.). Her research focuses on Health Intelligence, Human-Computer Interaction, and AI, with an emphasis on data-driven approaches to clinical decision-making and patient-centered technologies. Yuexing has been awarded over $140,000 in competitive funding as a principal investigator during her doctoral studies. This includes the APF K. Anders Ericsson Dissertation Grant, the PCCW Frank H.T. Rhodes Leadership and Mission Grants, 2024 North America Women in Tech Most Disruptive Award (powered by Amazon), and the NCWIT AIC Collegiate Award (Honorable Mention). Her work has been published at CHI, AAAI, Bioinformatics, and the Intelligent Systems Conference. She actively serves the research community as Registration Co-Chair for ACM FAccT and Associate Chair for CSCW and CHI.
Date: Oct 2nd, 2025 12:30 PM
Talk Title: The Next Frontier of AI in Creative Spaces: Designing Human-Centered Co-Creative AI
Speaker: Jeba Rezwana, Assistant Professor, Department of Computer & Information Sciences, Towson University
Location: IRB 4105 and Zoom
Abstract: Human-AI co-creativity involves humans and AI collaborating as partners to produce creative artifacts, ideas or performances. This emerging paradigm represents a form of hybrid intelligence, enabling outcomes that neither could achieve alone. With the rapid rise of generative AI systems, human-AI co-creativity has gained unprecedented momentum across domains like design, music composition, visual art, and creative writing. Yet, the full potential of GenAI requires systems that go beyond content generation to effectively communicate, collaborate, understand and adapt to human needs and styles.
Unlike traditional human-computer interaction, human-AI co-creation creates more complex dynamics as 1) AI actively collaborates and shapes the creative process rather than merely responding to commands, 2) AI assumes human-like roles of partner, evaluator, and generator, and 3) AI contributes novel content blended with the user's contribution. These dynamics surface critical challenges for Human-Centered co-creative AI: How should AI systems interact and communicate to foster collaboration and transparency? How can control be shared meaningfully between humans and AI to balance agency and AI autonomy? And how can generative AI adapt to diverse user needs and perceptions to augment both creativity and learning?
My research explores these questions by arguing that the next frontier of co-creative AI requires not just algorithmic competence but also prioritizes collaboration, transparency, user agency and human needs. Such systems should adapt to users’ creative goals and cognitive needs across different stages of creation, supporting dynamic and context-sensitive interactions. My research goal is to design and develop co-creative AI systems that are human-centered, inclusive, engaging, adaptable, and collaborative, empowering humans to create novel artifacts, develop skills, and solve complex problems in diverse creative sectors.
Bio: Jeba Rezwana is an Assistant Professor in the Department of Computer & Information Sciences at Towson University, where she directs the Human-Centered Computing (HCC) Lab. She earned her PhD in 2023 from the University of North Carolina at Charlotte. Her research lies at the intersection of Human-Computer Interaction, Human-AI Co-Creativity, Human-Centered AI, and Ethical AI. Her long-term research goal is to design co-creative AI systems that are human-centered, collaborative, ethical, and adaptable, empowering users to create novel artifacts, develop skills, and solve complex problems across creative domains. Rezwana is actively engaged in the international research communities of HCI and computational creativity. She has served on the organizing committee of ACM Creativity & Cognition (C&C) as Posters and Demos Chair in 2025 and will continue in this role for C&C 2026. She has co-organized the XAIxArts (Explainable AI for the Arts) workshop at C&C since 2023 and contributed to the HAI-GEN (Human-Centered Generative AI) workshop at ACM IUI as both a program committee member and a panelist since 2022. Additionally, she served on the program committee for the Workshop on Computational Design and Computer-Aided Creativity at ICCC 2025 and as an Associate Chair for the CHI review committee. Through her research and service to the creative community, she envisions a future where technology and AI empower people to expand their creative and cognitive potential.
Date: Oct 9th, 2025 12:30 PM
Talk Title: AI-Augmented XR for Secure Space Operations: Reducing Cognitive Load and Enhancing Human Decision-Making
Speaker: Christiana Chamon Garcia, Assistant Professor, Bradley Department of Electrical and Computer Engineering, Virginia Tech ; Director of the Stochastic Noise and Cyber Innovation Lab (SNACIL)
Location: IRB 4105 and Zoom
Abstract: The increasing complexity of space exploration requires innovative solutions to address cybersecurity threats, communication latency, and cognitive overload experienced by human operators. This work proposes an AI-augmented framework that integrates explainable AI, immersive augmented/virtual reality interfaces, and resilient machine learning models to enhance threat detection, astronaut training, and operational efficiency in environments such as low Earth orbit satellites and Mars missions. A central element is a virtual reality-based Orbital Cyber Range for simulating cyberattacks, dynamically monitoring cognitive load through eye-tracking and physiological metrics, including pupil dilation and heart rate variability. Initial findings demonstrate the efficacy of machine learning approaches—such as LightGBM with 0.782 AUC—for cognitive load detection, and the value of explainable AI in transparent threat mitigation. Visual Language Models are utilized to process multimodal sensory data, and reinforcement learning is applied to adapt encryption protocols in real time, reducing latency-induced stress. The work discusses current challenges, including AI vulnerabilities to adversarial attacks and the need for scalable datasets, and highlights opportunities for refining models to enable robust, human-centered decision support for future space operations.
Bio: Christiana Chamon Garcia received the B.S. degree in electrical engineering from the University of Houston, Houston, TX, USA, in 2017 and the M.S. and Ph.D. degrees in electrical engineering from Texas A&M University, College Station, TX, USA, in 2020 and 2022, respectively. She is currently an Assistant Professor for the Bradley Department of Electrical and Computer Engineering at Virginia Tech, Blacksburg, VA, USA and the director of the Stochastic Noise and Cyber Innovation Lab (SNACIL).
In 2016 she was a Hardware Design Intern and in 2017 a Product Management Intern with Hewlett Packard Enterprise. In 2022, she was an Applications Engineer at Vidatronic Inc, and from 2022 to 2024, she was an Instructional Assistant Professor for the Department of Computer Science and Engineering at Texas A&M University, College Station, TX, USA. Her research interests include stochastic processes, physical unclonable functions, decentralized networks, cyber-physical systems, security in artificial intelligence, engineering education, human-computer interaction, accessibility in engineering and kinesiology, and exercise science.
Dr. Garcia is a 2024 CCI Faculty Fellow, 2023 Future Faculty Diversity Program (FFDP) Fellow, 2023 WISCPROF: Future Faculty in Engineering Workshop Fellow, 2020-2021 Ebensberger Fellow, and a member of ACM, IEEE, Eta Kappa Nu (HKN), and Golden Key International Honor Society.
Date: Oct 16th, 2025 12:30 PM
Talk Title: Local Circular Electronics Lifecycle Empowered by Sustainable Computational Fabrication
Speaker: Zeyu Yan, Ph.D. candidate, Computer Science, University of Maryland (UMD)
Location: IRB 4105 and Zoom
Abstract: The evolution of electronic devices has exposed critical sustainability and resilience challenges, driven by reliance on centralized manufacturing. Meanwhile, the rapid growth of digital fabrication—from makerspaces to print farms—offers a unique foundation for building local infrastructures that can disassemble, recycle, and re-manufacture electronics. This talk presents printed circuit board assemblies (PCBAs) as a testbed for this vision and highlights three research projects: SolderlessPCB, PCB Renewal, and DissolvPCB. Each explores a distinct approach to enabling circular PCBA lifecycles through fabrication methods that are accessible and adaptable. Building on these prototypes, I set forth a vision of digital fabrication as the foundation for future sustainable electronics—extending beyond PCBs to diverse materials and assemblies, and becoming seamless, scalable, and integral to everyday life.
Bio: Zeyu Yan is a Ph.D. candidate in Computer Science at the University of Maryland (UMD). He conducts interdisciplinary human-computer interaction (HCI) research spanning digital fabrication, tangible user interfaces, accessibility, and haptic interaction, with a recent focus on designing innovative digital fabrication techniques that promote sustainable practices in printed circuit board (PCB) prototyping. His work has been published and recognized at top-tier HCI conferences, contributing to advancements in accessible and eco-conscious technology design.
Fall 2023
-
Date: Sep 7th, 2023 12:30 PM
Speaker: Dr. Susan Winter, Associate Dean for Research, College of Information Studies, the University of Maryland
Location: HBK 2105 and Zoom Watch Here!
Abstract: Technology is no longer just about technology – now it is about living. So, how do we have ethical technology that creates a better life and a better society? Technology must become truly “human-centered,” not just “human-aware” or “human-adjacent”. Diverse users and advocacy groups must become equal partners in initial co-design and in continual assessment and management of information systems with human, social, physical, and technical components. But we cannot get there without radically transforming how we think about, develop, and use technologies. In this chapter, we explore new models for digital humanism and discuss effective tools and techniques for designing, building, and maintaining sociotechnical systems that are built to be, and remain continuously ethical, responsible, and human-centered.
Bio: Dr. Susan Winter, Associate Dean for Research, College of Information Studies, the University of Maryland. Dr. Winter studies the co-evolution of technology and work practices, and the organization of work. She has recently focused on ethical issues surrounding civic technologies and smart cities, the social and organizational challenges of data reuse, and collaboration among information workers and scientists acting within highly institutionalized sociotechnical systems. Her work has been supported by the U.S. National Science Foundation and by the Institute of Museum and Library Services. She was previously a Science Advisor in the Directorate for Social Behavioral and Economic Sciences, a Program Director, and Acting Deputy Director of the Office of Cyberinfrastructure at the National Science Foundation supporting distributed, interdisciplinary scientific collaboration for complex data-driven and computational science. She received her PhD from the University of Arizona, her MA from the Claremont Graduate University, and her BA from the University of California, Berkeley.
There are hundreds of productivity apps and tools to help you get work done--far too many for any one person to go through and figure out what works best for them. In this week's BBL, we want you to share the tools, apps, and tips you use to help you in your research, classwork, and writing. How do you stay organized? What helps you be productive? What are things that didn't work for you? We'll talk about what people like and don't and run some quick demos during this BBL.
Fill out this form to share what you use.
Join us in the lab (HBK-2105) or on Zoom to hear about cool tools and to share the ones you use!
-
Date: Sep 21st, 2023 12:30 PM
Time: 12:30pm-1:30pm ET
Location: HBK 2105
This week we’ll do another round of our experimenting with “research speed dating”! If it’s anything like the last iteration of this in the Spring, it’ll be a fun time to hear from each other about what we’re brainstorming/working on, and give feedback in a lightweight, informal, low-stakes setup!
-
Date: Sep 28th, 2023 12:30 PM
Speaker: Dr. Madina Khamzina, postdoctoral associate, Department of Family Science, School of Public Health, University of Maryland
Location: HBK 2105 and Zoom
Watch Here! | Slides Here!
Abstract: This talk discusses the opportunities and challenges of technology to support successful aging. The population of people aged 65 and older is growing faster than any other age group worldwide. While people are living longer, it's crucial to ask whether those additional years are being lived healthier and happier. Successful aging has become a central priority at both societal and individual health levels. Technology holds the promise to significantly contribute to successful aging in various ways: for example, keeping people physically active, enabling independent living through fall detection and smart home technology, aiding in the early detection and management of diseases, and helping maintain social connections to reduce isolation. Keeping in mind that aging in the digital era presents its own set of challenges, we need to ensure that technologies are inclusive and accessible to everyone regardless of age. Addressing older adults' specific needs and characteristics is crucial in the endeavor to reap the benefits of technology for successful aging.
Bio: Madina earned her Ph.D. degree from the University of Illinois at Urbana-Champaign in December 2022. She is currently a postdoctoral associate at the School of Public Health and is primarily focused on work with the University of Maryland Extension Services. While working in the Human Factors and Aging Lab in Illinois, she became passionate about the role of technology in supporting successful aging. She is the principal investigator of a research project with the University of Maryland Extension that aims to assess the needs and challenges of broadband internet and technology adoption among older adults in Maryland.
-
Date: Oct 5th, 2023 12:30 PM
Location: HBK 2105 and Zoom
Abstract: Even if you didn’t submit a paper to this year’s CHI conference, if you’re doing research, you probably know something about the review process. For most journals and conferences, submitted papers are read by 2-4 anonymous reviewers, who provide written feedback on the strengths and weaknesses of the paper and decide whether a paper should be accepted, rejected, or revised. But what should go into the review process? And how should you respond to reviews? In this session, we’ll discuss tips and tricks for being an effective reviewer, how to provide constructive criticism, and how to respond to reviewer comments. Bring your questions and experiences with reviewing, and learn more about the ups and downs of academic publishing.
-
Date: Oct 12th, 2023 12:30 PM
Speaker: Ming Yin, Assistant Professor, Department of Computer Science, Purdue University
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the usage of AI-based decision aids in human decision-making processes has resulted in a new paradigm of human-AI decision making—that is, the AI-based decision aid provides a decision recommendation to the human decision makers, while humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans and AI collaborate with each other in these decision-making processes, and how to promote the effectiveness of these collaborations. In this talk, I'll discuss a few research projects that my group carries out on empirically understanding how humans trust the AI model via human-subject experiments, quantitatively modeling humans' adoption of AI recommendations, and designing interventions to influence the human-AI collaboration outcomes (e.g., improve human-AI joint decision-making performance).
Bio: Ming Yin is an Assistant Professor in the Department of Computer Science, Purdue University. Her current research interests include human-AI interaction, crowdsourcing and human computation, and computational social sciences. She completed her Ph.D. in Computer Science at Harvard University and received her bachelor's degree from Tsinghua University. Ming was the Conference Co-Chair of AAAI HCOMP 2022. Her work was recognized with multiple best paper (CHI 2022, CSCW 2022, HCOMP 2020) and best paper honorable mention awards (CHI 2019, CHI 2016).
-
Date: Oct 19th, 2023 12:30 PM
Speaker: Karen Holtzblatt
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Advancements in technology, the globalization of companies, and a growing awareness of environmental issues have catalyzed a shift in work cultures, transforming traditional face-to-face meetings into online ones. The COVID-19 pandemic further accelerated this transition, establishing videoconferencing as the prevailing mode of professional interaction. But now companies are asking workers to come back to the office at least some of the time. They cite better collaboration, information sharing, and coaching for early career folks. But is that true, and what does it really mean? To find out, we conducted 11 deep-dive interviews, primarily with HCI professionals, to understand their experience of working in person vs. remotely or hybrid. HCI professionals often find themselves organizing, leading, facilitating, and participating in complex interactive meetings of various kinds: data synthesis, ideation, brainstorming, design review with whiteboarding, roadmapping, and project kickoffs. Our work complements recent survey-based research on Return-to-Work and gives a deeper understanding of what is going on. We sought to gain insights into these types of meetings and interactions to understand participants' experiences and what works and what doesn't. We hope these findings will help guide both HCI professionals and companies as they choose when to be in person and how to best run hybrid and remote meetings. We spoke with both senior people and early career professionals. Our insights are also set against the backdrop of last year's research into the experience of remote working during the pandemic and related literature. The presentation will tell stories of our experiences and explicate what drives people to bring people together for these complex meetings and what impacts the success of these meetings in any context. We will also describe the impact of the social dimension of working together.
We discuss the need for a shared understanding, ensuring engagement, managing the meeting, and the powerful role of nonverbal communication as well as the need and desire for connection both for its own sake and for the sake of the work and career.
Bio: Karen Holtzblatt is a thought leader, industry speaker, and author. A recognized innovator in requirements and design, Karen has developed transformative design approaches throughout her career. She introduced Contextual Inquiry and Contextual Design, the industry standard for understanding the customer and organizing that data to drive innovative product and service concepts. Her newest book, Contextual Design 2nd Edition: Design for Life, is used by companies and universities worldwide. Karen co-founded InContext Design in 1992 with Hugh Beyer to use Contextual Design techniques to coach product teams and deliver market data and design solutions to businesses across scores of industries in many countries. As CEO of InContext, Karen has worked with product, application, and design teams for over 30 years. Karen is also the driving force behind the Women in Tech Retention Project housed at witops.org. WITops research explores why women in technology professions leave the field and creates tested interventions to help women thrive and succeed. Her new book with Nicola Marsden, Retaining Women in Tech: Shifting the Paradigm, shares this work. Karen consults with companies to help them understand their diverse teams and improve retention, team cohesion, and equal participation by all. As a member of ACM SIGCHI (The Association for Computing Machinery's Special Interest Group on Computer-Human Interaction), Karen was awarded membership to the CHI Academy, a gathering of significant contributors, and received the first Lifetime Award for Practice for her impact on the field. Karen has also been an Adjunct Research Scientist at the University of Maryland's iSchool (College of Information Studies). Karen has worked with many universities to help design curricula for training user experience professionals. She has more than 30 years of teaching experience in professional, conference, and university settings.
She holds a doctorate in applied psychology from the University of Toronto.
-
Date: Oct 26th, 2023 12:30 PM
Speaker: Dr. Emma Dixon, Assistant Professor, Clemson University
Location: HBK 2105 and Zoom Watch Here! | Slides Here!
Abstract: We are seeing new AI systems for people with dementia, such as brain games which detect and diagnose cognitive impairment and smart-home systems to monitor the daily activities of people with dementia while caregivers are away. Although these are important areas of research, there are open opportunities to extend the use of AI to support individuals with dementia in a variety of different aspects of everyday life outside of diagnosis and monitoring. In this talk, Emma Dixon will briefly discuss her work in the area of AI for people experiencing age-related cognitive changes. The first study examines the technology accessibility needs of individuals with dementia, uncovering ways AI may be used to provide personalized solutions. The second study explores the ways tech-savvy people with dementia configure commercially available AI systems to support their everyday activities. Finally, the third study focuses on the design of future applications of AI to support the everyday life of people with dementia.
Bio: Dr. Emma Dixon is an Assistant Professor in Human-Centered Computing with a joint appointment in Industrial Engineering at Clemson University. Her research investigates technology use by neurodivergent individuals and people living with neurodegenerative conditions. In doing so, her research agenda is situated at the intersection of health information technology and cognitive accessibility research. Due to the complexity of this space, she takes a mixed methods approach, using qualitative methods to ground her work deeply in a situated understanding of people's experiences and quantitative methods to test the usability of emerging technologies. She earned her undergraduate degree in Industrial Engineering at Clemson University and her PhD in Information Studies at the University of Maryland, College Park. Her research has received a Dean's Award for Outstanding iSchool Doctoral Paper, as well as a Best Paper Nomination and Honorable Mention awards at the ASSETS and CSCW conferences. She has published her work in CHI, CSCW, ASSETS, JMIR Mental Health, Applied Ergonomics, and TACCESS. Her dissertation work was supported by the NSF Graduate Research Fellowship.
-
Date: Nov 2nd, 2023 12:30 PM
Speaker: Foad Hamidi, Assistant Professor in Information Systems at the University of Maryland, Baltimore County (UMBC)
Location: HBK 2105 and Zoom
Watch Here!
Abstract: Community-based participatory design (PD) offers inclusive and exciting principles and methods for enabling mutual learning among diverse interested parties. As PD moves from the workplace to other domains, such as Do-it-Yourself (DIY) design spaces, informal learning contexts, and domestic and home settings, we need to rethink and redefine what it means to do PD and what outcomes can move us towards desired futures. In this talk, I draw on several of my recent projects where I use PD to investigate and interrogate emerging technologies, such as DIY assistive technologies and living media interfaces (LMIs).
Bio: Foad Hamidi is an Assistant Professor in Information Systems at the University of Maryland, Baltimore County (UMBC). His research focuses on several areas within Human-Computer Interaction (HCI), including Living Media Interfaces, Participatory Design, and DIY assistive technology. He conducts transdisciplinary community-engaged research and regularly collaborates with community partners. At UMBC, he directs the DesigningpARticipatoryfuturEs (DARE) lab and the Interactive Systems Research Center (ISRC). He has a PhD in Computer Science from York University, Toronto.
-
Date: Nov 9th, 2023 12:30 PM
Speaker: Dr. Herman Saksono, Assistant Professor, Health Sciences & CS, Northeastern University
Location: HBK 2105 and Zoom
Watch Here!
Abstract: We live in a storied life. Stories from people at present and in the past are guiding our actions in the future. Although this narrative mode of knowing complements the pragmatic mode, the pragmatic mode of knowing is the only ubiquitously supported mode in personal health informatics systems. In this talk, I will present my research on personal health informatics that uses storytelling to support health behavior in marginalized communities. These studies examined how storytelling technologies can amplify social connections and knowledge within the family and neighbors. The use of stories socially is a departure from health technologies that are often individually focused. Technologies that portray health solely as an individual’s responsibility could widen health disparities because marginalized communities face numerous health barriers due to systemic inequities. Storytelling health informatics could lessen this burden by supporting health behaviors as collective community efforts.
Bio: Dr. Herman Saksono is an Assistant Professor at Northeastern University with a joint appointment at the Bouvé College of Health Sciences and the Khoury College of Computer Sciences. Previously, he was a postdoctoral research fellow at the Center for Research on Computation and Society at Harvard University. He completed his Ph.D. in Computer Science at Northeastern University and was a Fulbright scholarship recipient.
Herman’s interdisciplinary research contributions are in Personal Health Informatics, Human-Computer Interaction, and Digital Health Equity. His research investigates how digital tools can catalyze social interactions that encourage positive health behaviors, thus facilitating collective efforts toward health equity. He conducts the entire human-centered design process by designing, building, and evaluating innovative health technologies in collaboration with local community partners. Herman published his work in ACM CHI and CSCW where he received honorable mentions for Best Paper awards.
-
Date: Nov 16th, 2023 12:30 PM
Talk Title: Student Lightning Talks
Location: HBK 2105 and Zoom
This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!
How do lightning talks work?
Typically, people give a 4-5 minute “presentation” — this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or “ask” that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.
Talks are held in the HCIL (HBK2105), but if you can’t make it in person, register for Zoom here.
-
Date: Nov 30th, 2023 12:30 PM
Speaker: Dr. Jane Chung, Associate Professor, Virginia Commonwealth University School of Nursing
Location: HBK 2105 and Zoom Watch Here!
Abstract: Older adult residents of low-income housing are at a high risk of unmanaged health conditions, loneliness, and limited healthcare access. Smart speakers have the potential to improve social connections and well-being among older adult residents. We conducted an iterative, user-centered design study with primarily African American older adults who lived alone in low-income housing to develop low-fidelity prototypes of smart speaker applications for wellness and social connections. Focus groups were held to elicit feedback about challenges with maintaining wellness and attitudes towards smart speakers. Through design workshops, participants identified several smart speaker functionalities perceived as necessary for improving wellness and social connectedness. Several low-fidelity prototypes and use scenarios were then developed in the following categories: wellness check-ins, befriending the virtual agent, community involvement, and mood detection. We demonstrate how smart speakers can provide a tool for wellness and increase access to applications that offer a virtual space for social engagement. This presentation will also highlight strategies for addressing digital health inequities among socially vulnerable older adults. The goal is to enhance technology proficiency, reduce fear, and ultimately foster the acceptance of essential technologies.
Bio: Dr. Jane Chung is an Associate Professor at Virginia Commonwealth University School of Nursing. She is a nurse scientist with special emphasis on aging and technology research. Her research program has two foci: 1) advancing the methods for functional health monitoring and risk detection among older adults using innovative sensor technologies and 2) improving social connectedness and well-being in socially vulnerable older adults based on advances in data science and digital technologies, including novel machine learning algorithms. She currently leads two NIH-funded studies: an R01 project to identify digital biomarkers of mobility that are predictive of cognitive decline in community-dwelling older adults, and an R21 project in which her team is developing a smart speaker-based system for automatic loneliness assessment in older adults. Recently, she was selected as a fellow for the Betty Irene Moore Fellowship for Nurse Leaders and Innovators; in this fellowship program, she is working on a smart speaker-based intervention designed to assist low-income older adults in managing chronic conditions and daily activities more effectively.
-
Date: Dec 7th, 2023 12:30 PM
Speaker: Seongkook Heo, Assistant Professor, CS, University of Virginia
Location: HBK 2105 and Zoom Slides Here!
Abstract: Computers are more deeply integrated into our daily lives than ever before, and recent advancements in ML and AI technologies enable computers to comprehend the real world. However, using such capabilities for daily tasks still induces friction because of inefficient interactions with them.
In this talk, I will share my group's research on how we can better connect the physical and virtual worlds through the design and development of interactive systems. First, I will discuss how we can bring objects and interactions of the physical world into the virtual world to make virtual communication rich and frictionless. In many computer-mediated meetings, we not only share our faces and voices but also physical objects. We developed a remote meeting system that supports the instant conversion of physical objects into virtual objects to allow efficient sharing and manipulation of objects during the conversation.
Second, I will share how we can physicalize computation results into physical actions. Many projects and applications have demonstrated the use of AI in assisting users with visual impairments. However, computers usually only provide guidance feedback to the user and leave the interpretation of the feedback and the execution to the user, which can be cognitively heavy tasks. We suggested automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically. Finally, I will discuss the implications and remaining challenges in bridging the two realities.
Bio: Seongkook Heo is an assistant professor in the Department of Computer Science at the University of Virginia. He has been working on Human-Computer Interaction (HCI) research, focusing on bridging the gap between physical and virtual worlds to make computers better support rich and nuanced human interactions by designing novel interactive systems and developing sensing and feedback technologies. His research has been published at top HCI venues, including CHI, UIST, and CSCW, and recognized by Best Paper and Poster Awards at CHI, MobileHCI, and IEEE VR. He is also the recipient of the Engineering Research Innovation Award at the University of Virginia and the Meta Research Award. He received his Ph.D. at KAIST and worked at the University of Toronto as a postdoctoral researcher before joining the University of Virginia.
Spring 2023
-
Date: Jan 26th, 2023 12:30 PM
Catherine Plaisant is a Research Scientist Emerita at UMIACS and an HCIL member. Catherine earned a Doctorat d'Ingénieur degree in France and joined HCIL in 1988. She works with multidisciplinary teams on designing and evaluating new interface technologies that are useful and usable. In 2015 she was elected to the ACM SIGCHI Academy, recognizing principal leaders in the field of Human-Computer Interaction. In 2018 she was awarded an INRIA International Chair, and in 2020 she received the IEEE VIS Career Award and the ACM SIGCHI Lifetime Service Award. She has published over 200 papers on subjects as diverse as information visualization, medical informatics, universal access, decision making, digital humanities, and technology for families. Her work spans the interface development lifecycle, with contributions to requirements gathering, interface design, and evaluation.
-
Date: Feb 2nd, 2023 12:30 PM
There are hundreds of productivity apps and tools to help you get work done--far too many for any one person to go through and figure out what works best for them. In this week's BBL, we want you to share the tools, apps, and tips you use to help you in your research, classwork, and writing. How do you stay organized? What helps you be productive? What are things that didn't work for you? We'll talk about what people like and don't and run some quick demos during this BBL.
Fill out this form to share what you use.
Join us in the lab (HBK-2105) or on Zoom to hear about cool tools and to share the ones you use!
-
Date: Feb 16th, 2023 12:30 PM
Abstract: In this talk, I'll present the potential of using biosensor-based feedback to support instructors in providing emotional and instructional scaffolding for English language learners (ELLs). This research includes classifying the intensity and characteristics of public speaking anxiety (PSA) and foreign language anxiety (FLA) among ELLs, with a view to providing tailored feedback to instructors. A focus group interview was conducted to identify instructors' needs for solutions providing emotional and instructional support for ELLs. This was followed by an ideation and design session, where prototypes incorporating biosensing technology were designed to support teaching. I conclude this talk by discussing the feasibility of using electrodermal activity (EDA) to measure ELLs' emotional states, providing an algorithm for classifying speaking anxiety, and offering design guidance for an educational system using EDA data in an ESL/EFL environment, as well as the instructors' perspectives on using biosensor-based feedback in teaching.
Bio: Heera Lee is a Lecturer in the Information Studies department at the University of Maryland, College Park. Her research interests in Human-Computer Interaction (HCI) are educational technologies and affective computing for English language learners (ELLs) from diverse cultures. She has focused on investigating factors contributing to public speaking anxiety (PSA) and foreign language speaking anxiety (FLA) among ELLs by analyzing self-report questionnaires, individual interviews, non-verbal behaviors, and physiological data including electrodermal activity (EDA). These interests stem from her teaching experience as an instructor at the English Language Institute (ELI), a Teaching Assistant in undergraduate programs at the University of Maryland, Baltimore County (UMBC), and an adjunct faculty member at Towson University.
Join us in the lab or on Zoom (register here).
-
Date: Feb 23rd, 2023 12:30 PM
Abstract: In recent years, computational thinking (CT) and creativity have been recognized as essential skills to acquire from a young age. Despite rich and fruitful research efforts to understand these skills, the association between CT and creativity still needs to be fully understood. In this lecture, I will present our research on the connection between CT and creativity among middle school students through designed challenges in a game-based learning environment. I will discuss the impact of our intervention program to promote these skills and describe the practices for collecting and analyzing data from standard creativity tests and the learning environment logfiles.
Bio: Rotem Israel-Fishelson is a postdoctoral researcher in the Department of Teaching & Learning, Policy & Leadership in the College of Education at the University of Maryland. Her research explores ways to introduce learners to data science using engaging computational learning experiences. She is also interested in assessing computational thinking and creativity skills in game-based learning environments using learning analytics methods. Rotem holds a Ph.D. in Science Education from Tel Aviv University, an M.Sc. in Media Technology from Linnaeus University, and a B.A. in Instructional Design from the Holon Institute of Technology.
Join us in the lab or on Zoom (register here).
-
Date: Mar 2nd, 2023 12:30 PM
Abstract: Human spaceflight over the past 60 years has been remarkably safe. This has been largely due to the fact that support from Earth, in the form of near-real-time communication, resupply, and evacuation options, has been a successful countermeasure to the significant hazards associated with in-space operations. Longer duration missions to the Lunar surface, and then to Mars, will quickly break this approach, requiring a paradigm shift in terms of on-board, in-mission capabilities for increased Earth-independence.
Bio: Dr. Alonso Vera is Chief of the Human Systems Integration Division at NASA Ames Research Center. He has worked at NASA for over 20 years and has served as Division Chief since 2010. Dr. Vera has cross-disciplinary expertise in human performance, human-computer interaction, and artificial intelligence. He has led the development and deployment of software systems across NASA robotic and human space flight missions including Mars Exploration Rovers, Phoenix Mars Lander, Mars Science Laboratory, Space Shuttle, International Space Station, and Exploration Systems. Dr. Vera received a Bachelor of Science from McGill University and a Ph.D. from Cornell University. He went on to a Post-Doctoral Fellowship in the School of Computer Science at Carnegie Mellon University.
Join us in the lab or on Zoom (register here).
This event is cosponsored with the Organizational Teams and Technology Research Society (OTTRS). Read more about this research group at https://ottrs.ischool.umd.edu.
-
Date: Mar 9th, 2023 12:30 PM
Abstract: In this talk, I will discuss key issues underlying the current incentive systems for research evaluation, summarize existing data on the relation between key indicators of research quality and traditional metrics, and highlight some of the challenges with reputation-based systems. I argue that real reform in research evaluation requires a fundamental rethinking of how we conceptualize research productivity, moving away from traditional incentive structures that heavily weigh quantity and toward a model in which the incentives align with our institutional and scientific values. I suggest that these reforms must be designed in ways that incentivize researchers to engage in pro-social behaviors.
Bio: Dr. Dougherty received his PhD in 1999 from the University of Oklahoma and his BS from Kansas State University in 1993. Dr. Dougherty has received numerous research awards, including the Hillel Einhorn Early Investigator Award from the Society for Judgment and Decision Making, and the early investigator CAREER award from the National Science Foundation. Dr. Dougherty was appointed chair of the Department of Psychology in 2017.
Join us in the lab or on Zoom (register here).
-
Date: Mar 16th, 2023 12:30 PM
Abstract: What happens to the little ones, the tweens, and the teenagers, when technology, ubiquitous in the world they inhabit, becomes a critical part of their lives? Technology’s Child brings clarity to what we know about technology’s role in child development and provides guidance on how to help children of all ages make the most of their digital experiences.
From toddlers who are exploring their immediate environment to twenty-somethings who are exploring their place in society, technology inevitably and profoundly affects their development. Drawing on her expertise in developmental science and design research, Dr. Katie Davis describes what happens when child development and technology design interact, and how this interaction is complicated by children’s individual characteristics as well as social and cultural contexts. Critically, she explains how a self-directed experience of technology—one initiated, sustained, and ended voluntarily—supports healthy child development, especially when it takes place within the context of community support, and how an experience that lacks these qualities can have the opposite effect.
Bio: Katie Davis is Associate Professor at the University of Washington (UW) Information School, Adjunct Associate Professor in the UW College of Education, and a founding member and Co-Director of the UW Digital Youth Lab. Katie investigates the impact of digital technologies on young people’s learning, development, and well-being, and co-designs positive technology experiences for youth and their families. Her work bridges the fields of human development, human-computer interaction, and the learning sciences. In addition to her academic papers, Katie is the author of three books exploring technology’s role in young people’s lives: Technology’s Child: Digital Media’s Role in the Ages and Stages of Growing Up (MIT Press, 2023), Writers in the Secret Garden: Fanfiction, Youth, and New Forms of Mentoring (with Cecilia Aragon, MIT Press, 2019), and The App Generation: How Youth Navigate Identity, Intimacy, and Imagination in a Digital World (with Howard Gardner, Yale University Press, 2013). Prior to joining the faculty at the University of Washington, Katie was a research scientist at Harvard Project Zero, where she worked on the research team that collaborated with Common Sense Media to develop the first iteration of their digital citizenship curriculum. She holds two master’s degrees and a doctorate in Human Development and Education from Harvard Graduate School of Education.
Join us in the lab or on Zoom (register here).
-
Date: Mar 30th, 2023 12:30 PM
Dr. Andrea Parker
Transforming the Health of Communities through Innovations in Social Computing
Abstract: Digital health research—the investigation of how technology can be designed to support wellbeing—has exploded in recent years. Much of this innovation has stemmed from advances in the fields of human-computer interaction and artificial intelligence. A growing segment of this work is examining how information and communication technologies (ICTs) can be used to achieve health equity, that is, fair opportunities for all people to live a healthy life. Such advances are sorely needed, as there exist large disparities in morbidity and mortality across population groups. These disparities are due in large part to social determinants of health, that is, social, physical, and economic conditions that disproportionately inhibit wellbeing in populations such as low-socioeconomic status and racial and ethnic minority groups.
Despite years of digital health research and commercial innovation, profound health disparities persist. In this talk, I will argue that to reduce health disparities, ICTs must address social determinants of health. Intelligent interfaces have much to offer in this regard, and yet their affordances—such as the ability to deliver personalized health interventions—can also act as pitfalls. For example, a focus on personalized health interventions can lead to the design of interfaces that help individuals engage in behavioral change. While such innovations are important, to achieve health equity there is also a need for complementary systems that address social relationships. Social ties are a crucial point of focus for digital health research as they can provide meaningful supports for positive health, especially in populations that disproportionately experience barriers to wellbeing.
I will offer a vision for digital health equity research in which interactive and intelligent systems are designed to help people build, enrich, and engage social relationships that support wellbeing. By expanding the focus from individual to social change, there is tremendous opportunity to create disruptive interventions that catalyze and sustain population health improvements.
Bio: Andrea Grimes Parker is an Associate Professor in the School of Interactive Computing at Georgia Tech. She is also an Adjunct Associate Professor in the Rollins School of Public Health at Emory University and at Morehouse School of Medicine. Dr. Parker holds a Ph.D. in Human-Centered Computing from Georgia Tech and a B.S. in Computer Science from Northeastern University. She is the founder and director of the Wellness Technology Lab at Georgia Tech. Her interdisciplinary research spans the domains of human-computer interaction and public health, as she examines how social and interactive computing systems can be designed to address health inequities.
Dr. Parker has published widely in the space of digital health equity and received several best paper honorable mention awards for her research. Her research has been funded through awards from the National Science Foundation, the National Institutes of Health, the Aetna Foundation, Google, and Johnson & Johnson. Additionally, she is a recipient of the 2020 Georgia Clinical & Translational Science Alliance Team Science Award. Dr. Parker has held various leadership roles, including serving as co-chair for the Workshop on Interactive Systems in Healthcare (WISH) and as a member of the Johnson & Johnson / Morehouse School of Medicine Georgia Maternal Health Research for Action Steering Committee.
Join us in the lab or on Zoom (register here).
-
Date: Apr 6th, 2023 12:30 PM
Talk Title: Evidence Standards and Data Science for All
Abstract: Scientific fields often believe that they hold a strong basis of evidence for claims made by their own community. In practice, however, exactly what evidence is expected for a paper to be published or for a hypothesis to become an accepted theory is complex and historically bizarre. In this talk, we will discuss a snippet of the history of evidence and how these lessons are morphing into what some scholars are calling data science. In the process, we will discuss barriers and problems in data science that need resolution for it to become accessible to the general public, scholars, and notably people with disabilities.
Bio: Andreas Stefik is a professor of computer science at the University of Nevada, Las Vegas. For more than a decade, he has been creating technologies that make it easier for people, including those with disabilities, to write computer software. He helped establish the first national educational infrastructure for blind or visually impaired students to learn computer science and invented the first evidence-based programming language, Quorum. The design of Quorum is derived from data gathered through methodologies similar to those used in the medical community. He has been a principal investigator on 8 NSF-funded grants, many of which relate to accessible graphics and computer science education. Finally, he was honored with the 2016 White House Champions of Change award and the Expanding CS Opportunities award from Code.org and the Computer Science Teachers Association.
-
Date: Apr 13th, 2023 12:30 PM
Arvind Satyanarayan, Associate Professor, MIT
Title: Intelligence Augmentation through the Lens of Interactive Data Visualization
Abstract: Recent rapid advances in machine learning have brought new energy to the future of human + machine partnerships. In this talk, I will use three research threads on interactive data visualization to better understand the balance between automation and augmentation. First, I will describe how new specifications of visual and non-visual data representations allow us to reason about visual perception and cognition. Second, I will explore how visualization can be used to bridge human mental models and machine-learned representations. And, finally, I will discuss how data visualization already exhibits an epistemological crisis of truth—one that generative models threaten to further widen.
Bio: Arvind Satyanarayan is Associate Professor of Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He leads the MIT Visualization Group, which uses visualization as a lens to explore how software systems can enhance our creativity and cognition, while respecting our agency. Arvind's work has been recognized with an IEEE VGTC Significant New Researcher award, an NSF CAREER and Google Research Scholar award, a Kavli fellowship, best paper awards at academic venues (e.g., ACM CHI and IEEE VIS), and honorable mentions amongst practitioners (e.g., Kantar's Information is Beautiful Awards). Visualization systems he has helped develop are widely used in industry (including at Apple, Google, Microsoft, and Netflix), on Wikipedia, and by the Jupyter/Python data science communities.
Join us in the lab or on Zoom (register here).
-
Date: Apr 20th, 2023 12:30 PM
Students and faculty prepare for upcoming ACM CHI conference talks.
Join us in the lab or on Zoom (register here).
-
Date: Apr 27th, 2023 12:30 PM
Carolina Batista (left) and Flávia Batista (right)
Title: Affective Polarization and Support for Democratic Institutions: Evidence from Survey Experiments in Brazil, Chile, and Colombia
Abstract: We examine the relationship between partisan social media messages and voters' support for democratic institutions. The experiments test whether partisan voters favor dissolving Congress or impeaching the president to advance in-group goals, considering a range of messages on consensual and wedge issues. We hypothesize that incumbent voters and opposition voters are more likely to reduce their support for democratic institutions controlled by out-group members, with opposition respondents more supportive of impeaching the president and government respondents more supportive of dissolving Congress. Partisan messages are expected to increase these effects, weakening those institutions controlled by the out-group party. We implement survey experiments in Chile, Brazil, and Colombia between October 2022 and March 2023. The experiments randomly expose respondents to partisan messages on issues such as inflation, abortion, crime, and protests. Inter-party differences conform to expectations, with opposition voters reporting higher preferences for impeaching the president and government supporters reporting higher preferences for dissolving Congress. However, we find no consistent social media effect. Incumbent and opposition voters support undemocratic policies that align with in-group goals, yet the effect does not increase with exposure to partisan social media messages on wedge issues.
Bios:
Carolina Batista is a Ph.D. student in Government and Politics at the University of Maryland. She holds a Master's degree in International Policy Analysis from the Pontifical Catholic University of Rio de Janeiro, Brazil. At UMD, Carolina is a member of the Interdisciplinary Laboratory of Computational Social Science (iLCSS) and the Latin American and Caribbean Studies Center (LACS). Her main research interests include computational methods, political behavior, democracy, polarization, and social justice in Latin America.
Flávia Batista is a second-year Ph.D. student in the Government and Politics Department at the University of Maryland, College Park, majoring in Comparative Politics and Political Methodology. At UMD, Flávia is a member of the Interdisciplinary Laboratory of Computational Social Science (iLCSS) and the Latin American and Caribbean Studies Center (LACS). She holds a B.A. in International Relations from the University of Brasilia, Brazil, and an M.A. in Brazilian Studies from the University of Illinois at Urbana-Champaign. Flávia's primary research interests include elections, electoral campaigns, disinformation, and democratic backsliding.
Join us in the lab or on Zoom (register here).
-
Date: May 4th, 2023 12:30 PM
Yi Ting Huang, Associate Professor, Department of Hearing and Speech Sciences, University of Maryland
Title: Technology and the future of clinical services: Language, communication, and disabilities
Abstract: Speech-language pathologists and audiologists are at the front lines of improving functional language and communication across the lifespan. Their work spans wide-ranging disabilities such as language disorders, autism, stuttering, hearing loss, traumatic brain injury, stroke, and dementia. The success of early diagnosis, paired with rapidly changing US demographics, has introduced two broad challenges. First, clinicians face massively increasing caseloads and new populations that were unseen decades ago. Second, clinicians are a 93% white workforce while 56% of clients identify as people of color, which raises a host of challenges related to cultural and linguistic diversity. While existing technology has focused on specific client needs (e.g., hearing aids, AAC devices), developing tools that can increase the efficiency and efficacy of service delivery in a heavily labor-intensive industry will improve quality of life for individuals with disabilities at scale. To that end, I will introduce three ongoing projects that leverage 1) telehealth to provide language therapy for children with Developmental Language Disorder, 2) automated methods for multilingual transcription to accurately assess language knowledge in bilingual children, and 3) video-calling platforms to create augmented spaces for communication for autistic and neurotypical adults. These examples demonstrate how technology can reach clients who are geographically inaccessible, offer services that typically take substantial time and expertise, and alter environments to provide communicatively relevant information. I will close by considering the wealth of opportunities at the intersection of language, communication, and disabilities, and invite others to brainstorm technology applications to address urgent needs in health care access, disproportionality in diagnosis, and diversity and inclusion in the workplace.
Bio: Yi Ting Huang is an Associate Professor in the Department of Hearing and Speech Sciences. She received her Ph.D. in Developmental Psychology at Harvard University and trained as a post-doctoral fellow in Cognitive Psychology at the University of North Carolina at Chapel Hill. Dr. Huang’s research focuses on how young language learners acquire the ability to coordinate linguistic representations during real-time comprehension. She explores this question by using eye-tracking methods to examine how the moment-to-moment changes that occur during processing influence the year-to-year changes that emerge during development. She has applied this approach to examine a variety of topics including word recognition, application of grammatical knowledge, and the generation of pragmatic inferences. Other interests include the relationship between language and concepts, language comprehension and production, and language development and literacy. She is currently a member of the Maryland Language Science Center and the Program in Neuroscience and Cognitive Science.
Join us in the lab or on Zoom (register here).
-
Date: May 11th, 2023 12:30 PM
Cognitive Architecture for Operant Conditioning
Abstract: In this talk, I will present my research on a cognitive architecture for operant conditioning. To lay the foundation for the discussion, I will start by reviewing existing definitions and tests for AI and propose a new definition and test for human-level artificial intelligence (HLAI). I claim that the essence of HLAI is the capability to learn from others' experiences via language. Based on this definition, I will propose a test built around a language acquisition task, along with a simulated environment for running the test practically. A next milestone toward programming HLAI is enabling operant conditioning, inspired by the ‘Skinner box’ experiment. To achieve this goal, I will explain two lessons we can learn from the biological brain. First, the working principle of the neocortex can be modeled as Modulated Heterarchical Prediction Memory (mHPM), in which autoregressive universal modules over sparse distributed memory (SDM) representations are connected in a heterarchical network and updated in a local, distributed way, in contrast to the current deep-learning trend of end-to-end optimization against a single objective function. The mHPM stores a multi-modal world model. Second, we need a non-homogeneous cognitive architecture for innate and learned behaviors, rather than today's homogeneous architectures. I will explain the role of innate components such as the hippocampus, reward system, hypothalamus, and amygdala, which use the world model in mHPM to enable episodic memory formation and rapid adaptation.
Bio: Dr. Deokgun Park is an assistant professor in the Computer Science and Engineering Department at the University of Texas at Arlington (UTA). He leads the Human Data Interaction Lab at UTA, which studies human-level artificial intelligence. Dr. Park earned his doctoral degree from the University of Maryland in 2018. He completed an M.S. in Interdisciplinary Engineering at Purdue University and an M.S. in Biomedical Engineering at Seoul National University, where he also obtained a B.S. in Electrical Engineering. He has worked at government and industry research labs and at startups, and his patents have been licensed to companies including Samsung Electronics.
Join us in the lab or on Zoom (register here).
Fall 2022
-
Date: Sep 8th, 2022 12:30 PM
Talk Title: Aphasia Profiles and Implications for Technology Use
Speaker: Kristin Slawson, Clinical Associate Professor, University of Maryland Hearing and Speech Clinic, and Michael Settles
Location: HBK 2105
Abstract: Conservative estimates suggest that 2.5 million people in the US have aphasia, yet few people have ever heard of the condition. Aphasia is a poorly understood, "invisible disability" that specifically impacts use of language in all forms. People with aphasia are more likely than other stroke survivors to experience social isolation, loss of independence, and significantly lower levels of employment. These immediate consequences have negative ripple effects on the mental and physical health outcomes of survivors and their family members. This talk aims to increase awareness of specific aphasia profiles in hopes of exploring how technology can be adapted to help people with aphasia maintain their prior level of work, social engagement, and independence to the greatest degree possible.
Bio: Kristin Slawson is a Speech-Language Pathologist and a Clinical Associate Professor in Hearing and Speech Sciences. As a brain injury specialist, she is particularly interested in the functional impact of brain injuries on cognitive-linguistic abilities and implications of these changes on maintenance of social connections and return to school and work.
Bio: Michael Settles received a 2022 ASHA Media Champion Award for his work advocating for aphasia awareness. He is featured in a special exhibit on aphasia and word finding at the Planet Word Museum in Washington, DC. He is an advocate for expanded use of technology to support the communication needs of people with aphasia.
Check out slides from Kristin's presentation here.
-
Date: Sep 15th, 2022 12:30 PM
Speaker: Cody Buntain, Assistant Professor, iSchool, UMD
Location: HBK 2105
Abstract: While originally developed to increase diversity in product recommendations and show individuals personalized content, recommendation systems have increasingly been criticized for their opacity, potential to radicalize vulnerable users, and incentivizing anti-social content. At the same time, studies have shown that modified recommendation systems can suppress anti-social content across the information ecosystem, and platforms are increasingly relying on such modifications for soft content-moderation interventions. These contradictions are difficult to reconcile as the underlying recommendation systems are often dynamic and commercially sensitive, making academic research on them difficult. This paper sheds light on these issues in the context of political news consumption by building several recommendation systems from first principles, populated with real-world engagement data from Twitter and Reddit. Using domain-level ideology measures, we simulate individuals' ideological trajectories through recommendations for news sources and examine whether standard recommendation approaches drive individuals to more partisan content and under what circumstances such radicalizing trajectories may emerge. We end with a discussion of personalization's impact in consuming political content, and implications for instrumenting deployed recommendation systems for anti-social effects.
Bio: Dr. Cody Buntain is an assistant professor in the College of Information Studies at the University of Maryland and a research affiliate for NYU's Center for Social Media and Politics, where he studies online information and social media. His work examines how people use online information spaces during crises and political unrest, with a focus on information quality, preventing manipulation, and enhancing resilience. His work in these areas has been covered by the New York Times, Washington Post, WIRED, and others. Prior to UMD, he was an assistant professor at the New Jersey Institute of Technology and a fellow at the Intelligence Community Postdoctoral Fellowship.
-
Date: Sep 22nd, 2022 12:30 PM
Speaker: Niklas Elmqvist, Professor, iSchool, UMD
Location: HBK 2105
Abstract: Mobile computing, virtual and augmented reality, and the internet of things (IoT) have transformed the way we interact with computers. Artificial intelligence and machine learning have unprecedented potential for amplifying human abilities. But how have these technologies impacted data analysis, and how will they cause data analysis to change in the future? In this talk, I will review my group's sustained efforts of going beyond the mouse and the keyboard into the "metaverse" of analytics: large-scale, distributed, ubiquitous, immersive, and increasingly mobile forms of data analytics augmented and amplified by AI/ML models. I will also present my vision for the fundamental theories, applications, design studies, technologies, and frameworks we will need to fulfill the vast potential of this exciting new area in the future.
Bio: Niklas Elmqvist (he/him/his) is a full professor in the iSchool (College of Information Studies) at University of Maryland, College Park. He received his Ph.D. in computer science in 2006 from Chalmers University in Gothenburg, Sweden. Prior to joining University of Maryland, he was an assistant professor of electrical and computer engineering at Purdue University in West Lafayette, IN. From 2016 to 2021, he served as the director of the Human-Computer Interaction Laboratory (HCIL) at University of Maryland, one of the oldest and most well-known HCI research labs in the United States. His research area is information visualization, human-computer interaction, and visual analytics. He is the recipient of an NSF CAREER award as well as best paper awards from the IEEE Information Visualization conference, the ACM CHI conference, the International Journal of Virtual Reality, and the ASME IDETC/CIE conference. He was papers co-chair for IEEE InfoVis 2016, 2017, and 2020, as well as a subcommittee chair for ACM CHI 2020 and 2021. He is also a past associate editor of IEEE Transactions on Visualization & Computer Graphics, as well as a current associate editor for the International Journal of Human-Computer Studies and the Information Visualization journal. In addition, he serves as series editor of the Springer Nature Synthesis Lectures on Visualization. His research has been funded by both federal agencies such as NSF, NIH, and DHS as well as by companies such as Google, NVIDIA, and Microsoft. He is the recipient of the Purdue Student Government Graduate Mentoring Award (2014), the Ruth and Joel Spira Outstanding Teacher Award (2012), and the Purdue ECE Chicago Alumni New Faculty award (2010). He was elevated to the rank of Distinguished Scientist of the ACM in 2018.
-
Date: Sep 29th, 2022 12:30 PM
Speaker: Oxana Loseva, Senior UX Researcher, Vanguard
Location: HBK 2105
Abstract: A detailed look at how Vanguard fosters inclusion of research participants with various disabilities. We will discuss how to build a panel of participants with different disabilities, the research those participants take part in at Vanguard, and the work a contractor with Down syndrome has done during her five-month tenure with Vanguard.
Bio: Oxana has an undergraduate degree in Service Design from the Savannah College of Art and Design. While working on her bachelor's degree, she began working with folks with disabilities and exploring the physical accessibility of spaces. She went on to earn a Master's in Design Research from Drexel University, where she focused on developing a game for people with cognitive disabilities. She works at Vanguard as a Sr. UX Researcher, and when she is not working on her game that teaches people with cognitive disabilities how to manage money, she spends time with her pup Pepper and takes her hiking around PA.
-
Date: Oct 6th, 2022 12:30 PM
Speaker: Vivian Motti, Assistant Professor, Department of Information Sciences and Technology, George Mason University
Location: HBK 2105
Abstract: Emotion regulation is an essential skill for young adults, impacting their prospects for employment, education and interpersonal relationships. For neurodiverse individuals, self-regulating their emotions is challenging. Thus, to provide them support, caregivers often offer individualized assistance. Despite being effective, such an approach is also limited. Wearables have a promising potential to address such limitations, helping individuals on demand, recognizing their affective state, and also suggesting coping strategies in a personalized, consistent and unobtrusive way. In this talk I present the results of a user-centered design project on assistive smartwatches for emotion regulation. We conducted interviews and applied questionnaires to formally characterize emotion regulation. We involved neurodiverse adults as well as parents, caregivers, and assistants as active participants in the project. After eliciting the application requirements, we developed an assistive smartwatch application to assist neurodiverse adults with emotion regulation. The app was implemented, tested and evaluated in field studies. I conclude this talk discussing the role of smartwatches to deliver regulation strategies, their benefits and limitations, as well as the users' perspectives about the technology.
Bio: Vivian Genaro Motti is an Assistant Professor in the Department of Information Sciences and Technology at George Mason University where she leads the Human-Centric Design Lab (HCD Lab). Her research focuses on Human Computer Interaction, Ubiquitous Computing, Assistive Wearables, and Usable Privacy. She is the principal investigator for a NIDILRR-funded project on assistive smartwatches for neurodiverse adults. Her research has been funded by NSF, TeachAccess, VRIF CCI, and 4-VA.
-
Date: Oct 13th, 2022 12:30 PM
Speaker: Carl Haynes-Magyar, Presidential Postdoctoral Fellow, Carnegie Mellon University
Location: HBK 2105
Abstract: Traditional introductory computer programming practice has included writing pseudocode, code-reading and tracing, and code-writing. These problem types are often time-intensive, frustrating, cognitively complex, in opposition to learners' self-beliefs, disengaging, and demotivating—and not much has changed in the last decade. Pseudocode is a plain language description of the steps in a program. Code-reading and tracing involve using paper and pencil or online tools such as PythonTutor to trace the execution of a program, and code-writing requires learners to write code from scratch. In contrast to these types of programming practice problems, mixed-up code (Parsons) problems require learners to place blocks of code in the correct order and sometimes require the correct indentation and/or selection between a distracter block and a correct code block. Parsons problems can increase the diversity of programmers who complete introductory computer programming courses by improving the efficiency with which they acquire knowledge and the quality of knowledge acquisition itself. This talk will feature experiments designed to investigate the problem-solving efficiency, cognitive load, pattern application and acquisition, and cognitive accessibility of adaptive Parsons problems. The results have implications for how to generate and sequence them.
Bio: Carl C. Haynes-Magyar is a Presidential Postdoctoral Fellow at Carnegie Mellon University's School of Computer Science in the Human–Computer Interaction Institute. Carl's master's work included evaluating curriculums based on their ability to develop a learner's proficiencies for assessment and assessing the relationship between perceived and actual learning outcomes during web search interaction. His doctoral work involved studying the design of learning analytics dashboards (LADs) to support learners' development of self-regulated learning (SRL) skills and investigating how people learn to program using interactive eBooks with adaptive mixed-up code (Parsons) problems. His postdoctoral work is a continued investigation into computing education that involves creating an online programming practice environment called Codespec. The goal is to scaffold the development of programming skills such as code reading and tracing, code writing, pattern comprehension, and pattern application across a gentle slope of different problem types. These types range from block-based programming problems to writing code from scratch. Codespec will support learners, instructors, and researchers by providing help-seeking features, generating multimodal learning analytics, and cultivating IDEAS: inclusion, diversity, equity, accessibility, sexual orientation and gender awareness. Carl has published several peer-reviewed articles at top venues such as the Conference on Human Factors in Computing Systems (CHI). He has taught as an instructor for courses on organizational behavior, cognitive and social psychology, human-computer interaction, learning analytics, educational data science, and data science ethics. He has been nominated for awards related to instruction and diversity, equity, and inclusion. He is a member of AAAI, ACM SIGCHI and SIGCSE, ALISE, and ISLS. Carl received his Ph.D. 
at the University of Michigan School of Information in 2022, and a master's degree in Library and Information Science with honors from Syracuse University's School of Information Studies (iSchool) in 2016.
-
Date: Oct 22nd, 2022 12:30 PM
Speaker: Doğa Doğan, Ph.D. candidate, MIT
Location: HBK 2105
Abstract: Ubiquitous computing requires that mobile and wearable devices are aware of our surroundings so as to augment the real world with contextual information that enriches our interactions with them. For this to work, the objects around us need to carry machine-readable tags, such as barcodes and RFID labels, that describe what they are and communicate this information to devices. While barcodes are inexpensive to produce, they are typically obtrusive, less durable, and less secure than other tags. Regardless of their type, most conventional tags are added to objects post hoc as they are not part of the original design.
I propose to replace this post-hoc augmentation process with tagging approaches that extract objects’ integrated hidden features and use them as machine-detectable tags to make the real world more informative. In this talk, I will introduce three projects: (1) InfraredTags are invisible fiducial markers embedded into 3D printed objects using infrared-transmitting filaments, and detected using cheap infrared cameras. (2) G-ID marks different 3D printed copies of the same object by using unique printing (“slicing”) settings, which result in unobtrusive, machine-detectable surface artifacts. (3) SensiCut is a smart laser cutting platform that leverages speckle imaging and deep learning to distinguish visually similar workshop materials. It adjusts designs based on the chosen material and warns users against hazardous ones. I will show how these methods assist users in creative tasks and enable new interactive applications for augmented reality (AR), object traceability, and user identification.
Bio: Doğa Doğan is a Ph.D. candidate at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and currently an intern at Adobe Research, where he builds novel identification and tagging techniques. At CSAIL, he works with Stefanie Mueller as part of the HCI Engineering Group. Doğa’s research focuses on the fabrication and detection of unobtrusive physical tags embedded into everyday objects and materials. His work has been nominated for best paper and demo awards at CHI, UIST, and ICRA. He is a past recipient of the Adobe Research Fellowship and Siebel Scholarship. Prior to MIT, Doğa conducted research in the Laboratory for Embedded Machines and Ubiquitous Robots at UCLA, and the Physical Intelligence Department of the Max Planck Institute for Intelligent Systems. His website: https://www.dogadogan.com/.
-
Date: Oct 27th, 2022 12:30 PM
Speaker: Sang Won Lee, Assistant Professor, CS, Virginia Tech
Location: HBK 2105
Abstract: This talk discusses ways to design computational systems that facilitate empathic communication and collaboration in various domains. My research agenda works toward a framework for understanding the components we need to consider when using technologies to foster empathy. I will introduce this framework and then focus on recent projects that position perspective-sharing as a prerequisite for empathy and address technical barriers to sharing perspectives in emerging technologies.
Bio: Sang Won Lee is an Assistant Professor in the Department of Computer Science at Virginia Tech. His research aims to understand how we can design interactive systems that facilitate empathy among people. His research vision of computer-mediated empathy comes from his computer music background, striving to bring music's expressive, collaborative, and empathic nature to computational systems. He creates interactive systems that can facilitate understanding by providing ways to share perspectives, preserve context in computer-mediated communication, and facilitate self-reflection. He has applied these approaches to various applications, including creative writing, informal learning, and programming.
-
Date: Nov 3rd, 2022 12:30 PM