University of Maryland

BBL Speaker Series

Join us each Thursday during the fall and spring semesters as we present interesting speakers on topics ranging from current areas of interest in HCI to software demos and reviews, study design, proposed research topics, and more. The BBL is the one hour a week when we all come together; it gives HCIL members the opportunity to build collaborations, increase awareness of each other’s activities, and generally just have a bit of fun together.

If you would like to give (or suggest) a future BBL talk, send email to HCIL Director Jessica Vitak (jvitak@umd.edu) with your proposed talk title, a brief abstract, and your bio.

Talks are held in the HCIL (HBK2105), but if you can’t make it in person, register for Zoom here.


 

Date: Thursday, January 25, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Strain: Myoelectric Sculpture Control with the Thalmic Myo Armband
Speakers: Alex Leitch, Co-Director of the MS in Human-Computer Interaction, University of Maryland, and Celia Chen, 2nd-year PhD student in the Information Studies program, University of Maryland
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Novel HCI devices are prone to planned obsolescence, which sometimes causes clever ideas and great sensor packages to be trashed before being thoroughly explored. This is a particular problem in closed-source hardware designed with strongly opinionated interfaces. The Myo armband by Thalmic Labs packed high-grade EMG sensors into a successful, compact wearable, before being discontinued in 2018. This talk covers how we repurposed a Myo armband to take advantage of its subtle muscle tracking to activate a pneumatic sculpture made from materials that are similarly regarded as junk in the making. This creative hacking approach is a promising way to thwart planned obsolescence, which is especially important when it comes to HCI and accessibility devices.

By interfacing a Myo to a Raspberry Pi 3B+, we enabled forearm muscles to trigger air valves and animate assemblies of latex, bamboo, and PLA. These forms were then programmed to maintain peristalsis, only to dramatically deflate and flop about in response to custom gesture control. Though the interactions aim more for surprise and delight than technical polish, this comedy of errors examines the latent expressiveness of both the obsolete Myo hardware and everyday trash. It also allowed us to explore which software systems, exactly, would be required to take further advantage of the Myo system in an open-hardware environment. By finding fresh ways to work with what’s on our shelves, we hope to squeeze more value from devices otherwise destined for landfills.

Bios:

Alex Leitch: Alex Leitch investigates human-computer interaction through a blend of critical scholarship, hands-on pedagogy, and interactive installation art. They currently serve as Co-Director of the MS in Human-Computer Interaction at the University of Maryland, where they have taught courses on programming, interaction design, and digital fabrication since 2019. Alex’s installations invite public engagement while probing the embedded values in sociotechnical systems. As an interaction designer, they analyze issues like gender representation in engineering spaces, legibility in code, and truth in algorithms. Currently pursuing a PhD in Information Studies, their research examines the labour consequences of browser infrastructure underpinning today’s dominant digital interfaces. Their praxis fuses empirical studies with speculative artifacts that reimagine society’s relationship to emerging tech.

For this project, Alex made the sculpture and debugged key elements of the software to ensure the serial port communication worked properly.

Celia Chen: Celia Chen is a 2nd year PhD student in the Information Studies program at the University of Maryland. They hold BS and MS degrees in Cognitive and Psychological Data Science from Rensselaer Polytechnic Institute, where they worked with the RPIrates on computational text analysis of political tweets and creating predictive models using WHO data to estimate early COVID-19 infection spread under the advisement of Dr. James Hendler. Concurrently advised by Dr. Alicia Walf, they wrote protocols for using Fitbits and other biometric sensors for human subjects research, gaining experience with wearable sensors and physiological data. Currently advised by Dr. Jen Golbeck, their personal research explores user identity construction and language use in online spaces. For this project, Celia handled the coding to enable communication between the Myo armband, Raspberry Pi, and pneumatic robot, drawing on their background in cognitive science and human sensor input.

Date: Thursday, February 1, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Becoming Teammates: Designing Assistive, Collaborative Machines
Speaker: Chien-Ming Huang, John C. Malone Assistant Professor, Department of Computer Science, Johns Hopkins University
Location: HBK 2105 and Zoom

Watch Here!


Abstract: The growing power of computing and AI promises a near-term future of human-machine teamwork. In this talk, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines that assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance, 2) methods for detecting, and broadly managing, errors in collaboration, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive, collaborative machines to support older adults aging in place.

Bio: Chien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI that assists and collaborates with people. He publishes in top-tier venues in HRI, HCI, and robotics including Science Robotics, HRI, CHI, and CSCW. His research has received media coverage from MIT Technology Review, Tech Insider, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/

Date: Thursday, February 8, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Student Lightning Talks
Location: HBK 2105 and Zoom

This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!

How do lightning talks work?
Typically, people give a 4-5 minute “presentation” — this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or “ask” that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.

Date: Thursday, February 15, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Rich and Intuitive Haptic Interaction for Future Computers
Speaker: Jaeyeon Lee, Assistant Professor, Computer Science and Engineering, UNIST
Location: HBK 2105 and Zoom

Abstract: Computers have become small yet powerful enough to be worn and to provide information to users throughout daily life. However, interacting with these computers is still challenging, primarily due to their small and rigid form factors. This is problematic since one of the main reasons for wearing computers is to access information comfortably from anywhere. This talk introduces studies enriching expressivity and natural interaction on small computers using the human sense of touch. It explores how wearable tactile displays can offer enhanced efficiency, comfort, and ease of use. Specifically, it compares design options for a tactile display on the backside of a smartwatch in terms of information transfer. Incorporating multiple distinct tactile sensations can increase the capacity for conveying information. Furthermore, non-contact tactile displays present an opportunity to enhance the wearability of these devices and deliver intuitive spatiotemporal patterns on the face. Finally, the findings from these studies have broader implications for future computing environments, including ultra-thin skin interfaces and technologies such as VR and AR.

Bio: Jaeyeon Lee is an Assistant Professor in Computer Science and Engineering at UNIST (Ulsan National Institute of Science and Technology). She earned her B.Eng. in Control Engineering from KwangWoon University, M.S. in Electrical Engineering, and Ph.D. in Computer Science from KAIST. Her research in Human-Computer Interaction focuses on physical user interfaces enabling rich and intuitive haptic interaction on future computers. Her research work has been published in leading venues in the field of HCI, including ACM CHI and ACM UIST. She has served on the Steering Committee, Organizing Committee, and Program Committee of HCI and Haptics research communities. She is a recipient of EECS Rising Stars in Korea, GradUS Global Scholarship, and NAVER Ph.D. Fellowship. 

Date: Thursday, February 22, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Safety in Algorithmically-Mediated Offline Introductions
Speaker: Veronica Rivera, Embedded Ethics Postdoctoral Scholar, Stanford University
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Algorithms increasingly mediate interactions that cross the digital-physical divide, creating both online and offline safety risks. In this talk, I will share my work on understanding safety in algorithmically-mediated offline introductions (AMOIs). In AMOIs, digital platforms use algorithms to match strangers for offline meetups (e.g., online dating, gig work). Thus, harm in AMOIs transcends digital boundaries into the physical world, raising questions about how to measure harm and who bears responsibility. In my first study, I examine how women gig workers’ experiences with safety are shaped by both individual risk factors and platform design. In my second study, I systematize harms and protective behaviors across gig workers and online daters and measure the prevalence of different harms and behaviors. Ultimately, my work shows that users who engage in seemingly disparate kinds of AMOIs actually share many safety concerns and protective behaviors.

Bio: Veronica Rivera is an Embedded Ethics Postdoctoral Scholar at Stanford University where she works with the Empirical Security Research Group, the Institute for Human-Centered AI, and the Center for Ethics in Society. Her research lies at the intersection of HCI and security. She studies the digital safety needs and challenges of marginalized and vulnerable populations. She has a PhD in computational media from the University of California, Santa Cruz and a BS in computer science and math from Harvey Mudd College. She was previously a visitor at the Max Planck Institute for Software Systems and at the Center for Privacy and Security of Marginalized and Vulnerable Populations at the University of Florida.

Date: Thursday, February 29, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Beyond Content: Understanding Volunteer Moderation in Social Media
Speaker: Yvette Wohn, Associate Professor of Informatics, New Jersey Institute of Technology; Director of the Social Interaction Lab
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Online harassment is a problem that we have still been unable to solve in the social media age of Web 2.0. Moreover, as we move deeper into Web 3.0, which includes 3D virtual worlds, moderation moves beyond content to include behavioral components such as embodied interactions. While much of the research in computing focuses on how to deal with bad content through technological advancement, this talk presents research from the past few years that focuses on the social complexities involved when communities, rather than companies, try to self-moderate.

Bio: Dr. Wohn (she/her) is an associate professor of Informatics at New Jersey Institute of Technology and director of the Social Interaction Lab (socialinteractionlab.com). Her research is in the area of Human Computer Interaction (HCI) where she studies the characteristics and consequences of social interactions in online environments such as virtual worlds and social media. Her main projects examine 1) moderation, online harassment, and the creation/maintenance of online safe spaces and 2) social exchange in digital economies & digital patronage (creator-supporter dynamics). Her work on moderation is supported by the National Science Foundation and Mozilla Foundation.

Date: Thursday, March 7, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Data Analytics for Health: Utilizing Large Social Media Data
Speaker: Albert Park, Assistant Professor in the Department of Software and Information Systems, College of Computing and Informatics, University of North Carolina-Charlotte
Location: HBK 2105 and Zoom

Abstract: Today, I want to discuss how we can leverage the vast amount of data from social media to gain insights into mental health and community engagement. I will start by exploring the impact of online depression communities. While initial concerns focused on the potential for negative emotion to spread, research reveals a surprising trend: members often experience positive changes in their emotional language use and language impairment over time. This suggests that these communities can hold unexpected benefits for mental well-being.
Building on this understanding, I’ll introduce a study examining how to encourage active participation in online health communities. We delve into the concept of homophily, which describes our natural tendency to connect with those who are similar to us; here, we look at language patterns. Our findings across diverse online communities show that shared vocabulary significantly predicts future interaction among members. This holds valuable implications for fostering deeper engagement and meaningful peer support by harnessing the power of shared language.

Bio: I am Albert Park, currently an Assistant Professor in the Department of Software and Information Systems within the College of Computing and Informatics at the University of North Carolina-Charlotte. Previously, I was a National Institutes of Health-National Library of Medicine Post-Doctoral Fellow at the University of Utah. I hold bachelor’s and master’s degrees in Computer Science from Virginia Tech and a Ph.D. in Biomedical and Health Informatics from the University of Washington (2015). My research focuses on the analysis of social interactions and social networks using modern data analysis, and on the development of novel computational approaches to study social interactions and relationships in the context of health.

Date: Thursday, March 14, 2024
Time: 12:30pm-1:30pm ET

Talk Title: AI + Agency + AAC: Identifying Challenges and Opportunities for Design
Speaker: Stephanie Valencia, Assistant Professor, College of Information Studies, University of Maryland
Location: HBK 2105 and Zoom

Abstract: Agency and communication are essential to our personal development; we advance our individual goals by communicating them. Nonetheless, agency is not a fixed property. Many individuals who use speech-generating devices to communicate encounter social constraints and barriers that reduce their agency in conversation, including how much they can say, how they can say it, and when they can say it. In this BBL talk, I will argue that using agency as a design framework can help us generate accessible communication experiences and center the perspectives of people with disabilities in the design process of new technology. Through empirical studies and co-design with people with disabilities, I explore how different technology materials can support their agency in conversation. In doing so, I will present accessible design methods as well as new design guidelines for augmented communication using automated transcription, physical artefacts, and AI-based generative language tools.

Bio: Stephanie Valencia, PhD, is an assistant professor at the College of Information Studies at the University of Maryland. Dr. Valencia is a Human-Computer Interaction researcher who builds accessible technologies that are grounded in behavioral theory, co-designed with people with disabilities, and deployed to users for impact. Her research focuses on designing for accessibility and conversational agency when using assistive technologies such as augmentative and alternative communication (AAC) devices that support communication for users with motor and speech disabilities. Dr. Valencia uses participatory design to explore how different design materials such as AI and non-anthropomorphic robots can be used to create agency-increasing AAC systems and builds and deploys these systems to evaluate their impact. Dr. Valencia received her PhD and MS in Human-Computer Interaction at Carnegie Mellon University and a BS in Biomedical Engineering from EIA and CES university in Colombia. She has been awarded a Postgraduate Fellowship at the Yale School of Medicine, the MIT Technology Review Top 35 Innovators Under 35 Award in Latin America, and the Ada Lovelace fellowship from the Open Source Hardware Association.

Date: Thursday, March 28, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Is the ‘African’ a Standing Reserve in Global AI Pipeline? Yes!
Speaker: Muhammad Adamu
Location: HBK 2105 and Zoom

Abstract: What is this thing AI? Is it the possible mimicry of the technical, social, or cultural intelligence of the human or the actuality of superintelligence? But wait, which Human, the Hegelian Man-as-Human or the Wynterian Beyond MAN, towards the Human? We don’t know! This thing AI that was presented to us during the short-lived summers and long cold winters will solve the “common sense” problem, i.e., model human knowledge of the everyday, what Heideggerian phenomenology calls “Being-in-the-World”. 

In this talk, I will introduce a particular dimension of the Heideggerian critique of AI: enframing [Gestell] and standing reserve [Bestand]. In particular, I will adopt the concept of standing reserve to articulate a particular relation of the African citizen – a user, a client, a producer, or a labourer – within a largely Eurocentric AI landscape, and attempt to demonstrate how the existing institutional conception of the African as an objectifiable subject that can be resourced for capital will inform (and reform) the African orientation of the future of AI. In short, I will argue that the African – just as Kalluri and colleagues’ (2023) “Surveillance AI pipeline” paper demonstrated that humans are conceived as entities under the umbrella term of “objects” or “regions of interest” in computer vision research – is historically and continuously co-opted as standing reserve for the total mobilization of technocratic ideals: to be catalogued, computed, and used as a resource that is disposable and replaceable.

Bio: Muhammad Adamu is a Senior Research Associate for the ImaginationLancaster Digital Good SIG at Lancaster University, UK. Muhammad is strongly associated with the “African perspective” in Human-Computer Interaction, and more recently with the social futures of artificial intelligence. His current interdisciplinary research focuses on establishing the themes of “Good AI societies” and “AI for Good” in Africa and has been funded by the Tertiary Education Trust Fund (TETFUND) and the Petroleum Technology Development Fund (PTDF), Nigeria, and UKRI Research England.

Date: Thursday, April 4, 2024
Time: 12:30pm-1:30pm ET

Talk Title: TBD
Speaker: Mako Hill
Location: HBK 2105 and Zoom

Date: Thursday, April 11, 2024
Time: 12:30pm-1:30pm ET

Talk Title: TBD
Speaker: Divya Ramesh
Location: HBK 2105 and Zoom

Date: Thursday, April 18, 2024
Time: 12:30pm-1:30pm ET

Talk Title: TBD
Speaker: Alex Wen
Location: HBK 2105 and Zoom

Date: Thursday, April 25, 2024
Time: 12:30pm-1:30pm ET

Talk Title: Envisioning Identity: The Social Production of Computer Vision
Speaker: Morgan Klaus Scheuerman, Postdoctoral Associate, Information Science, University of Colorado Boulder
Location: HBK 2105 and Zoom

Abstract: Computer vision technologies have been increasingly scrutinized in recent years for their propensity to cause harm. Broadly, the harms of computer vision focus on demographic biases (favoring one group over another) and categorical injustices (through erasure, stereotyping, or problematic labels). Prior work has focused on both uncovering these harms and mitigating them, through, for example, better dataset collection practices and guidelines for more contextual data labeling. There is opportunity to further understand how human identity is embedded into computer vision not only across these artifacts, but also across the network of human workers who shape computer vision systems. Further, given computer vision is designed by humans, there is ample opportunity to understand how human positionality influences the outcomes of computer vision systems. In this talk, I present work on how identity is implemented in computer vision, from how identity is represented in models and datasets to how different worker positionalities influence the development process. Specifically, I showcase how representations of gender and race in computer vision are exclusionary, and represent problematic histories present in colonialist worldviews. I also highlight how traditional tech workers enact a positional power over data workers in the global south. Through these findings, I demonstrate how identity in computer vision moves from something more open, contextual, and exploratory to a completely closed, binary and prescriptive classification.

Bio: Morgan Klaus Scheuerman is a Postdoctoral Associate in Information Science at University of Colorado Boulder and a 2021 MSR Research Fellow. His research focuses on the intersection of technical infrastructure and marginalized identities. In particular, he examines how gender and race characteristics are embedded into algorithmic infrastructures and how those permeations influence the entire system. His work has received multiple best paper awards and honorable mentions at CHI and CSCW. He earned his MS degree in Human-Centered Computing from University of Maryland Baltimore County and his BA in Communication & Media Studies (Minor Gender & Sexuality Studies) from Goucher College.

Date: Thursday, May 2, 2024
Time: 12:30pm-1:30pm ET

Talk Title: TBD
Speaker: Merrie Morris
Location: HBK 2105 and Zoom

Date: Thursday, May 9, 2024
Time: 12:30pm-1:30pm ET

Talk Title: TBD
Speaker: Marvin Grabowski
Location: HBK 2105 and Zoom

Past Talks

Date: Thursday, December 7, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Connecting Realities for Fluid Computer-Mediated Communication
Speaker: Seongkook Heo, Assistant Professor, CS, University of Virginia
Location: HBK 2105 and Zoom
Watch Here! | Slides Here!

Abstract: Computers are more deeply integrated into our daily lives than ever before, and recent advancements in ML and AI technologies enable computers to comprehend the real world. However, using such capabilities for daily tasks still induces friction because of inefficient interactions with them.

In this talk, I will share my group’s research on how we can better connect the physical and virtual worlds through the design and development of interactive systems. First, I will discuss how we can bring objects and interactions of the physical world into the virtual world to make virtual communication rich and frictionless. In many computer-mediated meetings, we not only share our faces and voices but also physical objects. We developed a remote meeting system that supports the instant conversion of physical objects into virtual objects to allow efficient sharing and manipulation of objects during the conversation.

Second, I will share how we can physicalize computation results into physical actions. Many projects and applications have demonstrated the use of AI in assisting users with visual impairments. However, computers usually only provide guidance feedback to the user and leave the interpretation of the feedback and the execution to the user, which can be cognitively heavy tasks. We suggested automated hand-based spatial guidance to bridge the gap between guidance and execution, allowing visually impaired users to move their hands between two points automatically. Finally, I will discuss the implications and remaining challenges in bridging the two realities.

Bio: Seongkook Heo is an assistant professor in the Department of Computer Science at the University of Virginia. He has been working on Human-Computer Interaction (HCI) research, focusing on bridging the gap between physical and virtual worlds to make computers better support rich and nuanced human interactions by designing novel interactive systems and developing sensing and feedback technologies. His research has been published at top HCI venues, including CHI, UIST, and CSCW, and recognized by Best Paper and Poster Awards at CHI, MobileHCI, and IEEE VR. He is also the recipient of the Engineering Research Innovation Award at the University of Virginia and the Meta Research Award. He received his Ph.D. at KAIST and worked at the University of Toronto as a postdoctoral researcher before joining the University of Virginia.

Date: Thursday, November 30, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Fostering Digital Inclusion: Co-Design with Racial Minority, Low-Income Older Adults for Smart Speaker Applications to Enhance Social Connections and Well-being

Speaker: Dr. Jane Chung, Associate Professor, Virginia Commonwealth University School of Nursing
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Older adult residents of low-income housing are at a high risk of unmanaged health conditions, loneliness, and limited healthcare access. Smart speakers have the potential to improve social connections and well-being among older adult residents. We conducted an iterative, user-centered design study with primarily African American older adults who lived alone in low-income housing to develop low-fidelity prototypes of smart speaker applications for wellness and social connections. Focus groups were held to elicit feedback about challenges with maintaining wellness and attitudes towards smart speakers. Through design workshops, participants identified several smart speaker functionalities perceived as necessary for improving wellness and social connectedness. Then, several low-fidelity prototypes and use scenarios were developed in the following categories: wellness check-ins, befriending the virtual agent, community involvement, and mood detection. We demonstrate how smart speakers can provide a tool for residents’ wellness and increase access to applications that provide a virtual space for social engagement. This presentation will also highlight strategies for addressing digital health inequities among socially vulnerable older adults. The goal is to enhance technology proficiency, reduce fear, and ultimately foster the acceptance of essential technologies.

Bio: Dr. Jane Chung is an Associate Professor at Virginia Commonwealth University School of Nursing. She is a nurse scientist with special emphasis on aging and technology research. Her research program has two foci: 1) advancing the methods for functional health monitoring and risk detection among older adults using innovative sensor technologies and 2) improving social connectedness and well-being in socially vulnerable older adults based on advances in data science and digital technologies including novel machine learning algorithms. She currently leads two NIH-funded studies – R01 project to identify digital biomarkers of mobility that are predictive of cognitive decline in community-dwelling older adults, and R21 project where her team is developing a smart speaker-based system for automatic loneliness assessment in older adults. Recently, she has been selected as a fellow for the Betty Irene Moore Fellowship for Nurse Leaders and Innovators, and in this fellowship program, she is working on a smart speaker-based intervention designed to assist low-income older adults in managing chronic conditions and daily activities more effectively.

Date: Thursday, November 16, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Student Lightning Talks
Location: HBK 2105 and Zoom

This BBL will be dedicated to four student lightning talks. We are excited to hear what they are working on!

How do lightning talks work?
Typically, people give a 4-5 minute “presentation” — this can be very informal or involve slides. The presentation gives some background on your project and then introduces a specific question or “ask” that you want feedback on. Then we have ~15 minutes of conversation with attendees about your question/topic. This is a great opportunity for students to get feedback on research ideas or projects in various stages.

Date: Thursday, November 9, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Storytelling Health Informatics: Supporting Collective Efforts Towards Health Equity
Speaker: Dr. Herman Saksono, Assistant Professor, Health Sciences & CS, Northeastern University
Location: HBK 2105 and Zoom

Watch Here!

Abstract: We live in a storied life. Stories from people at present and in the past are guiding our actions in the future. Although this narrative mode of knowing complements the pragmatic mode, the pragmatic mode of knowing is the only ubiquitously supported mode in personal health informatics systems. In this talk, I will present my research on personal health informatics that uses storytelling to support health behavior in marginalized communities. These studies examined how storytelling technologies can amplify social connections and knowledge within the family and neighbors. The use of stories socially is a departure from health technologies that are often individually focused. Technologies that portray health solely as an individual’s responsibility could widen health disparities because marginalized communities face numerous health barriers due to systemic inequities. Storytelling health informatics could lessen this burden by supporting health behaviors as collective community efforts.

Bio: Dr. Herman Saksono is an Assistant Professor at Northeastern University with a joint appointment at the Bouvé College of Health Sciences and the Khoury College of Computer Sciences. Previously, he was a postdoctoral research fellow at the Center for Research on Computation and Society at Harvard University. He completed his Ph.D. in Computer Science at Northeastern University and was a Fulbright scholarship recipient.

Herman’s interdisciplinary research contributions are in Personal Health Informatics, Human-Computer Interaction, and Digital Health Equity. His research investigates how digital tools can catalyze social interactions that encourage positive health behaviors, thus facilitating collective efforts toward health equity. He conducts the entire human-centered design process by designing, building, and evaluating innovative health technologies in collaboration with local community partners. Herman published his work in ACM CHI and CSCW where he received honorable mentions for Best Paper awards.

Date: Thursday, November 2, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Community-based Participatory Design Investigating Emerging Technologies
Speaker: Foad Hamidi, Assistant Professor in Information Systems at the University of Maryland, Baltimore County (UMBC)
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Community-based participatory design (PD) offers inclusive and exciting principles and methods for enabling mutual learning among diverse interested parties. As PD moves from the workplace to other domains, such as Do-it-Yourself (DIY) design spaces, informal learning contexts, and domestic and home settings, we need to rethink and redefine what it means to do PD and what outcomes can move us towards desired futures. In this talk, I draw on several of my recent projects where I use PD to investigate and interrogate emerging technologies, such as DIY assistive technologies and living media interfaces (LMIs).

Bio: Foad Hamidi is an Assistant Professor in Information Systems at the University of Maryland, Baltimore County (UMBC). His research focuses on several areas within Human-Computer Interaction (HCI), including Living Media Interfaces, Participatory Design, and DIY assistive technology. He conducts transdisciplinary community-engaged research and regularly collaborates with community partners. At UMBC, he directs the DesigningpARticipatoryfuturEs (DARE) lab and the Interactive Systems Research Center (ISRC). He has a PhD in Computer Science from York University, Toronto.

Date: Thursday, October 26, 2023
Time: 12:30pm-1:30pm ET

Talk Title: AI to Support Everyday Life for People with Dementia
Speaker: Dr. Emma Dixon, Assistant Professor, Clemson University
Location: HBK 2105 and Zoom

Watch Here! | Slides Here!

Abstract: We are seeing new AI systems for people with dementia, such as brain games that detect and diagnose cognitive impairment and smart-home systems that monitor the daily activities of people with dementia while caregivers are away. Although these are important areas of research, there are open opportunities to extend the use of AI to support individuals with dementia in many aspects of everyday life beyond diagnosis and monitoring. In this talk, Emma Dixon will briefly discuss her work in the area of AI for people experiencing age-related cognitive changes. The first study examines the technology accessibility needs of individuals with dementia, uncovering ways AI may be used to provide personalized solutions. The second study explores the ways tech-savvy people with dementia configure commercially available AI systems to support their everyday activities. Finally, the third study focuses on the design of future applications of AI to support the everyday life of people with dementia.

Bio: Dr. Emma Dixon is an Assistant Professor in Human-Centered Computing with a joint appointment in Industrial Engineering at Clemson University. Her research investigates technology use by neurodivergent individuals and people living with neurodegenerative conditions. In doing so, her research agenda is situated at the intersection of health information technology and cognitive accessibility research. Due to the complexity of this space, she takes a mixed-methods approach, using qualitative methods to ground her work deeply in a situated understanding of people’s experiences and quantitative methods to test the usability of emerging technologies. She earned her undergraduate degree in Industrial Engineering at Clemson University and her PhD in Information Studies at the University of Maryland, College Park. Her research has received a Dean’s Award for Outstanding iSchool Doctoral Paper, as well as a Best Paper Nomination and Honorable Mention awards at the ASSETS and CSCW conferences. She has published her work in CHI, CSCW, ASSETS, JMIR Mental Health, Applied Ergonomics, and TACCESS. Her dissertation work was supported by the NSF Graduate Research Fellowship.

Date: Thursday, October 19, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Navigating the New Normal: An Exploration of Face-to-Face Design Meetings in the Era of Remote Work
Speaker: Karen Holtzblatt
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Advancements in technology, the globalization of companies, and a growing awareness of environmental issues have catalyzed a shift in work cultures, transforming traditional face-to-face meetings into online ones. The COVID-19 pandemic further accelerated this transition, establishing videoconferencing as the prevailing mode of professional interaction. But now companies are asking workers to come back to the office at least some of the time, citing better collaboration, information sharing, and coaching for early-career folks. But is that true, and what does it really mean? To find out, we conducted 11 deep-dive interviews, primarily with HCI professionals, to understand their experience of working in person versus remotely or hybrid. HCI professionals often find themselves organizing, leading, facilitating, and participating in complex interactive meetings of various kinds: data synthesis, ideation, brainstorming, design review with whiteboarding, roadmapping, and project kickoffs. Our work complements recent survey-based research on Return-to-Work and gives a deeper understanding of what is going on. We sought to gain insights into these types of meetings and interactions to understand participants’ experiences and what works and what doesn’t.
We hope these findings will help guide both HCI professionals and companies as they choose when to be in person and how best to run hybrid and remote meetings. We spoke with both senior people and early-career professionals. Our insights are also set against the backdrop of last year’s research into the experience of remote working during the pandemic and related literature. The presentation will tell stories of our experiences and explicate what drives people to bring others together for these complex meetings and what affects the success of these meetings in any context. We will also describe the impact of the social dimension of working together. We discuss the need for a shared understanding, ensuring engagement, managing the meeting, and the powerful role of nonverbal communication, as well as the need and desire for connection both for its own sake and for the sake of the work and career.

Bio: Karen Holtzblatt is a thought leader, industry speaker, and author. A recognized innovator in requirements and design, Karen has developed transformative design approaches throughout her career. She introduced Contextual Inquiry and Contextual Design, the industry standard for understanding the customer and organizing that data to drive innovative product and service concepts. Her newest book, Contextual Design 2nd Edition: Design for Life, is used by companies and universities worldwide. Karen co-founded InContext Design in 1992 with Hugh Beyer to use Contextual Design techniques to coach product teams and deliver market data and design solutions to businesses across scores of industries in many countries. As CEO of InContext, Karen has worked with product, application, and design teams for over 30 years. Karen is also the driving force behind the Women in Tech Retention Project housed at witops.org. WITops research explores why women in technology professions leave the field and creates tested interventions to help women thrive and succeed. Her new book with Nicola Marsden, Retaining Women in Tech: Shifting the Paradigm, shares this work. Karen consults with companies to help them understand their diverse teams and improve retention, team cohesion, and equal participation by all. As a member of ACM SIGCHI (the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction), Karen was awarded membership in the CHI Academy, a gathering of significant contributors to the field, and received the first Lifetime Award for Practice for her impact on the field. Karen has also been an Adjunct Research Scientist at the University of Maryland’s iSchool (College of Information Studies). Karen has worked with many universities to help design curricula for training user experience professionals. She has more than 30 years of teaching experience professionally, at conferences, and in university settings.
She holds a doctorate in applied psychology from the University of Toronto.

Date: Thursday, October 12, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Towards a Science of Human-AI Decision Making: Empirical Understandings, Computational Models, and Intervention Designs
Speaker: Ming Yin, Assistant Professor, Department of Computer Science, Purdue University
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the usage of AI-based decision aids in human decision-making processes has resulted in a new paradigm of human-AI decision making—that is, the AI-based decision aid provides a decision recommendation to the human decision makers, while humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans and AI collaborate with each other in these decision-making processes, and how to promote the effectiveness of these collaborations. In this talk, I’ll discuss a few research projects that my group carries out on empirically understanding how humans trust the AI model via human-subject experiments, quantitatively modeling humans’ adoption of AI recommendations, and designing interventions to influence the human-AI collaboration outcomes (e.g., improve human-AI joint decision-making performance).
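As a purely hypothetical sketch of what quantitatively modeling humans’ adoption of AI recommendations can look like in its simplest form (the adoption rule, labels, and parameters below are illustrative assumptions, not the speaker’s actual models):

```python
import random

# Toy model of human-AI decision making: the AI recommends a label,
# and the human adopts it with a probability tied to the AI's stated
# confidence; otherwise the human keeps their own judgment.
def human_decision(ai_label, ai_confidence, own_label, rng):
    adopt_prob = ai_confidence  # simplest possible adoption model
    return ai_label if rng.random() < adopt_prob else own_label

rng = random.Random(42)
trials = 10_000
adopted = sum(
    human_decision("A", 0.7, "B", rng) == "A" for _ in range(trials)
)
print(adopted / trials)  # adoption rate tracks the AI's confidence (~0.7)
```

Empirical experiments like those described above can then ask how this adoption curve shifts with interventions such as explanations or confidence displays.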

Bio: Ming Yin is an Assistant Professor in the Department of Computer Science, Purdue University. Her current research interests include human-AI interaction, crowdsourcing and human computation, and computational social sciences. She completed her Ph.D. in Computer Science at Harvard University and received her bachelor’s degree from Tsinghua University. Ming was the Conference Co-Chair of AAAI HCOMP 2022. Her work was recognized with multiple best paper (CHI 2022, CSCW 2022, HCOMP 2020) and best paper honorable mention awards (CHI 2019, CHI 2016).

Date: Thursday, October 5, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Mastering the Paper Review Process
Location: HBK 2105 and Zoom

Abstract: Even if you didn’t submit a paper to this year’s CHI conference, if you’re doing research, you probably know something about the review process. For most journals and conferences, submitted papers are read by 2-4 anonymous reviewers, who provide written feedback on the strengths and weaknesses of the paper and decide whether a paper should be accepted, rejected, or revised. But what should go into the review process? And how should you respond to reviews? In this session, we’ll discuss tips and tricks for being an effective reviewer, how to provide constructive criticism, and how to respond to reviewer comments. Bring your questions and experiences with reviewing, and learn more about the ups and downs of academic publishing.

Date: Thursday, September 28, 2023
Time: 12:30pm-1:30pm ET

Talk Title: Successful Aging in the Digital Era
Speaker: Dr. Madina Khamzina, postdoctoral associate, Department of Family Science, School of Public Health, University of Maryland
Location: HBK 2105 and Zoom

Watch Here! | Slides Here!

Abstract: This talk discusses the opportunities and challenges of technology to support successful aging. The population of people aged 65 and older is growing faster than any other age group worldwide. While people are living longer, it’s crucial to ask whether those additional years are being lived healthier and happier. Successful aging has become a central priority at both societal and individual health levels. Technology holds the promise to significantly contribute to successful aging in various ways: for example, keeping people physically active, enabling independent living through fall detection and smart home technology, aiding in the early detection and management of diseases, and helping maintain social connections to reduce isolation. Keeping in mind that aging in the digital era presents its own set of challenges, we need to ensure that technologies are inclusive and accessible to everyone regardless of age. Addressing older adults’ specific needs and characteristics is crucial in the endeavor to reap the benefits of technology for successful aging.

Bio: Madina earned her Ph.D. from the University of Illinois at Urbana-Champaign in December 2022. She is currently a postdoctoral associate at the School of Public Health and is primarily focused on work with the University of Maryland Extension Services. While working in the Human Factors and Aging Lab in Illinois, she became passionate about the role of technology in supporting successful aging. She is the principal investigator for a research project with University of Maryland Extension that aims to assess the needs and challenges of broadband internet and technology adoption among older adults in Maryland.

Date: Thursday, September 21, 2023
Time: 12:30pm-1:30pm ET
Location: HBK 2105

This week we’ll do another round of our “research speed dating” experiment! If it’s anything like the last iteration in the Spring, it’ll be a fun time to hear from each other about what we’re brainstorming and working on, and to give feedback in a lightweight, informal, low-stakes setup!

Date: Thursday, September 14, 2023
Time: 12:30pm-1:30pm ET
Location: HBK 2105

With the CHI deadline looming, we’ll use this week’s brown bag time slot for folks to take a break from writing to relax (a little), enjoy some pizza with colleagues, and get ready for the final push. So if you’re on campus, stop by HBK2105 to get a slice and chat with other HCIL members.

Date: Thursday, September 7, 2023
Time: 12:30pm-1:30pm ET

Talk Title: The Road Less Taken: Pathways to Ethical and Responsible Technologies
Speaker: Dr. Susan Winter, Associate Dean for Research, College of Information Studies, the University of Maryland
Location: HBK 2105 and Zoom

Watch Here!

Abstract: Technology is no longer just about technology – now it is about living. So, how do we create ethical technology that supports a better life and a better society? Technology must become truly “human-centered,” not just “human-aware” or “human-adjacent.” Diverse users and advocacy groups must become equal partners in initial co-design and in the continual assessment and management of information systems with human, social, physical, and technical components. But we cannot get there without radically transforming how we think about, develop, and use technologies. In this talk, we explore new models for digital humanism and discuss effective tools and techniques for designing, building, and maintaining sociotechnical systems that are built to be, and remain, continuously ethical, responsible, and human-centered.

Bio: Dr. Susan Winter, Associate Dean for Research, College of Information Studies, the University of Maryland. Dr. Winter studies the co-evolution of technology and work practices, and the organization of work. She has recently focused on ethical issues surrounding civic technologies and smart cities, the social and organizational challenges of data reuse, and collaboration among information workers and scientists acting within highly institutionalized sociotechnical systems. Her work has been supported by the U.S. National Science Foundation and by the Institute of Museum and Library Services. She was previously a Science Advisor in the Directorate for Social Behavioral and Economic Sciences, a Program Director, and Acting Deputy Director of the Office of Cyberinfrastructure at the National Science Foundation supporting distributed, interdisciplinary scientific collaboration for complex data-driven and computational science. She received her PhD from the University of Arizona, her MA from the Claremont Graduate University, and her BA from the University of California, Berkeley.

There are hundreds of productivity apps and tools to help you get work done – far too many for any one person to go through and figure out what works best for them. In this week’s BBL, we want you to share the tools, apps, and tips you use in your research, classwork, and writing. How do you stay organized? What helps you be productive? What didn’t work for you? We’ll talk about what people like and don’t, and run some quick demos during this BBL.

Fill out this form to share what you use.

Join us in the lab (HBK-2105) or on Zoom to hear about cool tools and to share the ones you use!

Date: Thursday, December 1, 2022
Time: 12:30pm-1:30pm ET

Talk Title: (Some) things I worry about in HCI/CSCW research
Speaker: Dan Cosley, Program Officer, National Science Foundation
Location: HBK 2105

Abstract: In this talk, rather than report out on some research that I’m involved with, I plan to do some meta-reflection on things that I worry reduce the contribution and impact of research in HCI, CSCW, and related areas. I tentatively plan to focus on four main issues, based both on work I’ve been involved with myself and on other studies I’ve seen:

  • Our Methods Make Us Dumb (Other People Know Things)
  • Whither the Artifact? (Goldilocks and the Three Stances)
  • Things Change (Tweet, Tweet… Musk!); and
  • Failure To Generalize (A Grounded Theory of X?)

I haven’t given a talk like this before, and many of the issues have already been observed in some form by people smarter than me, but I think there’s value in bringing them together and hope that talking about this will be useful for both HCI practice and HCI research. I plan to have the talk itself run a little short so we can have a more interactive discussion, so feel free to bring a few of your own worries along to share.

Bio: Dan Cosley is a permanent program officer at NSF as of September 2020, homed in the Human-Centered Computing program in CISE and associated with a number of other solicitations, with a mostly up-to-date list at https://www.nsf.gov/staff/staff_bio.jsp?lan=dcosley. Before that, he was an associate professor at Cornell in the Information Science department, doing both design-based and analytic research in the spaces of Human-Computer Interaction and Computer-Supported Cooperative Work. This includes work around designing user interfaces for recommender systems; modeling human information behaviors from computational traces; supporting crowdwork and online collaboration, and studying the power relationships involved; systems and models connecting social media, identity, and memory; and various other topics that he helped students work on along the way.

 

Date: Thursday, November 17, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Participatory approaches to AI in digital health and well-being
Speaker: Lauren Wilcox, Senior Staff Research Scientist, Google
Location: HBK 2105

Abstract: Advances in computing technology continue to offer us new insights about our health and well-being. As mutually reinforcing trends make the use of wearable and mobile devices routine, we now collect personal, health-related data at an unprecedented scale. Meanwhile, the use of deep-learning-based health screening technologies changes relationships between caregivers and care recipients, with multitudinous implications for equity, privacy, safety, and trust. How can researchers take inclusive and responsible approaches to envisioning solutions, training data, and deploying AI/ML-driven solutions? Who should be involved in decisions about how to use ML/AI in digital health and well-being solutions, and even about which solutions matter in the first place? In this talk, I will discuss participatory approaches to designing digital health and well-being technologies with patients, family members, and clinicians. Starting with field studies in clinics exploring how people navigated use of a deployed diagnostic AI system, and moving on to examples of responsible AI practices, I will discuss participatory approaches and their importance throughout the technology design, development, and evaluation process.

Bio: Lauren Wilcox, PhD, is a Senior Staff Research Scientist in Responsible AI and Human-Centered Computing in Google Research. She brings sixteen years of experience conducting human-centered computing research in service of human health and well-being. Previously at Google Health, Wilcox led initiatives to align AI advancements in healthcare with the needs of clinicians, patients, and their family members. She also holds an Adjunct Associate Professor position in Georgia Tech’s School of Interactive Computing. Wilcox was an inaugural member of the ACM Future of Computing Academy. She frequently serves on the organizing and technical program committees for premier conferences in the field (e.g., ACM CHI). Wilcox received her PhD in Computer Science from Columbia University in 2013.

 

Date: Thursday, November 10, 2022
Time: 12:30pm-1:30pm ET

Talk Title: How and what kind of research we do in the Small Artifacts Lab (Hint: Design, Wearable, Fabrication, and Accessibility)
Speaker: Huaishu Peng, Assistant Professor, CS, UMD
Location: HBK 2105

Abstract: In this talk, I will give a brief overview of the HCI research we are conducting (or planning) in the Small Artifacts Lab. I will showcase several recent works concerning various HCI topics, e.g., design, fabrication, wearable computing, and accessibility, all from a technical perspective. As examples, I will discuss how we designed a small wearable robot that can relocate itself across a user’s full body instead of staying in only one area of interest (e.g., a smartwatch on the wrist), and how that design opens new opportunities in both research and art; I will also talk about how we created a tangible artifact that supports blind developers in creating the graphical layout of webpages on their own. Towards the end of the talk, I will leave time to discuss with the audience how technical innovation can drive HCI research.

Bio: Huaishu Peng is an Assistant Professor in the Computer Science department at the University of Maryland, College Park. He aims to advance interactive technologies by designing, prototyping, and evaluating novel artifacts that are personal, hands-on, and often small in form factor. He is interested in the methods of building these personal artifacts (through design and interactive fabrication), the scenarios of using them (in mixed reality), and the users who can benefit from them (with assistive and enabling technology). His work has been published in CHI, UIST, and SIGGRAPH and has received Best Paper nominations. His work has also been featured in media such as Wired, MIT Technology Review, Techcrunch, and Gizmodo.

 

Date: Thursday, November 3, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Designing Health Technology for the Intersection of Evidence and Everyday Life
Speaker: Elena Agapie, Assistant Professor, Informatics, UC-Irvine
Location: HBK 2105

Abstract: Pursuing healthy behaviors is a complex, long-term process that is difficult to maintain. Many technologies promise to support people in pursuing health goals, yet many fail to account for people’s everyday needs or to incorporate evidence-based strategies. In this talk, I discuss the challenges researchers encounter in designing technologies that use evidence-driven health techniques while accounting for people’s everyday lives. I use human-centered design methods to create novel systems that address key challenges people encounter in working on health goals: starting new behaviors while accounting for the complexities of everyday life, and engaging with health goals long term. I discuss how technology can better support clinicians and peers in providing evidence-driven, tailored support to clients for physical activity and mental health therapy.

Bio: Elena Agapie is an Assistant Professor in the Department of Informatics at the University of California, Irvine. She studies, designs, and builds technology to support people in pursuing positive health behaviors by drawing on people’s everyday experiences and evidence-based interventions. Agapie’s work has been published and received awards in top HCI venues including CHI, CSCW, and HCOMP. She received her Ph.D. in Human Centered Design and Engineering from the University of Washington and a master’s degree in Computer Science from Harvard University. Agapie has worked on research projects in industry research labs including Microsoft Research, Fuji Xerox Palo Alto Research Lab, Intel, and NASA’s Jet Propulsion Lab. Her work is supported by the National Science Foundation and the National Institutes of Health.

 

Date: Thursday, October 27, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Record, Reveal, and Share: Computer-mediated Perspective Sharing
Speaker: Sang Won Lee, Assistant Professor, CS, Virginia Tech
Location: HBK 2105

Abstract: This talk discusses ways to design computational systems that facilitate empathic communication and collaboration in various domains. My research agenda is a journey toward a framework for understanding the components we need to consider when using technologies to foster empathy. I will introduce the framework and focus on recent projects that treat sharing perspectives as a prerequisite for empathy and that address technical barriers to perspective sharing in emerging technologies.

Bio: Sang Won Lee is an Assistant Professor in the Department of Computer Science at Virginia Tech. His research aims to understand how we can design interactive systems that facilitate empathy among people. His research vision of computer-mediated empathy comes from his computer music background, striving to bring music’s expressive, collaborative, and empathic nature to computational systems. He creates interactive systems that facilitate understanding by providing ways to share perspectives, preserving context in computer-mediated communication, and facilitating self-reflection. He has applied these approaches to various applications, including creative writing, informal learning, and programming.

 

Date: Thursday, October 20, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Unobtrusive Machine-Readable Tags for Identifying, Tracking, and Interacting with Real-World Objects
Speaker: Doğa Doğan, Ph.D. candidate, MIT
Location: HBK 2105

Abstract: Ubiquitous computing requires that mobile and wearable devices are aware of our surroundings so as to augment the real world with contextual information that enriches our interactions with them. For this to work, the objects around us need to carry machine-readable tags, such as barcodes and RFID labels, that describe what they are and communicate this information to devices. While barcodes are inexpensive to produce, they are typically obtrusive, less durable, and less secure than other tags. Regardless of their type, most conventional tags are added to objects post hoc as they are not part of the original design.

I propose to replace this post-hoc augmentation process with tagging approaches that extract objects’ integrated hidden features and use them as machine-detectable tags to make the real world more informative. In this talk, I will introduce three projects: (1) InfraredTags are invisible fiducial markers embedded into 3D printed objects using infrared-transmitting filaments, and detected using cheap infrared cameras. (2) G-ID marks different 3D printed copies of the same object by using unique printing (“slicing”) settings, which result in unobtrusive, machine-detectable surface artifacts. (3) SensiCut is a smart laser cutting platform that leverages speckle imaging and deep learning to distinguish visually similar workshop materials. It adjusts designs based on the chosen material and warns users against hazardous ones. I will show how these methods assist users in creative tasks and enable new interactive applications for augmented reality (AR), object traceability, and user identification.

Bio: Doğa Doğan is a Ph.D. candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and currently an intern at Adobe Research, where he builds novel identification and tagging techniques. At CSAIL, he works with Stefanie Mueller as part of the HCI Engineering Group. Doğa’s research focuses on the fabrication and detection of unobtrusive physical tags embedded into everyday objects and materials. His work has been nominated for best paper and demo awards at CHI, UIST, and ICRA. He is a past recipient of the Adobe Research Fellowship and Siebel Scholarship. Prior to MIT, Doğa conducted research in the Laboratory for Embedded Machines and Ubiquitous Robots at UCLA, and the Physical Intelligence Department of the Max Planck Institute for Intelligent Systems. His website: https://www.dogadogan.com/.

 

Date: Thursday, October 13, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Toward an Equitable Computer Programming Practice Environment for All
Speaker: Carl Haynes-Magyar, Presidential Postdoctoral Fellow, Carnegie Mellon University
Location: HBK 2105

Abstract: Traditional introductory computer programming practice has included writing pseudocode, code reading and tracing, and code writing. These problem types are often time-intensive, frustrating, cognitively complex, in opposition to learners’ self-beliefs, disengaging, and demotivating; not much has changed in the last decade. Pseudocode is a plain-language description of the steps in a program. Code reading and tracing involve using paper and pencil or online tools such as PythonTutor to trace the execution of a program, and code writing requires learners to write code from scratch. In contrast, mixed-up code (Parsons) problems require learners to place blocks of code in the correct order, sometimes also requiring correct indentation and/or a choice between a distracter block and a correct code block. Parsons problems can increase the diversity of programmers who complete introductory computer programming courses by improving the efficiency with which they acquire knowledge and the quality of knowledge acquisition itself. This talk will feature experiments designed to investigate the problem-solving efficiency, cognitive load, pattern application and acquisition, and cognitive accessibility of adaptive Parsons problems. The results have implications for how to generate and sequence such problems.
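As a toy illustration of the mechanics described above, a Parsons problem can be modeled as a shuffled pool of solution blocks plus distracters, with an answer checked against the exact ordered (and indented) solution. This sketch is purely hypothetical and is not drawn from Codespec or the speaker’s materials:

```python
import random

# Solution blocks for a tiny program, in correct order with indentation.
solution = [
    "def mean(xs):",
    "    total = 0",
    "    for x in xs:",
    "        total += x",
    "    return total / len(xs)",
]
# A distracter block containing a common bug (=+ instead of +=).
distracter = "        total =+ x"

def make_problem(blocks, distracters, seed=0):
    """Shuffle the solution blocks together with distracters."""
    pool = blocks + distracters
    random.Random(seed).shuffle(pool)
    return pool

def check(answer, blocks):
    """An answer is correct only if it matches the solution exactly,
    including indentation, with all distracters left out."""
    return answer == blocks

problem = make_problem(solution, [distracter])
print(check(solution, solution))            # correct ordering passes
print(check(problem, solution))             # shuffled pool does not
```

Adaptive variants, as studied in the talk, would then adjust factors like the number of distracters or pre-placed blocks based on learner performance.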

Bio: Carl C. Haynes-Magyar is a Presidential Postdoctoral Fellow at Carnegie Mellon University’s School of Computer Science in the Human–Computer Interaction Institute. Carl’s master’s work included evaluating curricula based on their ability to develop a learner’s proficiencies for assessment and assessing the relationship between perceived and actual learning outcomes during web search interaction. His doctoral work involved studying the design of learning analytics dashboards (LADs) to support learners’ development of self-regulated learning (SRL) skills and investigating how people learn to program using interactive eBooks with adaptive mixed-up code (Parsons) problems. His postdoctoral work is a continued investigation into computing education that involves creating an online programming practice environment called Codespec. The goal is to scaffold the development of programming skills such as code reading and tracing, code writing, pattern comprehension, and pattern application across a gentle slope of different problem types, ranging from block-based programming problems to writing code from scratch. Codespec will support learners, instructors, and researchers by providing help-seeking features, generating multimodal learning analytics, and cultivating IDEAS: inclusion, diversity, equity, accessibility, and sexual orientation and gender awareness. Carl has published several peer-reviewed articles at top venues such as the Conference on Human Factors in Computing Systems (CHI). He has taught courses on organizational behavior, cognitive and social psychology, human-computer interaction, learning analytics, educational data science, and data science ethics, and has been nominated for awards related to instruction and diversity, equity, and inclusion. He is a member of AAAI, ACM SIGCHI and SIGCSE, ALISE, and ISLS. Carl received his Ph.D. from the University of Michigan School of Information in 2022 and a master’s degree in Library and Information Science with honors from Syracuse University’s School of Information Studies (iSchool) in 2016.

 

Date: Thursday, October 6, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Assistive Smartwatch Application to Support Neurodiverse Adults with Emotion Regulation
Speaker: Vivian Motti, Assistant Professor, Department of Information Sciences and Technology, George Mason University
Location: HBK 2105

Abstract: Emotion regulation is an essential skill for young adults, impacting their prospects for employment, education and interpersonal relationships. For neurodiverse individuals, self-regulating their emotions is challenging. Thus, to provide them support, caregivers often offer individualized assistance. Despite being effective, such an approach is also limited. Wearables have a promising potential to address such limitations, helping individuals on demand, recognizing their affective state, and also suggesting coping strategies in a personalized, consistent and unobtrusive way.  In this talk I present the results of a user-centered design project on assistive smartwatches for emotion regulation. We conducted interviews and applied questionnaires to formally characterize emotion regulation. We involved neurodiverse adults as well as parents, caregivers, and assistants as active participants in the project. After eliciting the application requirements, we developed an assistive smartwatch application to assist neurodiverse adults with emotion regulation. The app was implemented, tested and evaluated in field studies. I conclude this talk discussing the role of smartwatches to deliver regulation strategies, their benefits and limitations, as well as the users’ perspectives about the technology.

Bio: Vivian Genaro Motti is an Assistant Professor in the Department of Information Sciences and Technology at George Mason University where she leads the Human-Centric Design Lab (HCD Lab). Her research focuses on Human Computer Interaction, Ubiquitous Computing, Assistive Wearables, and Usable Privacy. She is the principal investigator for a NIDILRR-funded project on assistive smartwatches for neurodiverse adults. Her research has been funded by NSF, TeachAccess, VRIF CCI, and 4-VA.

 

Date: Thursday, September 29, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Inclusion Efforts at Vanguard
Speaker: Oxana Loseva, Senior UX Researcher, Vanguard
Location: HBK 2105

Abstract: A detailed look at how Vanguard fosters the inclusion of research participants with various disabilities. We will discuss how to build a panel of participants with different disabilities, the research being conducted with them at Vanguard, and the work a contractor with Down syndrome has done during her five-month tenure with Vanguard.

Bio: Oxana has an undergraduate degree in Service Design from the Savannah College of Art and Design. While working on her bachelor’s, she began working with folks with disabilities and exploring the physical accessibility of spaces. She went on to earn a Master’s in Design Research from Drexel University, where she focused on developing a game for people with cognitive disabilities. She now works at Vanguard as a Senior UX Researcher, and when she is not working on her game that teaches people with cognitive disabilities how to manage money, she spends time with her pup Pepper and takes her hiking around PA.

 

Date: Thursday, September 22, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Anytime Anywhere All At Once: Data Analytics in the Metaverse
Speaker: Niklas Elmqvist, Professor, iSchool, UMD
Location: HBK 2105

Abstract: Mobile computing, virtual and augmented reality, and the internet of things (IoT) have transformed the way we interact with computers. Artificial intelligence and machine learning have unprecedented potential for amplifying human abilities. But how have these technologies impacted data analysis, and how will they cause data analysis to change in the future? In this talk, I will review my group’s sustained efforts of going beyond the mouse and the keyboard into the “metaverse” of analytics: large-scale, distributed, ubiquitous, immersive, and increasingly mobile forms of data analytics augmented and amplified by AI/ML models. I will also present my vision for the fundamental theories, applications, design studies, technologies, and frameworks we will need to fulfill the vast potential of this exciting new area in the future.

Bio: Niklas Elmqvist (he/him/his) is a full professor in the iSchool (College of Information Studies) at the University of Maryland, College Park. He received his Ph.D. in computer science in 2006 from Chalmers University of Technology in Gothenburg, Sweden. Prior to joining the University of Maryland, he was an assistant professor of electrical and computer engineering at Purdue University in West Lafayette, IN. From 2016 to 2021, he served as director of the Human-Computer Interaction Laboratory (HCIL) at the University of Maryland, one of the oldest and best-known HCI research labs in the United States. His research areas are information visualization, human-computer interaction, and visual analytics. He is the recipient of an NSF CAREER award as well as best paper awards from the IEEE Information Visualization conference, the ACM CHI conference, the International Journal of Virtual Reality, and the ASME IDETC/CIE conference. He was papers co-chair for IEEE InfoVis 2016, 2017, and 2020, as well as a subcommittee chair for ACM CHI 2020 and 2021. He is a past associate editor of IEEE Transactions on Visualization & Computer Graphics, and a current associate editor for the International Journal of Human-Computer Studies and the Information Visualization journal. In addition, he serves as series editor of the Springer Nature Synthesis Lectures on Visualization. His research has been funded by federal agencies such as NSF, NIH, and DHS as well as by companies such as Google, NVIDIA, and Microsoft. He is the recipient of the Purdue Student Government Graduate Mentoring Award (2014), the Ruth and Joel Spira Outstanding Teacher Award (2012), and the Purdue ECE Chicago Alumni New Faculty Award (2010). He was elevated to the rank of Distinguished Scientist of the ACM in 2018.

 


Date: Thursday, September 15, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Ideological Trajectories in Recommendation Systems for News Consumption
Speaker: Cody Buntain, Assistant Professor, iSchool, UMD
Location: HBK 2105

Abstract: While originally developed to increase diversity in product recommendations and show individuals personalized content, recommendation systems have increasingly been criticized for their opacity, potential to radicalize vulnerable users, and incentivizing anti-social content. At the same time, studies have shown that modified recommendation systems can suppress anti-social content across the information ecosystem, and platforms are increasingly relying on such modifications for soft content-moderation interventions. These contradictions are difficult to reconcile because the underlying recommendation systems are often dynamic and commercially sensitive, making academic research on them difficult. This talk sheds light on these issues in the context of political news consumption by building several recommendation systems from first principles, populated with real-world engagement data from Twitter and Reddit. Using domain-level ideology measures, we simulate individuals’ ideological trajectories through recommendations for news sources and examine whether standard recommendation approaches drive individuals to more partisan content and under what circumstances such radicalizing trajectories may emerge. We end with a discussion of personalization’s impact on consuming political content, and implications for instrumenting deployed recommendation systems for anti-social effects.
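The trajectory simulation described in the abstract can be illustrated with a toy sketch. This is an illustrative assumption, not the speaker’s actual code or data: it assumes each news source reduces to a single ideology score in [-1, 1], a naive recommender that always suggests the unseen source closest to the reader’s current position, and a fixed rate at which consumption nudges the reader toward what they read.

```python
# Toy illustration (not the study's implementation): a reader's ideological
# position drifts as a "most-similar" recommender repeatedly suggests the
# nearest unseen source. All scores and the drift rate are hypothetical.
def recommend(user_ideology, sources, seen):
    # Suggest the unseen source closest to the user's current position.
    candidates = [s for s in sources if s not in seen]
    return min(candidates, key=lambda s: abs(s - user_ideology))

def trajectory(start, sources, steps=5, rate=0.5):
    pos, seen, path = start, set(), [start]
    for _ in range(steps):
        rec = recommend(pos, sources, seen)
        seen.add(rec)
        pos += rate * (rec - pos)  # consumption pulls the reader toward the source
        path.append(pos)
    return path

# Hypothetical sources spanning the ideological spectrum.
sources = [-0.9, -0.6, -0.3, 0.0, 0.3, 0.6, 0.9]
print(trajectory(0.2, sources))
```

Even in this stripped-down setting, exhausting nearby moderate sources can pull the simulated reader steadily toward one extreme, which is the kind of radicalizing trajectory the talk examines with real engagement data.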

Bio: Dr. Cody Buntain is an assistant professor in the College of Information Studies at the University of Maryland and a research affiliate of NYU’s Center for Social Media and Politics, where he studies online information and social media. His work examines how people use online information spaces during crises and political unrest, with a focus on information quality, preventing manipulation, and enhancing resilience. His work in these areas has been covered by the New York Times, Washington Post, WIRED, and others. Prior to UMD, he was an assistant professor at the New Jersey Institute of Technology and held an Intelligence Community Postdoctoral Fellowship.

 

Date: Thursday, September 8, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Aphasia Profiles and Implications for Technology Use
Speaker: Kristin Slawson, Clinical Associate Professor, University of Maryland Hearing and Speech Clinic, and Michael Settles
Location: HBK 2105

Abstract: Conservative estimates suggest that 2.5 million people in the US have aphasia, yet few people have ever heard of the condition. Aphasia is a poorly understood, “invisible disability” that specifically impacts use of language in all forms. People with aphasia are more likely than other stroke survivors to experience social isolation, loss of independence, and significantly lower levels of employment. These immediate consequences have negative ripple effects on the mental and physical health outcomes of survivors and their family members. This talk aims to increase awareness of specific aphasia profiles in hopes of exploring how technology can be adapted to help people with aphasia maintain their prior level of work, social engagement, and independence to the greatest degree possible. 

Bio: Kristin Slawson is a Speech-Language Pathologist and a Clinical Associate Professor in Hearing and Speech Sciences. As a brain injury specialist, she is particularly interested in the functional impact of brain injuries on cognitive-linguistic abilities and implications of these changes on maintenance of social connections and return to school and work. 

Bio: Michael Settles received a 2022 ASHA Media Champion Award for his work advocating for aphasia awareness. He is featured in a special exhibit on aphasia and word finding at the Planet Word Museum in Washington, DC. He is an advocate for expanded use of technology to support the communication needs of people with aphasia.

Check out slides from Kristin’s presentation here.

 

Date: Thursday, September 1, 2022
Time: 12:30pm-1:30pm ET
Location: HCIL (HBK 2105)

Welcome back for fall 2022 semester! Join us, have some pizza, and meet the faculty and students who are part of the lab.

 


Spring 2022 Semester

 


Fall 2021 Semester


 

Spring 2021 Semester

Fall 2020 Semester