BBL Speaker Series

Join us each Thursday during the fall and spring semesters as we present interesting speakers on topics including current areas of interest in HCI, software demos/reviews, study design, proposed research topics, and more. The BBL is the one hour each week when we all come together, giving HCIL members the opportunity to build collaborations, increase awareness of each other’s activities, and generally just have a bit of fun together.

When:  Every Thursday during the semester from 12:30pm – 1:30pm ET
What: Speakers and other social/networking events (with pizza)
Where: HCIL lab space (Hornbake Building, South Wing, Room 2105)
Can’t make it in person? Register for the Zoom stream

If you would like to give (or suggest) a future BBL talk, send email to HCIL Director Jessica Vitak (jvitak@umd.edu) with your proposed talk title, a brief abstract, and your bio.

Miss a talk that you were interested in? Check our YouTube channel to see if it was recorded. Most talks are; some are not, depending on the speaker’s preference.


Fall 2022 Semester

Date: Thursday, December 1, 2022
Time: 12:30pm-1:30pm ET

Talk Title: (Some) things I worry about in HCI/CSCW research
Speaker: Dan Cosley, Program Officer, National Science Foundation
Location: HBK 2105

Abstract: In this talk, rather than report out on some research that I’m involved with, I plan to do some meta-reflection on things that I worry reduce the contribution and impact of research in HCI, CSCW, and related areas. I tentatively plan to focus on four main issues, based both on work I’ve been involved with myself and on other studies I’ve seen:

  • Our Methods Make Us Dumb (Other People Know Things)
  • Whither the Artifact? (Goldilocks and the Three Stances)
  • Things Change (Tweet, Tweet… Musk!); and
  • Failure To Generalize (A Grounded Theory of X?)

I haven’t given a talk like this before, and many of the issues have already been observed in some form by people smarter than me, but I think there’s value in bringing them together and hope that talking about this will be useful for both HCI practice and HCI research. I plan to have the talk itself run a little short so we can have a more interactive discussion, so feel free to bring a few of your own worries along to share.

Bio: Dan Cosley has been a permanent program officer at NSF since September 2020, homed in the Human-Centered Computing program in CISE and associated with a number of other solicitations, with a mostly up-to-date list at https://www.nsf.gov/staff/staff_bio.jsp?lan=dcosley. Before that, he was an associate professor at Cornell in the Information Science department, doing both design-based and analytic research in the spaces of Human-Computer Interaction and Computer-Supported Cooperative Work. This includes work around designing user interfaces for recommender systems; modeling human information behaviors from computational traces; supporting crowdwork and online collaboration, and studying the power relationships involved; systems and models connecting social media, identity, and memory; and various other topics that he helped students work on along the way.

Date: Thursday, November 17, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Participatory approaches to AI in digital health and well-being
Speaker: Lauren Wilcox, Senior Staff Research Scientist, Google
Location: HBK 2105

Abstract: Advances in computing technology continue to offer us new insights about our health and well-being. As mutually reinforcing trends make the use of wearable and mobile devices routine, we now collect personal, health-related data at an unprecedented scale. Meanwhile, the use of deep-learning-based health screening technologies changes relationships between caregivers and care recipients, with multitudinous implications for equity, privacy, safety, and trust. How can researchers take inclusive and responsible approaches to envisioning solutions, curating training data, and deploying AI/ML-driven solutions? Who should be involved in decisions about how to use ML/AI in digital health and well-being solutions, and even about which solutions matter in the first place? In this talk, I will discuss participatory approaches to designing digital health and well-being technologies with patients, family members, and clinicians. Starting with field studies in clinics exploring how people navigated use of a deployed, diagnostic AI system, and moving on to examples of responsible AI practices, I will discuss participatory approaches and their importance throughout the technology design, development, and evaluation process.

Bio: Lauren Wilcox, PhD, is a Senior Staff Research Scientist in Responsible AI and Human-Centered Computing in Google Research. She brings sixteen years of experience conducting human-centered computing research in service of human health and well-being. Previously at Google Health, Wilcox led initiatives to align AI advancements in healthcare with the needs of clinicians, patients, and their family members. She also holds an Adjunct Associate Professor position in Georgia Tech’s School of Interactive Computing. Wilcox was an inaugural member of the ACM Future of Computing Academy. She frequently serves on the organizing and technical program committees for premier conferences in the field (e.g., ACM CHI). Wilcox received her PhD in Computer Science from Columbia University in 2013.

Date: Thursday, November 10, 2022
Time: 12:30pm-1:30pm ET

Talk Title: How and what kind of research we do in the Small Artifacts Lab (Hint: Design, Wearable, Fabrication, and Accessibility)
Speaker: Huaishu Peng, Assistant Professor, CS, UMD
Location: HBK 2105

Abstract: In this talk, I will give a brief overview of the HCI research we are conducting (or planning) in the Small Artifacts Lab. I will showcase several recent works concerning various HCI topics, e.g., design, fabrication, wearable computing, and accessibility, but all from a technical perspective. As examples, I will discuss how we designed a small wearable robot that can relocate itself on a user’s full body instead of staying only in one area of interest (e.g., a smartwatch on the wrist) and how the design opens new opportunities in both research and art; I will also talk about how we created a tangible artifact that supports blind developers in creating the graphical layout of webpages on their own. Toward the end of the talk, I will leave time to discuss with the audience how technical innovation can drive HCI research.

Bio: Huaishu Peng is an Assistant Professor in the Computer Science department at the University of Maryland, College Park. He aims to advance interactive technologies by designing, prototyping, and evaluating novel artifacts that are personal, hands-on, and often small in form factor. He is interested in the methods of building these personal artifacts (through design and interactive fabrication), the scenarios of using them (in mixed reality), and the users who can benefit from them (with assistive and enabling technology). His work has been published in CHI, UIST, and SIGGRAPH and has received Best Paper nominations. His work has also been featured in media such as Wired, MIT Technology Review, TechCrunch, and Gizmodo.

Date: Thursday, November 3, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Designing Health Technology for the Intersection of Evidence and Everyday Life
Speaker: Elena Agapie, Assistant Professor, Informatics, UC-Irvine
Location: HBK 2105

Abstract: Pursuing healthy behaviors is a complex, long-term process that is difficult to maintain. Many technologies promise to support people in pursuing health goals, yet many such technologies fail to account for people’s everyday needs or incorporate evidence-based strategies. In this talk, I discuss the challenges that researchers encounter in designing technologies that use evidence-driven health techniques while accounting for people’s everyday lives. I use human-centered design methods and create novel systems that address key challenges that people encounter in working on health goals: starting new behaviors while accounting for the complexities of everyday life, and engaging with health goals long term. I discuss how technology can better support clinicians and peers in providing evidence-driven, tailored support to clients, in domains including physical activity and mental health therapy.

Bio: Elena Agapie is an Assistant Professor in the Department of Informatics at the University of California, Irvine. She studies, designs, and builds technology to support people in pursuing positive health behaviors by drawing on people’s everyday experiences and evidence-based interventions. Agapie’s work has been published and received awards in top HCI venues including CHI, CSCW, and HCOMP. She received her Ph.D. in Human Centered Design and Engineering from the University of Washington, and a master’s degree in Computer Science from Harvard University. Agapie has worked on research projects in industry research labs including Microsoft Research, Fuji Xerox Palo Alto Research Lab, Intel, and NASA’s Jet Propulsion Lab. Her work is supported by the National Science Foundation and the National Institutes of Health.

Date: Thursday, October 27, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Record, Reveal, and Share: Computer-mediated Perspective Sharing
Speaker: Sang Won Lee, Assistant Professor, CS, Virginia Tech
Location: HBK 2105

Abstract: This talk discusses ways to design computational systems that facilitate empathic communication and collaboration in various domains. My research agenda is a journey toward creating a framework we can use to understand the components we need to consider when using technologies to foster empathy. I will introduce the framework and then focus on recent projects that position perspective sharing as a prerequisite to empathy and address technical barriers to sharing perspectives in emerging technologies.

Bio: Sang Won Lee is an Assistant Professor in the Department of Computer Science at Virginia Tech. His research aims to understand how we can design interactive systems that facilitate empathy among people. His research vision of computer-mediated empathy comes from his computer music background, striving to bring music’s expressive, collaborative, and empathic nature to computational systems. He creates interactive systems that can facilitate understanding by providing ways to share perspectives, preserve context in computer-mediated communication, and facilitate self-reflection. He has applied these approaches to various applications, including creative writing, informal learning, and programming.

Date: Thursday, October 20, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Unobtrusive Machine-Readable Tags for Identifying, Tracking, and Interacting with Real-World Objects
Speaker: Doğa Doğan, Ph.D. candidate, MIT
Location: HBK 2105

Abstract: Ubiquitous computing requires that mobile and wearable devices are aware of our surroundings so as to augment the real world with contextual information that enriches our interactions with them. For this to work, the objects around us need to carry machine-readable tags, such as barcodes and RFID labels, that describe what they are and communicate this information to devices. While barcodes are inexpensive to produce, they are typically obtrusive, less durable, and less secure than other tags. Regardless of their type, most conventional tags are added to objects post hoc as they are not part of the original design.

I propose to replace this post-hoc augmentation process with tagging approaches that extract objects’ integrated hidden features and use them as machine-detectable tags to make the real world more informative. In this talk, I will introduce three projects: (1) InfraredTags are invisible fiducial markers embedded into 3D printed objects using infrared-transmitting filaments, and detected using cheap infrared cameras. (2) G-ID marks different 3D printed copies of the same object by using unique printing (“slicing”) settings, which result in unobtrusive, machine-detectable surface artifacts. (3) SensiCut is a smart laser cutting platform that leverages speckle imaging and deep learning to distinguish visually similar workshop materials. It adjusts designs based on the chosen material and warns users against hazardous ones. I will show how these methods assist users in creative tasks and enable new interactive applications for augmented reality (AR), object traceability, and user identification.

Bio: Doğa Doğan is a Ph.D. candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and currently an intern at Adobe Research, where he builds novel identification and tagging techniques. At CSAIL, he works with Stefanie Mueller as part of the HCI Engineering Group. Doğa’s research focuses on the fabrication and detection of unobtrusive physical tags embedded into everyday objects and materials. His work has been nominated for best paper and demo awards at CHI, UIST, and ICRA. He is a past recipient of the Adobe Research Fellowship and Siebel Scholarship. Prior to MIT, Doğa conducted research in the Laboratory for Embedded Machines and Ubiquitous Robots at UCLA, and the Physical Intelligence Department of the Max Planck Institute for Intelligent Systems. His website: https://www.dogadogan.com/.

Date: Thursday, October 13, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Toward an Equitable Computer Programming Practice Environment for All
Speaker: Carl Haynes-Magyar, Presidential Postdoctoral Fellow, Carnegie Mellon University
Location: HBK 2105

Abstract: Traditional introductory computer programming practice has included writing pseudocode, code-reading and tracing, and code-writing. These problem types are often time-intensive, frustrating, cognitively complex, at odds with learners’ self-beliefs, disengaging, and demotivating—and not much has changed in the last decade. Pseudocode is a plain-language description of the steps in a program. Code-reading and tracing involve using paper and pencil or online tools such as PythonTutor to trace the execution of a program, and code-writing requires learners to write code from scratch. In contrast to these types of programming practice problems, mixed-up code (Parsons) problems require learners to place blocks of code in the correct order, and sometimes also require the correct indentation and/or selection between a distracter block and a correct code block. Parsons problems can increase the diversity of programmers who complete introductory computer programming courses by improving the efficiency with which they acquire knowledge and the quality of knowledge acquisition itself. This talk will feature experiments designed to investigate the problem-solving efficiency, cognitive load, pattern application and acquisition, and cognitive accessibility of adaptive Parsons problems. The results have implications for how to generate and sequence them.

Bio: Carl C. Haynes-Magyar is a Presidential Postdoctoral Fellow at Carnegie Mellon University’s School of Computer Science in the Human–Computer Interaction Institute. Carl’s master’s work included evaluating curricula based on their ability to develop a learner’s proficiencies for assessment, and assessing the relationship between perceived and actual learning outcomes during web search interaction. His doctoral work involved studying the design of learning analytics dashboards (LADs) to support learners’ development of self-regulated learning (SRL) skills and investigating how people learn to program using interactive eBooks with adaptive mixed-up code (Parsons) problems. His postdoctoral work is a continued investigation into computing education that involves creating an online programming practice environment called Codespec. The goal is to scaffold the development of programming skills such as code reading and tracing, code writing, pattern comprehension, and pattern application across a gentle slope of different problem types. These types range from block-based programming problems to writing code from scratch. Codespec will support learners, instructors, and researchers by providing help-seeking features, generating multimodal learning analytics, and cultivating IDEAS: inclusion, diversity, equity, accessibility, sexual orientation and gender awareness. Carl has published several peer-reviewed articles at top venues such as the Conference on Human Factors in Computing Systems (CHI). He has taught as an instructor for courses on organizational behavior, cognitive and social psychology, human-computer interaction, learning analytics, educational data science, and data science ethics. He has been nominated for awards related to instruction and diversity, equity, and inclusion. He is a member of AAAI, ACM SIGCHI and SIGCSE, ALISE, and ISLS. Carl received his Ph.D. from the University of Michigan School of Information in 2022, and a master’s degree in Library and Information Science with honors from Syracuse University’s School of Information Studies (iSchool) in 2016.

Date: Thursday, October 6, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Assistive Smartwatch Application to Support Neurodiverse Adults with Emotion Regulation
Speaker: Vivian Motti, Assistant Professor, Department of Information Sciences and Technology, George Mason University
Location: HBK 2105

Abstract: Emotion regulation is an essential skill for young adults, impacting their prospects for employment, education and interpersonal relationships. For neurodiverse individuals, self-regulating their emotions is challenging. Thus, to provide them support, caregivers often offer individualized assistance. Despite being effective, such an approach is also limited. Wearables have a promising potential to address such limitations, helping individuals on demand, recognizing their affective state, and also suggesting coping strategies in a personalized, consistent and unobtrusive way.  In this talk I present the results of a user-centered design project on assistive smartwatches for emotion regulation. We conducted interviews and applied questionnaires to formally characterize emotion regulation. We involved neurodiverse adults as well as parents, caregivers, and assistants as active participants in the project. After eliciting the application requirements, we developed an assistive smartwatch application to assist neurodiverse adults with emotion regulation. The app was implemented, tested and evaluated in field studies. I conclude this talk discussing the role of smartwatches to deliver regulation strategies, their benefits and limitations, as well as the users’ perspectives about the technology.

Bio: Vivian Genaro Motti is an Assistant Professor in the Department of Information Sciences and Technology at George Mason University where she leads the Human-Centric Design Lab (HCD Lab). Her research focuses on Human Computer Interaction, Ubiquitous Computing, Assistive Wearables, and Usable Privacy. She is the principal investigator for a NIDILRR-funded project on assistive smartwatches for neurodiverse adults. Her research has been funded by NSF, TeachAccess, VRIF CCI, and 4-VA.

Date: Thursday, September 29, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Inclusion Efforts at Vanguard
Speaker: Oxana Loseva, Senior UX Researcher, Vanguard
Location: HBK 2105

Abstract: A detailed look at how Vanguard fosters inclusion of research participants with various disabilities. We will discuss how to build a panel of participants with different disabilities, the work being conducted by them at Vanguard, and the work a contractor with Down syndrome has done during her 5-month tenure with Vanguard.

Bio: Oxana has an undergraduate degree in Service Design from Savannah College of Art and Design. While working on her bachelor’s she started working with folks with disabilities and exploring the physical accessibility of spaces. She went on to earn a Master’s in Design Research from Drexel University where she focused on developing a game for people with cognitive disabilities. She works at Vanguard as a Sr. UX Researcher and when she is not working on her game that teaches people with cognitive disabilities how to manage money, she spends time with her pup Pepper and takes her hiking around PA.

Date: Thursday, September 22, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Anytime Anywhere All At Once: Data Analytics in the Metaverse
Speaker: Niklas Elmqvist, Professor, iSchool, UMD
Location: HBK 2105

Abstract: Mobile computing, virtual and augmented reality, and the internet of things (IoT) have transformed the way we interact with computers. Artificial intelligence and machine learning have unprecedented potential for amplifying human abilities. But how have these technologies impacted data analysis, and how will they cause data analysis to change in the future? In this talk, I will review my group’s sustained efforts of going beyond the mouse and the keyboard into the “metaverse” of analytics: large-scale, distributed, ubiquitous, immersive, and increasingly mobile forms of data analytics augmented and amplified by AI/ML models. I will also present my vision for the fundamental theories, applications, design studies, technologies, and frameworks we will need to fulfill the vast potential of this exciting new area in the future.

Bio: Niklas Elmqvist (he/him/his) is a full professor in the iSchool (College of Information Studies) at the University of Maryland, College Park. He received his Ph.D. in computer science in 2006 from Chalmers University in Gothenburg, Sweden. Prior to joining the University of Maryland, he was an assistant professor of electrical and computer engineering at Purdue University in West Lafayette, IN. From 2016 to 2021, he served as the director of the Human-Computer Interaction Laboratory (HCIL) at the University of Maryland, one of the oldest and most well-known HCI research labs in the United States. His research areas are information visualization, human-computer interaction, and visual analytics. He is the recipient of an NSF CAREER award as well as best paper awards from the IEEE Information Visualization conference, the ACM CHI conference, the International Journal of Virtual Reality, and the ASME IDETC/CIE conference. He was papers co-chair for IEEE InfoVis 2016, 2017, and 2020, as well as a subcommittee chair for ACM CHI 2020 and 2021. He is also a past associate editor of IEEE Transactions on Visualization & Computer Graphics, as well as a current associate editor for the International Journal of Human-Computer Studies and the Information Visualization journal. In addition, he serves as series editor of the Springer Nature Synthesis Lectures on Visualization. His research has been funded by federal agencies such as NSF, NIH, and DHS as well as by companies such as Google, NVIDIA, and Microsoft. He is the recipient of the Purdue Student Government Graduate Mentoring Award (2014), the Ruth and Joel Spira Outstanding Teacher Award (2012), and the Purdue ECE Chicago Alumni New Faculty Award (2010). He was elevated to the rank of Distinguished Scientist of the ACM in 2018.

Date: Thursday, September 15, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Ideological Trajectories in Recommendation Systems for News Consumption
Speaker: Cody Buntain, Assistant Professor, iSchool, UMD
Location: HBK 2105

Abstract: While originally developed to increase diversity in product recommendations and show individuals personalized content, recommendation systems have increasingly been criticized for their opacity, potential to radicalize vulnerable users, and incentivizing of anti-social content. At the same time, studies have shown that modified recommendation systems can suppress anti-social content across the information ecosystem, and platforms are increasingly relying on such modifications for soft content-moderation interventions. These contradictions are difficult to reconcile, as the underlying recommendation systems are often dynamic and commercially sensitive, making academic research on them difficult. This talk sheds light on these issues in the context of political news consumption by building several recommendation systems from first principles, populated with real-world engagement data from Twitter and Reddit. Using domain-level ideology measures, we simulate individuals’ ideological trajectories through recommendations for news sources and examine whether standard recommendation approaches drive individuals to more partisan content and under what circumstances such radicalizing trajectories may emerge. We end with a discussion of personalization’s impact on consuming political content, and implications for instrumenting deployed recommendation systems for anti-social effects.

Bio: Dr. Cody Buntain is an assistant professor in the College of Information Studies at the University of Maryland and a research affiliate for NYU’s Center for Social Media and Politics, where he studies online information and social media. His work examines how people use online information spaces during crises and political unrest, with a focus on information quality, preventing manipulation, and enhancing resilience. His work in these areas has been covered by the New York Times, Washington Post, WIRED, and others. Prior to UMD, he was an assistant professor at the New Jersey Institute of Technology and an Intelligence Community Postdoctoral Fellow.

Date: Thursday, September 8, 2022
Time: 12:30pm-1:30pm ET

Talk Title: Aphasia Profiles and Implications for Technology Use
Speaker: Kristin Slawson, Clinical Associate Professor, University of Maryland Hearing and Speech Clinic, and Michael Settles
Location: HBK 2105

Abstract: Conservative estimates suggest that 2.5 million people in the US have aphasia, yet few people have ever heard of the condition. Aphasia is a poorly understood, “invisible disability” that specifically impacts use of language in all forms. People with aphasia are more likely than other stroke survivors to experience social isolation, loss of independence, and significantly lower levels of employment. These immediate consequences have negative ripple effects on the mental and physical health outcomes of survivors and their family members. This talk aims to increase awareness of specific aphasia profiles in hopes of exploring how technology can be adapted to help people with aphasia maintain their prior level of work, social engagement, and independence to the greatest degree possible. 

Bio: Kristin Slawson is a Speech-Language Pathologist and a Clinical Associate Professor in Hearing and Speech Sciences. As a brain injury specialist, she is particularly interested in the functional impact of brain injuries on cognitive-linguistic abilities and implications of these changes on maintenance of social connections and return to school and work. 

Bio: Michael Settles is a recipient of a 2022 ASHA Media Champion Award for his work advocating for aphasia awareness. He is featured in a special exhibit on aphasia and word finding at the Planet Word Museum in Washington, DC. He is an advocate for expanded use of technology to support the communication needs of people with aphasia.

Check out slides from Kristin’s presentation here.

Date: Thursday, September 1, 2022
Time: 12:30pm-1:30pm ET
Location: HCIL (HBK 2105)

Welcome back for fall 2022 semester! Join us, have some pizza, and meet the faculty and students who are part of the lab.


Spring 2022

Fall 2021 Semester


Spring 2021 Semester

Fall 2020 Semester