A Motion to Consider: New Evidence for the Value of Animation in Visual Analysis

By Brian Ondov and Niklas Elmqvist

Animation in data visualization has been through it all, from initial hype, to an explosion of applications, to skepticism and debunking. If nothing else, though, the topic of animation is persistent, perennially asserting its presence both in design and in the scientific community. In fact, a landmark study casting doubt on the efficacy of animation will receive a 10-year Test of Time award at the upcoming IEEE VIS conference, emphasizing the enduring relevance of this mechanism, even in the face of negative results.

Motion seems to speak to us at a fundamental level, perhaps because it is among the most primitive elements of vision. Perception of motion originates in the retina itself, and even motion outside our visual field can trigger an innate response to look toward it. But can animation have perceptual value beyond just catching our attention?

As is often the case in science, we stumbled upon an interesting result almost by accident. Our intent was to investigate the usefulness of different layouts to support comparison, with an especially curious eye toward symmetric arrangements, such as population pyramids. For the sake of completeness, we threw in animation as a bit of a wild card. To our surprise, in our first experiment, animation won the day, enabling participants to perform the task better even than with superimposed displays (which were intended as a control!).

So what is going on here? Are these results incongruent with other experiments? Not necessarily. While it is tempting to use perceptual studies to label visual techniques as “good” or “bad”, the reality is of course more complicated. For example, our setup involved only a small number of moving shapes, and required the ranking of very subtle differences between data points. Both of these factors likely made it easier to track and compare elements of the scene. We think this allowed participants to bring a powerful perceptual ability to bear: the estimation of how fast an object is moving.

What does this mean going forward? Well, for one, our study was highly controlled and narrowly focused, and there is plenty more to be done to explore all the factors that may be at play. More broadly, though, we may benefit simply from adopting a more nuanced view of the value of animation for conveying information. What is clear is that these questions will remain important as the field progresses. Whether it’s used to enhance cognition or just for splash, animation probably isn’t going anywhere.

More information about this work:

Ondov, Brian, Nicole Jardine, Niklas Elmqvist, and Steven Franconeri. “Face to Face: Evaluating Visual Comparison.” IEEE Transactions on Visualization and Computer Graphics (2018).

To learn more, contact Brian Ondov at ondovb@umd.edu.

For video and demos, visit: https://hcil.umd.edu/visualcomparison/

To learn more about the HCIL, please visit: http://hcil.umd.edu/

A Moment of Reflection: How Symmetry Can Help Us Interpret Charts

By Brian Ondov and Niklas Elmqvist

Symmetry has a long history of study in perceptual psychology, from the Gestalt movement to modern, computerized experiments with flashing point clouds. Researchers have even tested whether this basic element of visual organization is still perceived by astronauts in microgravity (it is!). Much less studied, though, is how our innate ability to recognize symmetrical shapes and scenes might affect how we see data in visualizations.

Perhaps recognizing the power of symmetry intuitively, demographers have long used it to juxtapose the male and female components of population pyramids, beginning in the late 19th century. Here at the HCIL, though, we arrived at symmetry from a somewhat different angle, while tackling the problem of how to compare two sunburst charts. Nonetheless, we saw an opportunity to provide more experimental evidence for the technique (and, in fact, for comparative displays more generally).

We asked two main questions: (1) does symmetry help pick out a “biggest mover” between two datasets (top, left), and (2) does it help identify overall similarity (top, right)?

The results were promising: for the first task, the symmetrical arrangement indeed allowed participants to identify more accurately which bar changed the most, compared to a typical side-by-side view. This supports the idea that we can see not only whether a shape is symmetrical, but also which parts are or are not. Symmetry, though, was not the top performer here—the task was even easier using a superimposed display, and easier still with animation instead of two static views (the latter was a pleasantly incidental finding).

Where symmetry really shone, though, was in the second task, involving overall similarity. Here, the symmetrical arrangement outperformed all others, including superimposing and animating. This is perhaps not so surprising, given what we know about perception. After all, when arranged in this way, more similar data sets will create more symmetrical shapes. Still, this provides some empirical evidence that had been missing, which is pretty exciting for those of us who traffic in data. On top of that, as you may have noticed, the charts in this task look a lot like the population pyramids mentioned earlier. This is a nicely mutual validation (a symmetry, if you will): it’s both experimental support for a long-used technique and practical corroboration of our controlled experiment.

Of course, these were very focused studies, with the number of variables intentionally limited. In the future, we can ask a host of other questions to tease out where, when, and why symmetry works—and doesn’t work—in data visualization. Naturally, the technique will not be appropriate for all situations, especially if more than two datasets need to be compared. If nothing else, these results have given us something to reflect on.

More information about this work:

Ondov, Brian, Nicole Jardine, Niklas Elmqvist, and Steven Franconeri. “Face to Face: Evaluating Visual Comparison.” IEEE Transactions on Visualization and Computer Graphics (2018).

To learn more, contact Brian Ondov at ondovb@umd.edu.

For video and demos, visit: https://hcil.umd.edu/visualcomparison/

To learn more about the HCIL, please visit: http://hcil.umd.edu/


Tutorial on Social Media Analytics During Crises – 33rd HCIL Symposium

TUTORIAL: Social Media Analytics During Crises

Thursday, May 26, 2016

A tutorial during the
33rd Human-Computer Interaction Lab Symposium
University of Maryland

Overview

This tutorial will build practical experience in using Python and Jupyter Notebooks to analyze and discover insights from social media during times of crisis and social unrest. We demonstrate how temporal, network, sentiment, and geographic analyses of Twitter data can aid in understanding contentious events and enhance storytelling about them. Examples of events we might cover include the protests in Ferguson, MO, the Boston Marathon bombing, and the Charlie Hebdo attack. Demonstrations will include hands-on exercises in extracting tweets by location, sentiment analysis, network analysis to visualize groups taking part in the discussion, and detecting high-impact moments in the data. Most of the work will be performed in the Jupyter notebook framework to support repeatable research and dissemination of results to others.
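To give a flavor of the notebook workflow, the first steps (parsing tweet JSON and a simple frequency analysis) can be sketched in a few lines of standard-library Python. The sample records and field names below are illustrative stand-ins, not real Twitter API output:

```python
import json
from collections import Counter

# Illustrative tweet records in a simplified JSON format; in the
# tutorial, real data would come from the Twitter APIs.
raw_tweets = [
    '{"text": "Marching downtown now #protest #justice", "user": "a"}',
    '{"text": "Stay safe everyone #protest", "user": "b"}',
    '{"text": "Live updates from the scene #news", "user": "c"}',
]

# Parse each JSON line into a Python dict.
tweets = [json.loads(line) for line in raw_tweets]

# Simple frequency analysis: count hashtags across all tweets.
hashtags = Counter(
    word.lower()
    for tweet in tweets
    for word in tweet["text"].split()
    if word.startswith("#")
)

print(hashtags.most_common(2))  # [('#protest', 2), ('#justice', 1)]
```

From here, the same parsed records feed naturally into the temporal, geographic, and network analyses covered later in the session.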

Prerequisites

Tutorial participants are expected to have some prior experience in programming. Proficiency in Python is preferred but not essential as Python is a straightforward language to learn given prior experience with C/C++, Java, Perl, etc.

Tutorial content will be built on the IPython/Jupyter notebook framework, which does not come standard on most platforms and is generally installed via the command line, so familiarity with console applications is also preferred.

Organizers

Questions: Please contact Cody Buntain (cbuntain@cs.umd.edu)

Agenda (Subject to Change)

The precise timing is not yet set, but it will likely follow this format:

  • 08:15am – Symposium Registration, Breakfast
  • 09:00am – Symposium Plenary Talks
  • 1:00pm-1:15pm – Tutorial Introduction
  • 1:15pm-3:00pm – Tutorial: Session I
    • Topic 1: Introducing the Jupyter Notebook
    • Topic 2: Data sources and collection
    • Topic 3: Parsing Twitter data
    • Topic 4: Simple frequency analysis
  • 3:00pm-3:20pm – Coffee Break
  • 3:20pm-4:30pm – Tutorial: Session II
    • Topic 5: Geographic information systems
    • Topic 6: Sentiment analysis
    • Topic 7: Topic modeling
    • Topic 8: Network analysis
  • 4:30pm-4:45pm – Tutorial Conclusion
  • 05:00pm – Symposium Demos, Posters, Reception
  • 06:30pm – Symposium Ends
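As a small preview of the Session II network-analysis topic, a mention graph can be built from tweets with only the standard library (real analyses would typically use a dedicated package such as NetworkX; the tweets below are made up for illustration):

```python
from collections import defaultdict

# Illustrative tweets as (author, text) pairs; in practice these
# would be parsed from Twitter JSON.
tweets = [
    ("alice", "RT @bob: big crowd forming"),
    ("carol", "agreeing with @bob and @alice here"),
    ("dave",  "@alice are you near the square?"),
]

# Build a directed mention graph: author -> set of mentioned users.
graph = defaultdict(set)
for author, text in tweets:
    for word in text.split():
        if word.startswith("@"):
            graph[author].add(word.strip("@:,.!?"))

# In-degree (how many distinct users mention each account) is a crude
# proxy for who is central to the conversation.
in_degree = defaultdict(int)
for author, mentioned in graph.items():
    for user in mentioned:
        in_degree[user] += 1

print(sorted(in_degree.items(), key=lambda kv: -kv[1]))
```

In the tutorial itself, graphs like this would be visualized to reveal the groups taking part in the discussion.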