Alex Kale
Assistant Professor of Computer Science at UChicago.
Uncertainty visualization, data cognition, HCI.
Hello! I am an Assistant Professor in the Department of Computer Science and the Data Science Institute at the
University of Chicago. I create and evaluate tools for helping people think with data, specializing in how
data visualization can be used to support value judgments and reasoning with uncertainty. I lead the University
of Chicago's Data Cognition Lab, where we build analysis software
that enables users to externalize their thinking when grappling with data-driven judgments.
Before starting at UChicago, I earned my PhD at the University of Washington (UW) Information School where I worked with Jessica Hullman. During graduate school, I collaborated on statistical tools with members of the Midwest Uncertainty Collective at Northwestern University CS, the Interactive Data Lab at UW CS&E, and the Interpret ML team at Microsoft Research. I also earned my MS in Information Science at UW in 2020 and my BS in Psychology at UW in 2015.
I create and evaluate software to help people think with data. Visualizations and software often mediate our interactions with data, in part because these media facilitate efficiency in thinking and communication. However, our current approaches to thinking with data often fail to account for the cognitive mechanisms that guide people's interpretations of data, such as heuristics and other dynamic processes of the mind which underlie human judgment and decision making. As a result, tools for reasoning with data leave us open to the failure modes of human cognition, especially in applications involving uncertainty and statistical reasoning. My research aims to address these problems by creating tools that explicitly represent the user's cognitive process and by pursuing a more theoretically grounded and empirically rigorous science behind the design of software tools for data science and visualization.
I am recruiting PhD students and postdoctoral fellows! Prospective research mentees with an interest in building analysis software and studying how people use data visualizations should contact me. Applicants should have excellent written and verbal communication skills in addition to experience with some of the following: software development using JavaScript; data analysis and modeling using R or Python; experimental design; domain knowledge in human-computer interaction, perceptual & cognitive psychology, economics, statistics, or data-intensive systems. You can learn more about my lab by visiting our website.
Here are some representative publications. See my CV and research statement from 2021 to learn more.
EVM: Incorporating Model Checking into Exploratory Visual Analysis
VIS 2023
Alex Kale, Ziyang Guo, Xiao Li Qiao, Jeffrey Heer, and Jessica Hullman
MetaExplorer: Facilitating Reasoning with Epistemic Uncertainty in Meta-analysis
CHI 2023
Alex Kale, Sarah Lee, Terrance Goan, Elizabeth Tipton, and Jessica Hullman
Causal Support: Modeling Causal Inferences with Visualizations
VIS 2021, Honorable Mention Award 🏆
Alex Kale, Yifan Wu, and Jessica Hullman
Visual Reasoning Strategies for Effect Size Judgments and Decisions
VIS 2020, InfoVis Best Paper Award 🏆
Alex Kale, Matthew Kay, and Jessica Hullman
Boba: Authoring and Visualizing Multiverse Analyses
VIS 2020
Yang Liu, Alex Kale, Tim Althoff, and Jeffrey Heer
Adaptation and Learning Priors in Visual Inference
Position Paper
VIS 2019
Alex Kale and Jessica Hullman
Capture & Analysis of Active Reading Behaviors for Interactive Articles on the Web
EuroVis 2019
Matt Conlen, Alex Kale, and Jeffrey Heer
These are my recent talks, in addition to those accompanying conference papers.
University of Wisconsin at Madison 2023 - Seminar on Systematic Review and Meta-Analysis
Summarizing what can be learned from bodies of scientific literature requires difficult judgments about which study results can be meaningfully
compared, and whether it makes sense to aggregate evidence in a meta-analysis. Numerous tools for assessing quality of evidence offer guidance
on identifying sources of epistemic uncertainty, such as common threats to internal or external validity of study results. However, existing
software for systematic review and meta-analysis does little to emphasize how epistemic uncertainty should inform analytic choices for synthesizing
findings. I present MetaExplorer, a prototype web application designed to provide a guided process for reasoning about epistemic uncertainty in
meta-analysis. I also summarize findings from interviews with research synthesis methodologists and practitioners in biomedical science, education,
computer science, and statistics. This work highlights the cognitive pitfalls, technical hurdles, and inconsistent standards across research communities
that pose challenges to addressing epistemic uncertainty in research synthesis. I reflect on opportunities for future software development and invite the
audience to join me in discussion.
University of Chicago Booth School of Business 2023 - Behavioral Science Seminar Series
When visualization researchers and practitioners talk about the value of data visualization, one of the most commonly cited use cases is helping people make informed decisions.
It is often assumed that people making decisions benefit from mere information exposure: that if they can read data off of a chart, they will be able to use that information
for the purpose of decision making. However, relatively little behavioral research on data visualization has applied theories and models from economics to investigate how well
chart users can make utility-optimal decisions when relying on different displays. I present an experiment that investigates how well various forms of uncertainty visualization
support decision making, specifically an incentivized choice about whether to pay for an intervention. In this study, I looked not only at which visualization designs best support
decisions but also how well those same representations support chart users in estimating the effect size of the intervention versus the status quo. Through a combination of
hypothesis testing, descriptive behavioral modeling, and qualitative analysis of users' self-reported chart interpretation strategies, I explain the effectiveness of various
visual representations in terms of the heuristics or visual reasoning strategies that people apply when making decisions with uncertainty visualizations. I discuss how incorporating
economic formalisms and knowledge of visual reasoning strategies into visualization research can help us build data analysis software that is better aligned with natural tendencies
of human judgment and decision making.
SIPS 2022 - Workshop: Multiverse Analyses - Introduction and Applications
Research and data science involve myriad decisions about how to collect, analyze, and report on data. These decisions impact what gets measured
and how it gets interpreted, potentially influencing the downstream conclusions that are drawn from data. For more rigorous analysis, we need
software tools that enable analysts to express a set of possible decisions and skeptically examine how robust their results are to different combinations
of choices. I present Boba, a tool for multiverse analysis created in collaboration with researchers in the University of Washington Interactive Data Lab.
Boba consists of both (1) a domain specific language for authoring multiverse analyses in Python, R, and other scripting languages, and (2) an interactive
visualization tool for exploring results of multiverse analyses which runs in a web browser. In this workshop presentation, I walk through a few example analyses
demonstrating how Boba and tools like it can improve the rigor of data analysis.
SDSS 2020 - User Testing Statistical Graphics
Conventional statistical graphics tend to emphasize point estimates and omit uncertainty information. However, given that research in
visualization, psychology, and behavioral economics shows that people often satisfice (i.e., use heuristics that deviate from the optimal strategy)
when reasoning with uncertainty, users of data visualizations may not recognize or correctly interpret uncertainty. I argue that we need to understand
users' potential reasoning strategies in order to design graphical interfaces that steer users toward more systematic ways of reasoning with uncertainty.
In this talk, I present empirical evidence on how users satisfice, both when reading individual charts and when conducting analysis, and
I discuss ways we are designing statistical graphics and interfaces for data analysis that anticipate users' tendency to satisfice.
In my role as an Assistant Professor in the Department of Computer Science and the Data Science Institute at the University of Chicago, I teach courses on data visualization, introductory programming, and advanced topics in data science.
You can learn more about my teaching by reading my teaching statement from 2021.