Welcome! Research in the Markant Lab examines interactions between learning, memory, and decision making.
How do people take action to navigate uncertain or changing environments?
How do learners monitor and control their own learning experience?
How do learning and memory influence the tendency to take risks or explore new behaviors?
We use behavioral experiments and computational modeling to investigate the cognitive mechanisms involved in effective learning and decision making.
In addition to basic research in cognitive science, the lab explores implications of these processes for learning and decision making in other contexts, including real-world instructional environments and among populations of learners with diverse cognitive abilities.
Learn more about our research by reading the posts below or by browsing our list of publications.
Do people change their beliefs when they see a data visualization like a scatterplot? Communicating science and making evidence-based arguments often involve data visualizations like the one below. It can be tempting to think that “the data speaks for itself”—that it paints a clear, relatively unambiguous picture about the relationship between a set of variables. But the mere presentation of statistical evidence, no matter how strong, does not guarantee that people will change their minds, particularly when they have strong preexisting beliefs that run counter to the data.1
In a new VIS 2020 paper, my colleagues and I examined how people update their beliefs about statistical relationships when viewing scatterplots. We developed some new methods for eliciting beliefs about these relationships and used computational modeling to evaluate the impact of different types of scatterplot visualizations on belief updating. This project adds to a growing number of studies that aim to better understand (and model) how people learn through interactive data visualizations, including cases where such visualizations fail to persuade.
A favorite paper of mine which reviews the many ways people avoid updating their beliefs in the face of contradictory evidence is Chinn and Brewer (1993).
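The basic logic of belief updating can be sketched with a generic Bayesian update over discrete hypotheses. This is an illustrative toy, not the elicitation method or model from the VIS 2020 paper, and the hypothesis names and numbers are made up for the example:

```python
def update_belief(prior, likelihoods):
    """One step of Bayesian belief updating over discrete hypotheses.

    prior: dict mapping hypothesis -> probability.
    likelihoods: dict mapping hypothesis -> P(data | hypothesis).
    Generic sketch, not the model from the paper.
    """
    posterior = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# A viewer who strongly doubts a positive relationship...
belief = {"positive": 0.1, "none": 0.9}
# ...sees a scatterplot that is 5x more likely under "positive":
belief = update_belief(belief, {"positive": 0.5, "none": 0.1})
print(belief)  # belief in "positive" rises, but stays below 0.5
```

Even strong evidence moves this viewer's posterior on "positive" only from 0.10 to about 0.36, which illustrates (in miniature) why a single persuasive scatterplot may fail to flip a strong prior belief.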
It’s well-known in the behavioral sciences that people differ in their attitude toward taking risks. Some individuals are risk-seekers who like to roll the dice, while others are risk-averse because they prefer to play it safe.
Past studies of risk attitudes have typically focused on well-defined choices in which the set of possible actions is predetermined by the researcher and the possible outcomes of each action are known. For example, a person might face a choice between:
Option A (safe): earning $5
Option B (risky): earning a lottery ticket with a 5% chance of winning $100
An individual’s risk preference is thought to influence whether they will go with the safe or risky option in this kind of well-defined decision.
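One common way to formalize this kind of preference is expected utility with a curved utility function. The sketch below uses a simple power utility (an illustrative assumption, not the lab's model) to show how two options with identical expected value can still pull risk-averse and risk-seeking choosers in different directions:

```python
def expected_utility(outcomes, alpha=1.0):
    """Expected utility of a gamble given as (probability, payoff) pairs.

    alpha < 1 models risk aversion, alpha > 1 risk seeking, via a
    simple power utility u(x) = x**alpha. Illustrative sketch only.
    """
    return sum(p * (x ** alpha) for p, x in outcomes)

safe = [(1.0, 5)]                 # Option A: $5 for sure
risky = [(0.95, 0), (0.05, 100)]  # Option B: 5% chance of $100

# Both options have the same expected value ($5)...
print(expected_utility(safe), expected_utility(risky))

# ...but a risk-averse agent (alpha = 0.8) prefers the sure thing:
print(expected_utility(safe, 0.8) > expected_utility(risky, 0.8))  # True
```

Because the expected values are matched, any systematic preference between the two options reflects the chooser's attitude toward risk rather than the payoffs themselves.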
But many real-world choices are ill-defined, in that a set of choice options is never explicitly provided to you. Instead, you have to generate possible courses of action for yourself. Do risk preferences affect how people generate actions in ill-defined, uncertain situations, just as they affect choices between predetermined options?
This is the question behind an ongoing project in the lab (with Meagan Padro and Mitra Mostafavi) and the topic of a poster that will be presented at CogSci 2020. To learn more, click on the image below to get the PDF of the poster. If you attend CogSci, swing by the (virtual) poster session on August 1, 11:00-12:40 EDT!
In some prior work, my colleagues and I have found that active control—being able to dictate the content or pacing of information—leads to enhanced episodic memory for materials experienced during study.12 Let’s say that I’m your instructor and I have a set of definitions on flashcards that I want you to learn. If I give you (the student) more control over the selection and pacing of flashcards, it’s likely that you’ll have better memory later on compared to conditions where you don’t have control.
But as an instructor, I don’t just want my students to memorize a set of independent definitions. I also want them to integrate those concepts together to form some coherent knowledge about the domain. For example, I don’t just want my research methods students to be able to define different types of validity; I also want them to be able to relate them to each other and the broader goals of experimental methods. In contrast to the first goal of forming memories of independent sets of items, it is less clear to what extent having control over learning leads to enhanced integration of study experiences into conceptual knowledge.
The project described in this poster aims to understand the effects of active control on relational knowledge formation. It uses a common relational reasoning task (transitive inference) to disentangle enhancements to memory for individual items from enhanced integrative encoding. The results so far suggest that having control over the selection of items leads to improved integrative encoding over a passive condition lacking such control. Critically, however, the benefit of active control only appears among people with higher working memory capacity. This provides another piece of evidence that the benefits of “active learning” may not be so universal, but instead depend on students having the cognitive resources to maintain and integrate disparate study experiences.
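The structure of a transitive inference task makes it clear why it can separate item memory from integration. Training presents only adjacent "premise" pairs from a hidden ranking, while the critical test pairs are non-adjacent items that can only be answered by combining premises. The sketch below illustrates that task structure in general terms; it is not the lab's exact design:

```python
from itertools import combinations

def transitive_inference_trials(items):
    """Build a transitive inference task over a hidden ranking.

    Training shows only adjacent premise pairs (A>B, B>C, ...);
    test pairs are the non-adjacent pairings, which require
    integrating premises. Illustrative sketch of the task structure.
    """
    premise = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
    # All non-adjacent pairings; the correct choice is always the
    # higher-ranked (earlier) item.
    inference = [(a, b) for a, b in combinations(items, 2)
                 if (a, b) not in premise]
    return premise, inference

premise, inference = transitive_inference_trials(list("ABCDE"))
print(premise)    # premise pairs: A>B, B>C, C>D, D>E
print(inference)  # e.g. ('B', 'D') requires integrating B>C and C>D
```

Good memory for individual premise pairs is enough to answer the training items, but only integrative encoding supports above-chance performance on pairs like B vs. D.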
Click on the image below to get the PDF of the poster:
Imagine that you’re on Amazon trying to decide which of two products to purchase. One way to learn about the options before you buy is to look at reviews from other people who have bought the same items. Each review gives you a glimpse into the relative value of each option, and you can explore each item (i.e., keep on reading reviews) for as long as you like until you feel ready to hit the buy button.
So, how many reviews do you decide to read for each product? How does the variability in the ratings affect how long you explore—for example, is there a difference between seeing a string of solid 4-star reviews as opposed to a mixture of 5-star and 3-star reviews? How does your exploration change when searching for a relatively mundane product like a power adapter as opposed to a major purchase like a computer?1
At Psychonomics this week I’m presenting some new results from a project where we try to understand how people adapt their exploration in response to these kinds of environmental factors, including the variability in the outcomes they experience and the rewards that are at stake. The experiment described in the poster below was designed to test the predictions of a sequential sampling model that accounts for this adaptive exploration and its effects on how people make choices. Sequential sampling models are widely used to model the relationship between choices and response times in a number of decision making tasks2, and our results suggest that similar mechanisms can account for the way that people sample experiences from the environment prior to making a choice.
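The core idea of a sequential sampling account of exploration can be sketched as a random walk: each sampled experience (e.g., one more review) nudges an evidence tally toward one option, and the decision maker commits when the tally crosses a threshold. The sketch below is a generic random-walk model with made-up parameters, not the specific model from the poster:

```python
import random

def sample_until_decision(mean_diff, noise_sd, threshold, rng=random):
    """Random-walk sequential sampling: accumulate noisy evidence
    about which option is better until a decision threshold is hit.

    Returns (choice, n_samples). Illustrative parameters only: higher
    noise_sd (outcome variability) or a higher threshold (caution,
    e.g. a high-stakes purchase) tends to mean more samples.
    """
    evidence = 0.0
    n = 0
    while abs(evidence) < threshold:
        evidence += rng.gauss(mean_diff, noise_sd)  # one sampled experience
        n += 1
    return ("A" if evidence > 0 else "B"), n

random.seed(0)
# Raising the threshold (more caution) lengthens exploration on average:
low = [sample_until_decision(0.2, 1.0, 2.0)[1] for _ in range(2000)]
high = [sample_until_decision(0.2, 1.0, 6.0)[1] for _ in range(2000)]
print(sum(low) / len(low), sum(high) / len(high))
```

In this toy model the threshold plays the role of the stakes (a power adapter vs. a computer), and the noise term plays the role of outcome variability (a string of 4-star reviews vs. a mix of 5s and 3s), capturing in miniature how both factors should stretch or shrink the amount of exploration before choice.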
Click on the image below to get the PDF of the poster:
I’m using product reviews as an illustrative example of experience-based decision making in the wild (our experiment was more stripped down), but some recent research has looked at exactly how sampling review ratings to learn about products impacts choice (see here).
See this recent blog post for a nice introduction to sequential sampling models as applied to choice RT.