The projects listed here are currently active, recently completed, or, in some cases, long dormant (but let us know if you are interested – we often revive ideas from previous research).

Moral cognition and fiction films project. This project, funded by the Templeton Religion Trust, investigates how engagement with fictional characters in films fosters moral understanding. Drawing on psychology, philosophy, and neuroscience, the project explores the concept of cinema’s “reflective afterlife”: the cognitive and emotional processes that continue long after viewing. The first empirical study tests which capacities, such as theory of mind, predict viewers’ engagement with film characters, spontaneous reflection, and moral understanding after viewing. A second study uses EEG to examine neural responses during film viewing, aiming to link specific cognitive patterns to moral reflection and memory. Through this interdisciplinary approach, we aim to bridge the gap between the perceptual experience of cinema and its impact on deeper moral cognition. By combining empirical research with insights from philosophy and film studies, the project seeks to enhance our understanding of how cinematic storytelling can shape moral beliefs and behaviors over time.

Moments: Systems of technological support in nursing. This NSF-funded project is an interdisciplinary collaboration between the Levin Lab and labs within Vanderbilt’s Departments of Engineering and Teaching & Learning, as well as its School of Nursing. The project applies a multimodal learning analytics approach to gain a deeper understanding of nursing students’ experiences in simulation-based education. Leveraging these insights, we are developing AI-powered, technology-enhanced tools to improve students’ overall learning experience and outcomes. Within this broader scope, our lab focuses on several sub-projects, such as investigating how eye gaze can predict students’ cognitive states and performance in clinical simulations.
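As a rough illustration of what gaze-based prediction can look like (this is a sketch, not the project’s actual pipeline), the code below trains a simple classifier on hypothetical per-student gaze features. The feature names, thresholds, and synthetic data are all invented stand-ins for real simulation-lab recordings.

```python
# Hypothetical sketch: predicting simulation performance from summary gaze features.
# Feature names and data are illustrative stand-ins, not the project's real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_students = 80

# Per-student summary features one might derive from an eye tracker:
# mean fixation duration (ms), fixation rate (per second), and the share of
# dwell time on a task-relevant area of interest (e.g., the patient monitor).
X = np.column_stack([
    rng.normal(250, 40, n_students),    # mean_fixation_ms
    rng.normal(3.0, 0.5, n_students),   # fixations_per_sec
    rng.uniform(0.1, 0.6, n_students),  # dwell_share_monitor
])

# Binary outcome: did the student complete the simulated scenario successfully?
# The label here is synthetic, loosely tied to monitor dwell time for demo purposes.
y = (X[:, 2] + rng.normal(0, 0.1, n_students) > 0.35).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```

In practice the interesting work is in the feature engineering (which gaze measures, over which windows, relative to which areas of interest), not in the classifier itself.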

The eye movements in cyberlearning project. Supported by an NSF grant, this project explores how eye movements might help us understand learning from screen-captured instructional videos. In some experiments, we have undergraduates view screen-captured instructional videos while we record their eye movements. We have been analyzing the degree to which eye movements can be used to predict learning from the videos (this turns out to be easier said than done), and we have also been testing for more general relationships between eye movements and *any* between-participant similarities in answers to content questions (this is more doable…). We have also been collecting eye movement data in schools while students interact with a teachable-agent science learning system; this subproject is in collaboration with Gautam Biswas at Vanderbilt’s Institute for Software Integrated Systems. Finally, in collaboration with Duane Watson, we have been exploring the impact of increasing the speed of instructional videos. We have observed that these videos can be sped up considerably with only minimal impact on learning, so you’d think this might be a great way of making them more efficient. The trouble is, people hate the speeded videos, so we have been exploring ways of speeding them that do not lessen user preference. For example, we have created videos that are speeded by cutting back between-phrase pauses and selectively speeding speech that is slow to start with or seems unimportant. More news soon on whether this works!
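To make the pause-trimming idea concrete, here is a minimal sketch of one way it could be implemented on an extracted audio track. This is not our production tooling: the file name, silence threshold, pause cap, and uniform speech rate are all assumptions, and deciding which speech is “slow to start with” or “unimportant” would require an additional model on top of this.

```python
# Hypothetical sketch: speed an instructional audio track by capping
# between-phrase pauses and mildly time-stretching the speech itself.
# File names, thresholds, and rates are illustrative assumptions.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("lecture_audio.wav", sr=None)  # assumed extracted track

MAX_PAUSE_S = 0.25   # cap every silent gap at 250 ms
SPEECH_RATE = 1.15   # mild uniform speed-up for speech segments

# Energy-based segmentation: regions more than top_db below the peak
# are treated as silence; the rest are speech intervals.
speech_intervals = librosa.effects.split(y, top_db=30)

pieces, prev_end = [], 0
max_pause = int(MAX_PAUSE_S * sr)
for start, end in speech_intervals:
    # Keep at most MAX_PAUSE_S of the silence preceding this speech segment.
    pieces.append(y[prev_end:min(start, prev_end + max_pause)])
    # Speed the speech segment itself; a fancier version would stretch
    # slow or unimportant stretches more aggressively than the rest.
    pieces.append(librosa.effects.time_stretch(y[start:end], rate=SPEECH_RATE))
    prev_end = end

sf.write("lecture_audio_speeded.wav", np.concatenate(pieces), sr)
```

Rates above 1.0 shorten the audio; the same segment boundaries could then drive re-timing of the corresponding video frames so picture and sound stay in sync.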

Research exploring dynamic event perception. In this work, we are exploring the representational basis of event perception. In one project, we have been testing people’s ability to perceive reversals during event sequences. We find that these reversals are never detected spontaneously. Even when viewers are purposely looking for them, these events are difficult to detect, and if viewers are asked to do a simultaneous interference task, the reversals are essentially impossible to detect. We are currently working on a manuscript describing these experiments. In a second project, current grad student Lewis Baker is exploring how spatial disruptions affect visual property representations in the context of short films. He finds that disruptions induced by violations of the 180-degree rule, a basic film-editing heuristic, increase property comparisons in working memory. In an in-progress paper, we argue that this reflects the operation of a “spatial trigger” that may serve as one of the perceptual bases of event perception.

Concepts about agency and visual cognition in technological contexts. In this work we have been exploring people’s understanding of different sorts of minds (e.g., people, computers, and robots), and how this understanding affects their interactions with technology. This work has been supported by a grant from the National Science Foundation, and a full description of it can be found here. One of our most important tasks has been to develop a means of measuring basic concepts about agents. To this end, we have developed the behavioral prediction questionnaire, which can assess explicit attributions of agency without requiring respondents to use problematic and abstruse terms such as “goal”, “agent”, or “intention”. Other projects have explored how people demonstrate actions for agents, how children think about agents, and how concepts about agents change during human-robot interaction (HRI).