The projects listed here are either currently active, recently completed, or, perhaps, long dormant (but let us know if you are interested – we often revive ideas from previous research).
The eye movements in cyberlearning project. Supported by an NSF grant, this project explores how eye movements might help us understand learning from screen-captured instructional videos. In some experiments, we have undergraduates view screen-captured instructional videos while we record their eye movements. We have been analyzing the degree to which eye movements can be used to predict learning from the videos (this turns out to be easier said than done), and we have also been testing for more general relationships between eye movements and *any* between-participant similarities in the answers to content questions (this is more doable…). We have also been collecting eye movement data in schools while students interact with a teachable agent science learning system. This subproject is a collaboration with Gautam Biswas at Vanderbilt’s Institute for Software Integrated Systems. Finally, in collaboration with Duane Watson, we have been exploring the impact of increasing the playback speed of instructional videos. We have observed that it is possible to speed these videos considerably with only minimal impact on learning, so you’d think this might be a great way of making them more efficient. The trouble is, people hate the speeded videos, so we have been exploring ways of speeding them that do not lessen user preference. For example, we have created videos that are speeded by cutting back between-phrase pauses and by selectively speeding speech that was slow to begin with, or that seems unimportant. More news soon on whether this works!
The minds and numbers project. In this project we are exploring two basic kinds of cognition in the context of natural events: theory of mind and numerical cognition. Using a narrative film we created specially for this project, we will be collecting brain imaging data while children view a series of number and theory of mind events that occur in the context of a story about a stupid factory (called “Dessert Boxsz!”) where a bunch of young people work at boring jobs. We will be testing the degree to which brain activations during each kind of event are similar to those observed in typical non-narrative lab events, whether individual differences in cognitive skill in each of these domains correlate with brain activations, and how brain activations can reveal the cognitive strategies necessary when events involve both kinds of cognition.
Research exploring dynamic event perception. In this work, we are exploring the representational basis of event perception. In one project, we have been testing people’s ability to perceive reversals during event sequences. We find that these reversals are never detected spontaneously. Even when viewers are purposely looking for them, these events are difficult to detect, and if viewers are asked to do a simultaneous interference task, the reversals are essentially impossible to detect. We are currently working on a manuscript describing these experiments. In a second project, current grad student Lewis Baker is exploring how spatial disruptions affect visual property representations in the context of short films. He finds that disruptions induced by violations of a basic film-editing heuristic, referred to as the 180-degree rule, increase property comparisons in working memory. In an in-progress paper, we argue that this reflects the operation of a “spatial trigger” that may serve as one of the perceptual bases of event perception.
Concepts about agency and visual cognition in technological contexts. In this work we have been exploring people’s understanding of different sorts of minds (e.g. people, computers, and robots), and how this understanding affects their interactions with technology. This work has been supported by a grant from the National Science Foundation, and a full description of it can be found here. One of our most important tasks in this work has been to develop a means of measuring basic concepts about agents, and to this end we have developed the behavioral prediction questionnaire, which can assess explicit attributions of agency without requiring respondents to use problematic and abstruse terms such as “goal”, “agent”, or “intention”. Other projects have explored how people demonstrate actions for agents, how children think about agents, and how concepts about agents change during HRI (human-robot interaction).
Face perception. We have done research exploring how people recognize same-race and cross-race faces, and more recently we have been working with Emma Cohen of Oxford University on a project testing the degree to which basic perceptual distortions in lightness perception may vary cross-culturally.