Concepts About Agency

How do we know what people think about the kinds of intelligence inherent in different agents such as humans, computers, and robots? One way of exploring this question is to ask people whether they think an agent is “intelligent”, has “goals”, or can “think”. However, this simple approach has several drawbacks. Not only is it difficult to know what, exactly, people mean when they use these terms, but people may also rely on different shades of meaning for the same terms when referring to computer intelligence vs. human intelligence. Therefore, our approach has been to explore attributions of agency by asking people to make predictions about the behavior of agents in specific settings.

For example, in one setting, participants are shown the picture in Figure 1a and are asked to imagine that a computer, a person, or a robot has acted upon the Duck. Participants are then shown Figure 1b, which depicts the same two objects with their locations swapped. The question is: what will the agent do now? We hypothesized that participants’ predictions would depend on the degree to which they attribute goals to the agent. An agent with real goals, similar to those a human might have, would act upon the same object, even though it is now in a new location. Conversely, if the agent could not really be said to have goals, participants should predict that the agent would simply reach to the old location, even though it now contains a different object.

We have combined this scenario with several others like it, and found that participants strongly distinguish between computers and people, making many more goal-oriented response predictions for people. More interestingly, we have observed that people make no more goal-oriented predictions for robots than for computers, despite the obviously anthropomorphic appearance of some of the robots. However, when people are asked to focus their attention on a series of actions in which the robot makes choices, they begin to make more goal-oriented predictions for the robots. We have also observed that older participants distinguish less strongly between machines and humans than younger participants do, and we have validated the behavioral prediction scenarios by demonstrating that they are correlated with simpler attributions of “goals”.
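
To make this kind of analysis concrete, the sketch below (in Python) illustrates one way goal-oriented predictions could be tallied per agent type and related to explicit “goals” ratings. All of the data, the 0/1 coding scheme, and the rating scale are illustrative assumptions, not the study’s actual materials or results.

    from statistics import mean, stdev

    # Each trial is coded 1 if the participant predicted the agent would act on the
    # same object in its new location (goal-oriented), 0 if they predicted a reach
    # to the old location (location-oriented). Data are invented for illustration.
    predictions = {
        "person":   [1, 1, 1, 0, 1, 1],
        "robot":    [0, 1, 0, 0, 1, 0],
        "computer": [0, 0, 1, 0, 0, 1],
    }

    for agent, trials in predictions.items():
        print(f"{agent}: proportion of goal-oriented predictions = {mean(trials):.2f}")

    # Validation idea: correlate per-participant goal-oriented scores with explicit
    # ratings of whether the agent "has goals" (hypothetical 1-7 scale).
    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    behavioral_scores = [0.8, 0.2, 0.5, 0.9, 0.1]  # proportion goal-oriented per participant
    explicit_ratings  = [6, 2, 4, 7, 1]            # "has goals" ratings for the same agent

    print(f"correlation between measures: r = {pearson_r(behavioral_scores, explicit_ratings):.2f}")

The sketch only shows the shape of the comparison: agent types are compared on the proportion of goal-oriented predictions, and that behavioral measure is then checked against participants’ direct attributions of “goals”.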

A key element of our current project is to use this behavioral prediction measure of agency attribution to explore how concepts about agents might change with experience. For example, in one recent project we have been measuring the degree to which specific situations invoke cognitive dissonance (i.e., cognitive conflict) about an agent, and have found that dissonance predicts changes in behavioral predictions. In addition, we are exploring how these attributions might be linked with children’s ability to use a computer-based teachable agent system. For more information, see the description of the “Betty’s Brain” project.

References

Levin, D.T., Saylor, M.M., Varakin, D.A., Gordon, S.M., Kawamura, K., & Wilkes, D.M. (2006). Thinking about thinking in computers, robots, and people. Proceedings of the 5th Annual International Conference on Development and Learning, 5, 49.

Levin, D.T., Saylor, M.M., Killingsworth, S.S., Gordon, S., & Kawamura, K. (in review). Tests of concepts about different kinds of minds: Predictions about the behavior of computers, robots, and people.

Levin, D.T., Killingsworth, S.S., & Saylor, M.M. (2008). Concepts about the capabilities of computers and robots: A test of the scope of adults’ theory of mind. Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction. New York, NY.

Levin, D.T., Saylor, M.M., & Lynn, S.D. (in review). Distinguishing first-line defaults and second-line conceptualization in reasoning about humans, robots, and computers.