The AIVAS Lab does research at the intersection of artificial intelligence and cognitive science, in the area of computational cognitive systems. Most of our research involves studying how visual mental imagery contributes to learning and intelligent behavior, both in humans and in AI systems. Many of our research directions were heavily inspired by the writings of Dr. Temple Grandin and other individuals on the autism spectrum.
Our research follows two main pathways. First, we build and study AI systems as a way to understand how people think, both for "neurotypical" individuals and for individuals with atypical cognitive conditions such as autism. Second, we use findings from cognitive science to advance the state of the art in AI, especially by developing new AI techniques that solve complex problems using visual imagery.
What are Visual Analogical Systems?
The term analogical means that something has an organized relationship with something else, like an analogy. In AI and cognitive science, an analogical representation is one that preserves a structural correspondence with the thing it represents. For example, an image of a cat is analogical because the 2D spatial information contained in the image corresponds to what the cat looks like in real life. The word "cat," on the other hand, is not an analogical representation, because it has no such correspondence. Most of the AI systems that we build use visual analogical representations as the core data structures that support learning, problem solving, and other intelligent behaviors.
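One way to make the distinction concrete is a small sketch (a hypothetical illustration, not code from the lab): a tiny 2D grid depicting an L-shape is analogical because spatial operations on the grid correspond to spatial operations on the shape itself, whereas no operation on the characters of a text label has any such correspondence.

```python
# Analogical representation: a tiny binary "image" whose 2D layout
# mirrors the spatial structure of the shape it depicts (an L).
l_shape = [
    [1, 0],
    [1, 0],
    [1, 1],
]

def rotate_cw(grid):
    """Rotate a 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

# A spatial operation on the representation tracks the same operation
# on the depicted object: rotating the grid yields the image of the
# rotated L.
print(rotate_cw(l_shape))   # → [[1, 1, 1], [1, 0, 0]]

# Symbolic (non-analogical) representation: a text label. No operation
# on the label's characters corresponds to any spatial transformation
# of the shape itself.
label = "L-shape"
```

The grid and the label can both stand for the same object, but only the grid supports imagery-like reasoning, such as mentally rotating the shape, directly on the representation.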