Maithilee Kunda is an assistant professor of computer science and computer engineering at Vanderbilt University. Her work in artificial intelligence, in the area of cognitive systems, looks at how visual thinking contributes to learning and intelligent behavior, with a focus on applications for individuals on the autism spectrum. She currently directs Vanderbilt’s Laboratory for Artificial Intelligence and Visual Analogical Systems, and is a founding investigator in Vanderbilt’s Frist Center for Autism and Innovation. She holds a B.S. in mathematics with computer science from MIT and a Ph.D. in computer science from Georgia Tech. In 2016, she was recognized as a visionary on the MIT Technology Review’s global list of 35 Innovators Under 35, and in 2020, her research was featured on CBS 60 Minutes with correspondent Anderson Cooper.
mkunda [at] vanderbilt [dot] edu
2301 Vanderbilt Place
Nashville, TN 37235-1679, USA
Ainooson, J., and Kunda, M. (2017). A computational model for reasoning about the Paper Folding task using visual mental images. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK.
Eliott, F. M., Stassun, K., and Kunda, M. (2017). Visual data exploration: How expert astronomers use flipbook-style visual approaches to understand new data. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK.
Kunda, M., El-Banani, M., and Rehg, J. (2016). A computational exploration of problem-solving strategies and gaze behaviors on the Block Design task. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA.
Kunda, M., and Ting, J. (2016). Looking around the mind’s eye: Attention-based access to visual search templates in working memory. Advances in Cognitive Systems, 4, 113–129.
Kunda, M., McGreggor, K., and Goel, A. K. (2013). A computational model for solving problems from the Raven’s Progressive Matrices intelligence test using iconic visual representations. Cognitive Systems Research, 22–23, 47–66.
Kunda, M., and Goel, A. K. (2011). Thinking in Pictures as a cognitive account of autism. Journal of Autism and Developmental Disorders, 41(9), 1157–1177.
A more complete list of publications can be found here.
Computation and Cognition. Computational approaches to understanding human cognition, including research design and methods for integrating models with theory and observation. Topics include knowledge representation, concept formation, reasoning and search, analogy, mental imagery, and connectionism, as well as multidisciplinary perspectives on mind, brain, behavior, and society. (Previous names: Introduction to Cognitive Science. Taught at Georgia Tech: Summer 2013, Fall 2013, Summer 2015. Taught at Vanderbilt: Spring 2016, Fall 2017.)
Imagery-based Artificial Intelligence. Mathematical and computational techniques for imagery-based artificial intelligence (AI). Topics include imagery-based knowledge representations, imagery-based reasoning and problem solving approaches, and machine learning in imagery-based systems, as well as cognitive science findings related to human visual mental imagery in autism, education, and scientific discovery. (Previous names: Computational Mental Imagery. Previously taught: Fall 2016, Fall 2018.)
Artificial Intelligence. Principles and programming techniques of artificial intelligence. Strategies for searching, representation of knowledge and automatic deduction, learning, and adaptive systems. Survey of applications. (Taught: Fall 2019.)
Advanced Artificial Intelligence. Discussion of state-of-the-art and current research issues in heuristic search, knowledge representation, deduction, and reasoning. Related application areas include: planning systems, qualitative reasoning, cognitive models of human memory, user modeling in ICAI, reasoning with uncertainty, knowledge-based system design, and language comprehension. (To be taught: Spring 2020.)
Introduction to Machine Learning. Fundamentals of machine learning (ML), with a focus on supervised learning and reinforcement learning. Topics include decision trees, neural networks, instance-based learning, boosting, temporal difference learning, and also data privacy, human subjects research protections, and impacts of ML on society. (Catalog name: Projects in AI. Previously taught: Spring 2017, Spring 2018, Spring 2019.)
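Of the topics in that last course, temporal difference learning is compact enough to sketch in a few lines. Below is a minimal, illustrative TD(0) value-estimation example on a toy random-walk chain; the environment, function name, and parameters are all made up for illustration, not course materials:

```python
import random

def td0_value_estimates(num_states=5, episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    """Estimate state values for a simple random-walk chain using TD(0).

    States are 0..num_states-1; each episode starts in the middle and moves
    left or right with equal probability. Stepping off the right end yields
    reward 1, off the left end reward 0; either way the episode ends.
    """
    rng = random.Random(seed)
    v = [0.5] * num_states  # value estimates, initialized at 0.5
    for _ in range(episodes):
        s = num_states // 2
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next < 0:                 # fell off the left end
                r, done = 0.0, True
            elif s_next >= num_states:     # fell off the right end
                r, done = 1.0, True
            else:
                r, done = 0.0, False
            # TD(0) update: nudge v[s] toward the one-step bootstrapped target
            target = r if done else r + gamma * v[s_next]
            v[s] += alpha * (target - v[s])
            if done:
                break
            s = s_next
    return v
```

For this five-state chain the true values are 1/6 through 5/6, so after a couple thousand episodes the estimates should increase from left to right and sit near those values.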
People often ask me for recommendations on how to learn about AI. Here are two resources on machine learning that are good for beginners:
1. Machine Learning by Charles Isbell and Michael Littman
This is a free online course offered through Udacity and Georgia Tech’s online master’s program that I think gives a great conceptual introduction to machine learning. It is divided into three sections: supervised, unsupervised, and reinforcement learning. The lectures are also quite entertaining (Isbell and Littman are practically a two-man comedy team).
2. Neural Networks and Deep Learning by Michael Nielsen
This free, online textbook is a very effective introduction to neural networks, especially (but not only) if you are starting from absolutely zero knowledge about them. If you do go through this book, I strongly encourage doing all of the exercises, including playing around with the Python code.
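If you want a taste of what the book builds toward before diving in, here is a minimal, self-contained sketch (mine, not the book’s) of a single sigmoid neuron trained by plain gradient descent on the quadratic cost to mimic logical AND; all names and parameter choices here are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_and_neuron(epochs=5000, lr=1.0):
    """Train one sigmoid neuron on the AND function with gradient descent."""
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), y in data:
            a = sigmoid(w[0] * x1 + w[1] * x2 + b)  # neuron output
            # gradient of the quadratic cost 0.5 * (a - y)^2
            delta = (a - y) * a * (1 - a)
            w[0] -= lr * delta * x1
            w[1] -= lr * delta * x2
            b -= lr * delta
    return w, b
```

After training, `sigmoid(w[0]*x1 + w[1]*x2 + b)` should be above 0.5 only for the input (1, 1). The book explains, among many other things, why this quadratic cost causes the slow early learning you can observe here, and why the cross-entropy cost fixes it.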
Of course, there is a LOT more to AI than machine learning. (If you are at all surprised by this statement, then you should pay particular attention to this section!) For an insightful window into other areas of AI, I recommend the following:
3. Knowledge-Based AI: Cognitive Systems by Ashok Goel and David Joyner
This is another Udacity class offered by Georgia Tech. (Ashok Goel was my PhD advisor, and David Joyner is one of my PhD “siblings.”) The introduction alone gives an excellent bird’s-eye view of the big “conundrums” that drive AI research, and how different areas of AI attempt to frame and solve these conundrums in different ways. (This was also the course in which one of the TAs was the infamous Jill Watson….)
4. Mind Design II, edited by John Haugeland
Reading this book was one of the most influential intellectual journeys I have ever taken. It starts with Turing’s classic paper on “Computing Machinery and Intelligence,” moves through key ideas from thinkers like Newell & Simon, Dennett, and Searle (the famous Chinese room argument), and continues on to Rumelhart’s visions of connectionist representations. (You can also find many of the individual papers from this collection online, in their original published form.)