Category Archives: News

- Chris Ketchum has poem accepted for publication (Jun 13, 2019)
- Roxanne Rashedi has abstract accepted for oral presentation at AME 2019 (Jun 1, 2019)
- Maithilee Kunda gives invited talk at NAS Colloquium (May 2, 2019)
- Two papers accepted to CogSci 2019 (Apr 11, 2019)
- VU spotlight on Ben Scheer’s work with orangutans (Feb 22, 2019)
- Roxanne Rashedi presents poster at IES PI meeting (Jan 9, 2019)
- Temple Grandin visits our Imagery-Based AI class (Nov 29, 2018)
- Vanderbilt’s new Frist Center for Autism and Innovation (Nov 8, 2018)
- Ellis Brown, Fernanda Eliott, and Xiaotian Wang present work at ACS 2018 (Aug 20, 2018)
- Paper on visual imagery in AI published in journal Cortex (Jul 28, 2018)
- New $1.4M IES grant for teaching social skills to students on the autism spectrum (Jul 1, 2018)
- Tengyu Ma and Sean Cha present Toybox at CVPR workshop (Jun 18, 2018)
- Toybox dataset online! (Jun 15, 2018)
- Two papers accepted to CogSci 2018 (Apr 13, 2018)
- Maithilee Kunda presents lecture at AAAS headquarters (Dec 4, 2017)
Chris Ketchum has had one of his poems accepted for publication in Five Points, a journal of literature and art published by Georgia State University. The poem, titled “Disordering,” will appear in the Winter 2019 issue. Congratulations, Chris!
Our extended abstract, “Reasoning Together: Promoting Mutual Understanding in Technology Design for Individuals with Autism,” has been accepted for oral presentation at the Association for Moral Education’s 45th annual conference. The authors are Roxanne Rashedi and Maithilee Kunda. Congratulations, Roxanne!
Maithilee Kunda gave an invited talk on “Imagery-based AI” at the National Academy of Sciences Sackler Colloquium on “The Brain Produces Mind by Modeling.” The colloquium was organized by Richard Shiffrin, Danielle Bassett, Sophie Deneve, Nikolaus Kriegeskorte, and Josh Tenenbaum. …
We have two papers that will be presented at CogSci 2019 this summer in Montreal, Canada: “Technology-Based Cognitive Enrichment for Animals in Zoos: A Case Study and Lessons Learned,” by Benjamin J. Scheer, Fidel Cano Renteria, and Maithilee Kunda. (See here …
Vanderbilt has just published a nice writeup and video about lab member Ben Scheer, highlighting his work on our “orangutan app” for Zoo Atlanta and many other projects. Way to go, Ben! Read the full story (and see the video) …
Our newest postdoctoral research fellow, Roxanne Rashedi, presented a poster on our new IES-funded project for teaching theory of mind and social skills to adolescents on the autism spectrum at the IES PI meeting in Washington, DC. Poster title: Visual …
We were very excited to host Dr. Temple Grandin at our class on Imagery-Based AI. Dr. Grandin also gave a keynote at a conference on Envisioning the Future of Human-Technology Partnerships (organized in part by Vanderbilt’s new Frist Center for …
We are excited to be a part of Vanderbilt’s newly created Frist Center for Autism and Innovation: https://my.vanderbilt.edu/autismandinnovation/ This new center aims to support and develop the neurodiverse talents of individuals with autism, especially in relation to workforce and employment …
We had a strong showing at this year’s Advances in Cognitive Systems conference, which took place at Stanford University. Ellis Brown gave an oral presentation on our work modeling human visual attention and salience for moving targets. Fernanda Eliott presented …
Our paper surveying computational models of visual mental imagery in different areas of AI research (visuospatial reasoning, language understanding, etc.) has just been published in the journal Cortex, as part of a special issue on “The Eye’s Mind – visual …
The lab has received a new $1.4M grant from the Institute of Education Sciences (IES) to develop an educational technology platform for teaching theory of mind and social skills to middle school students on the autism spectrum. Maithilee Kunda …
Tengyu Ma and Sean Cha presented our newly released Toybox dataset at the 4th CVPR Workshop on Vision Meets Cognition: Functionality, Physics, Intentionality and Causality. The published short paper can be found here. Great job, Sean and Tengyu!
Our Toybox dataset has been posted online: https://aivaslab.github.io/toybox/ This is the culmination of hundreds of hours of effort from many of our lab members. A tip of the hat to Xiaohan Wang, whose ideas catalyzed this work and who led …
We have two papers accepted to the Annual Meeting of the Cognitive Science Society (CogSci): “Shapes in Scatterplots: Comparing Human Visual Impressions and Computational Metrics,” by Joe Eilbert, Zameese Peters, Fernanda Eliott, Keivan Stassun, and Maithilee Kunda (oral presentation, acceptance rate …
Maithilee Kunda was invited to Washington, DC, to deliver the annual winter lecture of the American Association for the Advancement of Science (AAAS) program of Dialogue on Science, Ethics, and Religion (DoSER). The event was titled “Of Minds and …
Lab member Ben Scheer made a virtual reality “music video” experience for a recent single by Nashville musician Kate Tucker. See discussion of Ben’s work at the end of this article: TVD Premiere: Kate Tucker, “In Your Arms” Single and …
Our paper, “Thinking in PolAR Pictures: Using Rotation-Friendly Mental Images to Solve Leiter-R Form Completion,” has been accepted for presentation at the 2018 AAAI conference. The paper is authored by Josh Palmer and Maithilee Kunda. Congratulations, Josh!
Our first paper on our new Egocentric, Manual, Multi-Image (EMMI) dataset was presented by Maithilee Kunda at the Egocentric Perception, Interaction, and Computing (EPIC) workshop at the ICCV computer vision conference in Lido, Italy. Link to abstract and …
Ellis Brown gave an oral presentation about our work on using computational cognitive systems to model human visual information salience at the 2017 national conference of the American Indian Science and Engineering Society (AISES). Congratulations, Ellis! Full citation: Brown, E. …
Maithilee Kunda was invited to be a speaker and panelist for a half-day symposium about AI and society hosted by the American Association for the Advancement of Science (AAAS) program of Dialogue on Science, Ethics, and Religion (DoSER) at a …
We have been awarded a research grant through the National Science Foundation’s Science of Learning program. The grant is titled “Learning Visuospatial Reasoning Skills from Experience” and is led by Maithilee Kunda, in collaboration with Bethany Rittle-Johnson (Psychology & Human …
Zameese Peters gave an oral presentation about his research on “Building a Visual Long Term Memory for Artificial Intelligence Enabled Data Exploration in Astronomy” at the 2017 Leadership Alliance National Symposium in Hartford, Connecticut. Zameese also presented his work at …
Fernanda and James each presented a poster at the Cognitive Science conference in London, UK:
- Fernanda Eliott: Visual data exploration: How expert astronomers use flipbook-style visual approaches to understand new data
- James Ainooson: A computational model for reasoning about the …
Maithilee Kunda is co-PI for the new Vanderbilt Center for Autism and Innovation, funded through a Vanderbilt Trans-Institutional Programs (TIPS) pilot award. The website of the new center can be found at: https://my.vanderbilt.edu/autismandinnovation/ See a writeup of AIVAS Lab contributions …
Binula placed second in the Vanderbilt summer research symposium for high school students, presenting a poster on his research developing a new “Wearable Camera for Analysis of Human Visual Attention.”
Fernanda Eliott served as a panelist for a panel on “Putting Computer Science to Use in the Industry and University Settings” at the STEM Consortium’s 2017 STEM Think Tank and Conference in Nashville, TN.
With Maithilee Kunda as PI, the lab has received one of Vanderbilt’s Discovery Grants for 2017. The project is titled “New Explorations in Visual Object Recognition.” This research has been led by lab member Xiaohan Wang. The full …
Maithilee Kunda was an invited speaker at the Workshop on Egocentric Vision: From Science to Real-World Applications, held in Bloomington, Indiana. The title of her talk was “Looking and thinking: What wearable cameras can reveal about visual mental imagery.” The …
Fernanda Eliott attended the inaugural LATTICE symposium in Seattle, WA, for “Launching Academics on the Tenure-Track: An Intentional Community in Engineering.”
We have had two papers accepted for poster presentation at this year’s CogSci conference, led by James Ainooson and Fernanda Eliott, respectively. The papers are: Ainooson, J., and Kunda, M. (2017). A computational model for reasoning about the Paper Folding …
Maithilee Kunda, together with colleagues Isabelle Soulières (University of Quebec at Montreal), Agata Rozga (Georgia Tech), and Ashok Goel (Georgia Tech), has published an article in the journal Intelligence, titled “Error patterns on the Raven’s Standard Progressive Matrices Test.” https://doi.org/10.1016/j.intell.2016.09.004 …
Noel Warford presented a poster on his research on “A New Test of Visual and Verbal Thinking” at the Oberlin Celebration of Undergraduate Research.
Maithilee Kunda gave a short talk about her research in AI and visual thinking at the MIT Tech Review’s Emtech conference. A video of the talk can be seen here: http://events.technologyreview.com/video/watch/maithilee-kunda-vanderbilt-innovator/
Maithilee Kunda is a co-PI on a collaborative NSF INCLUDES project, led by Overtoun Jenda at Auburn University, titled, “South East Alliance for Persons with Disabilities in STEM (SEAPD-STEM).”
Maithilee Kunda was named to the MIT Tech Review’s annual list of 35 Innovators Under 35, in the category of “Visionary,” based on her research on visual thinking in AI that is inspired by studies of visual thinking in autism. For …
A great end-of-summer lab outing to Grand Old Golf. We saw examples of setting the learning rate too high, one-shot learning, optimization through quantum tunneling, and even deep networks!
Maithilee Kunda will be teaching a new graduate course at Vanderbilt this fall called “Computational Mental Imagery.” Catalog description: Computational basis of visual mental imagery in human cognition and in artificial intelligence (AI) systems. Topics include knowledge representations and operations …
Mohamed El-Banani is presenting a poster at this year’s Cognitive Science conference in Philadelphia this week. The title of the research paper is “A Computational Exploration of Problem-Solving Strategies and Gaze Behaviors on the Block Design Task.” (The full paper …
Xiaohan Wang is attending the 15th Neural Computation and Psychology Workshop in Philadelphia this week. The title of this year’s workshop is “Contemporary Neural Network Models: Machine Learning, Artificial Intelligence, and Cognition.”
Brandt Plomaritis is presenting a research poster at the Vanderbilt Student Research Summer Symposium. The title of the poster is “Random Walks as a Model of Exposure to Examples during Infant Learning.”
A research paper co-authored by Maithilee Kunda and Julia Ting has just been published in the journal Advances in Cognitive Systems. The title of the article is “Looking Around the Mind’s Eye: Attention-Based Access to Visual Search Templates in Working …
The first official lab meeting of the summer is taken outdoors to the much-loved SATCO (San Antonio Taco Company).
Maithilee Kunda is organizing an event called “Neuro-diverse: A Symposium on Autism, Neuroscience, and Perceptual Thinking.” The symposium will be held today on the Vanderbilt campus. More information, including invited speakers, program, time, and location, can be found here.
Maithilee Kunda is at the University of East Anglia, Norwich, UK, this week to speak at a conference called The Eye’s Mind: Visual Imagination, Neuroscience and the Humanities. The title of her talk is “Visual imagination – A view from …
Work by Maithilee Kunda, Mohamed El-Banani, and Jim Rehg has been accepted for poster presentation at CogSci 2016. The title of the paper is, “A Computational Exploration of Problem-Solving Strategies and Gaze Behaviors on the Block Design Task.”
Maithilee Kunda has joined Vanderbilt as a new assistant professor of computer science and computer engineering in the EECS Department.