AI Cognition Lab (AICL): How Do AIs Think?

The AI Cognition Lab (AICL) is a group of collaborating researchers and educators who strive to understand, and to convey, how AIs think and how AIs can be designed to think better. Our focus has been on characterizing generative AI “thinking”, with an eye towards combining this reflexive AI (as in reflex; analogous to subconscious, System 1 human thought) with deliberative AI (analogous to conscious, System 2 thought) for AI that is more robust, reliable, and safe.

Recent AICL work on generative (reflexive) AI

  • Roberts, J., Moore, K., Wilenzick, D., & Fisher, D. (2024). Using Artificial Populations to Study Psychological Phenomena in Neural Models. In the 38th AAAI Conference on Artificial Intelligence (AAAI-24). (earlier draft on arXiv).
  • Moore, K., Roberts, J., Pham, T., Ewaleifoh, O., & Fisher, D. (2024). The Base-Rate Effect on LLM Benchmark Performance: Disambiguating Test-Taking Strategies from Benchmark Performance. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Miami, Florida. (arXiv preprint).
  • Roberts, J., Moore, K., Pham, T., Ewaleifoh, O., & Fisher, D. (2024). Large Language Model Recall Uncertainty is Modulated by the Fan Effect. In SIGNLL Conference on Computational Natural Language Learning (CoNLL), Miami, Florida. (arXiv preprint).
  • Roberts, J., Moore, K., & Fisher, D. (2025). Do Large Language Models Learn Human-Like Strategic Preferences? In ACL 2025 Workshop for Research on Agent Language Models (REALM), Vienna, Austria, July 31, 2025. (arXiv preprint).
  • Moore, K., Roberts, J., Pham, T., & Fisher, D. (2025, in press). Chain of Thought Still Thinks Fast: APriCoT Helps with Thinking Slow. In Proceedings of CogSci 2025. (arXiv preprint).

Early foundational work on deliberative AI cognition and cognitive modeling