Description
We are building a foundation model of human cognition, extending our earlier work [1]. To this end, we have already collected a large-scale data set by transcribing nearly 200 psychological experiments into a text-based form. This data set reaches an unprecedented scale, comprising nearly 20 million human choices from about 300,000 participants and covering domains such as decision-making, planning, problem-solving, memory, and reasoning. We now want to finetune LLMs on this data set to create a foundation model of human cognition. Ideally, the resulting model should not only simulate, predict, and explain human behavior in a single domain but also offer a truly unified account of the human mind. The potential applications of such a model are immense, ranging from (1) human-in-the-loop studies and (2) the generation of new hypotheses about information processing in the brain to (3) the prototyping of behavioral experiments and (4) the automated discovery of cognitive theories.
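As a rough sketch of the finetuning step, the snippet below shows standard causal language model finetuning with LoRA adapters via the Hugging Face transformers and peft libraries. The data file name, base model, and hyperparameters are illustrative placeholders, not the actual pipeline; it assumes the transcribed experiments are stored as plain-text prompts in a JSONL file.

```python
# Minimal sketch (not the actual pipeline) of finetuning an LLM on
# text-transcribed experiment data with LoRA adapters.
# Assumptions: transcripts live in a JSONL file with a "text" field, and a
# LLaMA-style base model is available; all names below are placeholders.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers lack a pad token

# Wrap the base model with low-rank adapters so only a small fraction of
# parameters is trained.
model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
)

# Each record is one experiment transcript, e.g.
# "You are in a two-armed bandit task. ... You press <<B>>."
dataset = load_dataset("json", data_files="transcripts.jsonl")["train"]
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

# Standard next-token prediction objective over the transcripts.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cognitive-fm", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Parameter-efficient adapters of this kind would keep the cost of finetuning on nearly 20 million choices manageable while leaving the base model's weights untouched.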
[1] Binz, M., & Schulz, E. (2023). Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917.