Theory of Mind and Artificial Intelligence

Jung Hwan Kim
4 min read · Jun 23, 2022

The sense of reality is cultivated from an early age. Infants learn to be aware of their own sensations and develop the ability to differentiate between what goes on internally and what goes on externally. In reality testing, the ego separates the internal world from the external world, acknowledging that its fears, hopes, and imaginings are internal. These abilities extend to understanding how others think and behave. Understanding that others have their own emotions, beliefs, and desires is termed theory of mind. Theory of mind develops in stages: a child first understands that others have different desires and beliefs, and later understands that others can hold diverse views of the same subject.

In the field of Computer Science, there have been attempts to replicate the cognitive abilities of human beings, namely with Artificial Intelligence. Implementations of cognitive abilities in Artificial Intelligence can be divided into ‘cold’ cognition and ‘hot’ cognition. Cold cognition refers to processes that require little emotion, enabling tasks such as image classification or object detection. Hot cognition involves emotional and social capabilities, including theory of mind. Demand for hot cognition in machines has grown with the application of Artificial Intelligence in daily life, as people become more reliant on devices, such as autonomous cars, to make decisions.

Theory of mind allows one to understand others by attributing mental states to them: their desires, intentions, and beliefs. Such capabilities are needed for Artificial Intelligence to be safely deployed in real-life situations and to further enhance human-computer interaction.

The paper ‘Knowing me, knowing you: theory of mind in AI’ highlights how the intents of other agents are crucial for Artificial Intelligence (Cuzzolin et al., 2020). The paper gives an example: “children walking on the pavement towards school may spot an ice cream van across the road and decide to dart across the road to get their ice cream. No predictive system functioning purely on past observed motion could be accurate and trustworthy enough in such complex environments, without considering context and the nature of the other agents involved.” The example outlines the importance of hot cognition for Artificial Intelligence: understanding the intentions and desires of other agents will lead to a more reliable, safer autonomous vehicle. The question remains, though: can computers cultivate theory of mind, or hot cognition?
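The ice cream van example can be sketched in code. The following is a toy illustration, not an implementation from the paper: both the scenario's coordinates and the blending rule are hypothetical. It contrasts a predictor that only extrapolates past motion with one that also conditions on an inferred goal of the other agent.

```python
def extrapolate(positions):
    """Constant-velocity prediction from the last two observed positions."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

def intent_aware_predict(positions, goal=None):
    """If a likely goal (e.g. the ice cream van) has been inferred, bias the
    prediction toward it; otherwise fall back to pure motion extrapolation."""
    naive = extrapolate(positions)
    if goal is None:
        return naive
    # Hypothetical blending rule: average the motion estimate with the goal.
    nx, ny = naive
    gx, gy = goal
    return ((nx + gx) / 2, (ny + gy) / 2)

# A child walking along the pavement with a constant heading...
path = [(0.0, 0.0), (1.0, 0.0)]
print(extrapolate(path))                            # (2.0, 0.0): keeps walking straight
print(intent_aware_predict(path, goal=(2.0, 4.0)))  # (2.0, 2.0): veering toward the van
```

The motion-only predictor can never anticipate the dart across the road, no matter how accurately it tracks the past trajectory; only the version that attributes a desire to the child shifts its forecast. The hard part, of course, is inferring the goal in the first place, which is precisely the theory-of-mind capability the paper argues for.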

In order to understand others’ desires and intentions, one first acknowledges one’s own desires and intentions, and then attempts to simulate others’ mental processes. This capacity accumulates over years of experience. As in the example above, one may recall a time when one’s younger brother darted across the road in haste, and so be prepared for abrupt movements.

Hubert L. Dreyfus argues that simulating such cognitive processes on computers has limitations (1965). Dreyfus claims that for devices to have human-like intelligence, they would have to accumulate it through a human-like ‘being in the world’. Thus, machines would require physical bodies like our own and social interactions just like those of human beings. This argument aligns with theory of mind and embodied cognition, where psychologists hold that boundaries between internal and external states are necessary for the development of an agent’s cognitive capacity (Shapiro & Spaulding, 2021).

Here, the internal state is what lies within the body, and the external state is the surrounding environment. Without a physical body or the social interactions necessary to acquire hot cognition, a computer’s understanding of intentions and desires would remain limited.

References:

[1] Cuzzolin, F., Morelli, A., Cîrstea, B., & Sahakian, B. J. (2020). Knowing me, knowing you: theory of mind in AI. Psychological Medicine, 50(7), 1057–1061. https://doi.org/10.1017/S0033291720000835

[2] Dreyfus, Hubert L., Alchemy and Artificial Intelligence. Santa Monica, CA: RAND Corporation, 1965. https://www.rand.org/pubs/papers/P3244.html. Also available in print form.

[3] Shapiro, Lawrence, and Shannon Spaulding, “Embodied Cognition”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/win2021/entries/embodied-cognition/>.
