Towards multi-agent emergent communication as a building block of human-centric AI.
The ability to cooperate through language is a defining feature of humans. As the perceptual, motor, and planning capabilities of deep artificial networks increase, researchers are studying whether they, too, can develop a shared language to interact. In this talk, I will highlight recent advances in this field, but also common headaches (or perhaps limitations) with respect to the experimental setup and evaluation of emergent communication. Towards making multi-agent communication a building block of human-centric AI, and drawing on my own recent work, I will discuss approaches to making emergent communication relevant for human-agent communication in natural language.
Angeliki Lazaridou is a senior research scientist at DeepMind. She obtained her PhD from the University of Trento, where she worked on predictive grounded language learning. Currently, she is working on interactive methods for language learning that rely on multi-agent communication as a means of reducing reliance on supervised language data.