Plenary and Invited Talks

Embodied AI: Beyond ChatGPT

The success of large language models such as ChatGPT demonstrates the remarkable power of learning systems built on big data and large-scale computing. However, current large AI models lack grounding of their knowledge in the real world, which leads to side effects such as hallucination. We argue that, to achieve genuine human-level intelligence, an AI agent must be embodied and embedded in the real world so that it can acquire grounded knowledge through sensorimotor interaction. An embodied agent can also learn fully autonomously, taking physical actions and receiving self-feedback through sensory perception. Embodiment combined with ChatGPT also allows natural language commands to be executed physically, opening a whole new world of applications beyond the generation of mere text and images.

Byoung-Tak Zhang is POSCO Chair Professor of Computer Science and Engineering and Director of the AI Institute, Seoul National University (SNU). He has served as President of the Korean Society for Artificial Intelligence (2010-2013) and of the Korean Society for Cognitive Science (2016-2017). He received his PhD (Dr. rer. nat.) in computer science from the University of Bonn, Germany, in 1992, and his BS and MS in computer science and engineering from Seoul National University, Korea, in 1986 and 1988, respectively.

Before joining Seoul National University in 1997, he worked as a Research Fellow at the German National Research Center for Information Technology (GMD, now part of the Fraunhofer Institutes) in Sankt Augustin/Bonn from 1992 to 1995.

He has been a Visiting Professor at MIT CSAIL and the Department of Brain and Cognitive Sciences, Cambridge, MA, in 2003-2004; at the Samsung Advanced Institute of Technology (SAIT) in 2007-2008; at the BMBF Excellence Centers for Cognitive Technical Systems (CoTeSys, Munich) and Cognitive Interaction Technology (CITEC, Bielefeld) in the winter of 2010-2011; and at the Princeton Neuroscience Institute (PNI) in 2013-2014.

He currently serves as Associate Editor of the Journal of Cognitive Science, Applied Intelligence, and BioSystems, and previously served as Associate Editor of the IEEE Transactions on Evolutionary Computation (1997-2010).

Perceptrons Revisited

The perceptron model has endured as the basic building block of state-of-the-art neural networks for object classification, segmentation, scene understanding, and multimodal representation. How are the representations of sensory input signals transformed by deep neural networks? I show how statistical insights can be gained by analyzing the high-dimensional geometric structure of these representations as they are reformatted across neural network hierarchies. The perceptron model itself can be derived from an optimality principle: consider a binary neuron, with two states representing firing and quiescence, embedded in a feedback environment and adapting to regulate a discounted quadratic cost over time. This framework can be generalized to larger discount factors, with representative solutions derived for state spaces of varying dimension.
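As background for readers unfamiliar with the building block the talk revisits, the classic Rosenblatt perceptron is a binary threshold neuron trained with a mistake-driven update rule. The following is a minimal illustrative sketch (it is not the speaker's optimality-principle derivation), shown on a toy linearly separable problem:

```python
import numpy as np

# Classic perceptron: a binary neuron that "fires" (outputs 1) when the
# weighted sum of its inputs plus a bias exceeds zero.
def perceptron_train(X, y, epochs=20, lr=1.0):
    """Learn weights w and bias b so that the thresholded output matches y in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Update only on mistakes: nudge the decision boundary toward/away from xi
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# Toy linearly separable data: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # converges to the AND function: [0, 0, 0, 1]
```

For linearly separable data such as this, the perceptron convergence theorem guarantees that the mistake-driven updates terminate with a separating hyperplane.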

Daniel D. Lee has been a Professor of Electrical and Computer Engineering at Cornell Tech and Cornell University, New York, NY, USA.

Towards an Understanding of Information Processing Mechanisms in the Human Brain

Professor Ichiro Kobayashi
Department of Information Sciences, Ochanomizu University

Dr. Ichiro Kobayashi has been a Professor of Advanced Sciences at Ochanomizu University since 2011. He has also been an invited researcher at the Artificial Intelligence Research Center (AIRC) of the National Institute of Advanced Industrial Science and Technology (AIST) since 2017, and a visiting researcher at the Center for Information and Neural Networks (CiNet) of the National Institute of Information and Communications Technology (NICT) since 2023.

His research interests include developing computational intelligence that can think and reason in language, and he works on neuroscience for human language use in the cognitive world and robotics for language use in the real world. He specializes in artificial intelligence, natural language processing, machine learning, and functional linguistics.

He received his Ph.D. from the Tokyo Institute of Technology in 1995. He was a research assistant professor at the Faculty of Economics, Hosei University, in 1995, became an associate professor there in 1996, and became an associate professor at the Faculty of Sciences, Ochanomizu University, in 2003. He was a visiting researcher at the Brain Science Institute, RIKEN, from 2000 to 2005, at the German Research Center for Artificial Intelligence in 2007, and at the Center for the Study of Language and Information (CSLI) of Stanford University from 2007 to 2008.

In recent years, convolutional neural networks have been found to process visual information in a manner homologous to the mammalian brain, and this has led to a growing number of attempts to elucidate the mechanisms of human brain activity with the aid of deep learning models. In this plenary talk, Dr. Kobayashi will introduce the decoding of information in the brain and the investigation of the brain's information processing mechanisms using deep learning models. For the former, he will present a method for decoding brain activity under visual stimuli into language and a method for reconstructing images. For the latter, he will examine the localization and representation of information in the human brain under visual and verbal stimuli to investigate the relationship between these modalities, as well as how time and emotion are processed in the human brain.
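A common ingredient in such decoding studies is a linear model that maps measured brain responses to the feature space of a deep network, after which a stimulus can be identified by matching predicted features against candidates. The sketch below illustrates this idea with ridge regression on purely synthetic data; the shapes, noise level, and regularization strength are illustrative assumptions, not details of the speaker's actual pipeline:

```python
import numpy as np

# Hedged sketch of a linear decoding model: predict deep-network image
# features from (synthetic) voxel responses via ridge regression, then
# identify each stimulus by correlating predicted and true features.
rng = np.random.default_rng(0)
n_stim, n_voxels, n_feat = 100, 500, 64

F = rng.standard_normal((n_stim, n_feat))           # DNN features per stimulus
W_true = rng.standard_normal((n_feat, n_voxels))    # unknown feature-to-voxel map
V = F @ W_true + 0.5 * rng.standard_normal((n_stim, n_voxels))  # noisy "responses"

# Ridge regression from voxels to features, closed form: (V^T V + lam I)^-1 V^T F
lam = 10.0
B = np.linalg.solve(V.T @ V + lam * np.eye(n_voxels), V.T @ F)
F_pred = V @ B

# Identification: each predicted feature vector should correlate most strongly
# with the true features of its own stimulus.
corr = np.corrcoef(F_pred, F)[:n_stim, n_stim:]
acc = np.mean(np.argmax(corr, axis=1) == np.arange(n_stim))
print(f"identification accuracy: {acc:.2f}")
```

In practice such models are fit on held-out data with cross-validated regularization; this in-sample toy only shows the mechanics of the voxels-to-features mapping.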