Scientists and tech companies are on a quest to build AI with something closer to the general intelligence of humans. Large language models (LLMs), which power the likes of ChatGPT, can seem human-like, but they work in very different ways to the beings that created them. On the road to superintelligence, how can modern AI models be made better at reasoning and at understanding the world? Are LLMs the right technology to pursue? Or do scientists need to get more creative?
This is the final episode in our two-part series on artificial general intelligence. Last week, we tried to pin down a slippery concept. This week: the technological and ethical challenges that must be solved to build truly human-like AI models.
Host: Alok Jha, The Economist’s science and technology editor. Contributors: Steven Pinker of Harvard University; Gary Marcus, professor emeritus at New York University; Yoshua Bengio of the University of Montréal; and The Economist’s Abby Bertics.
Transcripts of our podcasts are available via economist.com/podcasts.
Listen to what matters most, from global politics and business to science and technology—subscribe to Economist Podcasts+.
For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.