Despite its dazzling power, ChatGPT, the famous AI that has been making headlines in recent months, does not understand what it is saying. A group of highly reputable researchers is convinced that giving it access to meaning will require appealing to the concept of the “topos”, invented in the 1960s.
Hervé Poirier, editor-in-chief of the scientific magazine Epsiloon, talks to us today about one of the most mind-bending mathematical concepts, which could make artificial intelligences genuinely intelligent.
franceinfo: What is this story?
Hervé Poirier: You have of course heard of ChatGPT, this AI that can discuss any subject, and do your homework for you. Very impressive, of course, but let’s remember: this algorithm understands nothing of what it is doing. The same goes for the more powerful version, GPT-4: these artificial neural networks that are revolutionizing automated assistance have no access to the meaning of things. ChatGPT can talk to you for hours about zoology, but it doesn’t know what a cat is.
And yet, all the experts are convinced of this: access to meaning, semantics, plays out in the way the connections between neurons are organized. And one written proposal made our heads spin. The director of the advanced technologies laboratory at Huawei France and two eminent mathematicians are wondering whether the solution might come from the deepest and most abstract concept ever imagined: the “topos”.
The concept was forged by Alexandre Grothendieck, the greatest mathematical genius of the 20th century, who died alone in 2014 in a small town in Ariège. For him, it was the culmination of his work, the quintessence of his thought. The topos, roughly speaking, translates geometric questions into a more familiar language, that of set theory. This makes it possible to generalize the notion of space well beyond intuition: to do geometry, for example, on the whole numbers, even though this set consists only of isolated points (1, 2, 3…). Even for a professional mathematician, this very dense concept is not easy to grasp.
And it could be used for AI?
That’s the idea, yes. With the notion of topos, we could understand how information is organized in the silicon network, or better: structure it in such a way as to give it access to the meaning of things. The idea remains speculative. The experts we interviewed are waiting to see. But the vertigo is there: what if the topos could revolutionize the most disruptive of technologies? What if an AI could grasp what a cat is, thanks to the most abstract of all mathematical concepts? Dizzying, isn’t it?