Chatbots: Still Dumb After All These Years
Gary Smith: In 1970, Marvin Minsky, recipient of the Turing Award ("the Nobel Prize of Computing"), predicted that within "three to eight years we will have a machine with the general intelligence of an average human being." Fifty-two years later, we're still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. Blaise Aguera y Arcas, the head of Google's AI group in Seattle, recently argued that although large language models (LLMs) may be driven by statistics, "statistics do amount to understanding." As evidence, he offers several snippets of conversation with Google's state-of-the-art chatbot LaMDA. The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM's ability to be "a fluent spouter of bullshit" and what Timnit Gebru and three co-authors called "stochastic parrots."