Why It'll Be Hard To Tell If AI Ever Becomes Conscious
People have long told stories about bringing inanimate objects to life, and hucksters have long passed off tricks and gimmicks as "magic." But the dream of imbuing machines with humanlike consciousness has never been reconciled with reality.
Creating conscious artificial intelligence systems is the dream of many technologists. Large language models are the latest step in our quest for intelligent machines, and some people claim (contentiously) to see glimmers of consciousness in conversations with them. The truth is that machine consciousness is a hotly contested topic. Many experts say it is doomed to remain science fiction, while others argue it is just around the corner.
For MIT Technology Review, neuroscientist Grace Huckins explores what the study of human consciousness can teach us about AI consciousness, and the moral dilemmas it raises. Read more here.
We don't fully understand human consciousness, but neuroscientists do have some clues about how it manifests in the brain, Grace says. AI systems, obviously, do not have brains, so traditional methods of looking for signatures of consciousness in brain activity can't be used. And neuroscientists have many different theories about what consciousness in an AI system might look like. Some treat it as a feature of the brain's "software," while others tie it more directly to the physical hardware.
There have even been attempts to design tests for artificial intelligence consciousness. Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University, and Edwin Turner, a Princeton physicist, developed one that requires an AI agent to be kept away from any information about consciousness before it is tested. This step matters: it prevents the agent from simply parroting human statements about consciousness picked up during training, the way a large language model might.
The evaluator then asks the AI questions it could only answer if it were itself conscious. Could it understand the plot of Freaky Friday, in which a mother and daughter swap bodies, their minds separated from their physical selves? Could it grasp the concept of dreaming, or even report dreaming itself? Could it conceive of reincarnation or an afterlife?
Of course, this test is not perfect. It requires the subject to be able to use language, so babies and animals, presumably conscious beings, could never pass it. And language-based AI models will have been exposed to the concept of consciousness in the vast amounts of internet data they are trained on.
So how would we know if an AI system were conscious? A group of neuroscientists, philosophers, and AI researchers, including Turing Award winner Yoshua Bengio, has published a white paper that draws on theories from various fields to propose practical ways to detect consciousness in AI. If those theories hold true, the paper offers a kind of report card: a set of indicator properties, such as flexible goal pursuit and interaction with an external environment, that could suggest consciousness in AI. It's unclear whether any current system ticks these boxes, or whether any ever will.
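To make the "report card" idea concrete, here is a purely hypothetical sketch that treats the rubric as a simple checklist. Only the first two indicator names come from the text above; the third, and the scoring logic, are invented for illustration and are not how the white paper's authors actually evaluate systems.

```python
# Hypothetical sketch: a consciousness "report card" as a checklist.
# Indicator names marked "invented" are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    present: bool  # whether the system under evaluation shows this property

def report_card(indicators: list[Indicator]) -> str:
    met = [i.name for i in indicators if i.present]
    return f"{len(met)}/{len(indicators)} indicators met: {met if met else 'none'}"

rubric = [
    Indicator("flexible goal pursuit", False),                     # mentioned in the text
    Indicator("interaction with an external environment", False),  # mentioned in the text
    Indicator("integration of information over time", False),      # invented placeholder
]
print(report_card(rubric))  # -> "0/3 indicators met: none"
```

Even in this toy form, the design choice is visible: the rubric yields graded evidence (how many boxes are ticked), not a binary verdict that a system is or is not conscious.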
Here's what we do know. Large language models are extremely good at predicting what the next word in a sentence should be. They are also excellent at making connections, sometimes in ways that surprise us and make it easy to believe these programs might have something more going on. But we know very little about the inner workings of AI language models. Until we understand more about exactly how and why these systems arrive at their outputs, it's hard to say those outputs are anything more than fancy math.
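To make the "fancy math" concrete, here is a minimal sketch, using the freely available GPT-2 model and the Hugging Face transformers library (neither is mentioned in the piece), of what next-word prediction actually is: the model outputs nothing but a probability distribution over possible next tokens.

```python
# Minimal sketch of next-word prediction with GPT-2 (assumes the
# `transformers` and `torch` packages are installed).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire "opinion" about what comes next is this distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
```

Running this prints the five most likely continuations and their probabilities; whatever glimmers of understanding a conversation seems to show, this distribution over tokens is all the model ever produces.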