Probably not, but models like Google’s PaLM are doing inexplicable stuff
Last summer, a Google engineer went public about his belief that the chatbot LaMDA might be sentient. This had the internet buzzing.
A mainstay in science fiction, sentient machines have an immeasurably large capacity to fascinate the public. Perhaps it’s our tendency to anthropomorphize everything we see. Or perhaps it’s the fear of being controlled by some entity that is more powerful than us. Or a mixture of both.
LaMDA’s chat responses left many people puzzled. After training on vast amounts of data scraped from the internet, a chatbot managed to talk about poetry, feelings, and its fear of death.
It just so happens that people speak about poetry, feelings, and fear of death on the internet. Staggering though LaMDA’s answers might seem, when you google them you soon realize that they’re just a Frankensteinian mix-and-match of various blogs, forums, and other text documents.
Already, large language models are at the core of products like Project December, software that can imitate your loved ones even after their death. But they’re just simulations. All they do is guess which words might come next, given some input. No consciousness to be seen here.
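To make “guess which words might come next” concrete, here is a toy sketch in Python. The corpus and the bigram counting are illustrative stand-ins of my own; real models like LaMDA use transformer networks with billions of learned parameters, not word counts:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast amounts of text scraped from the internet.
corpus = "i fear death . i love poetry . i fear the dark .".split()

# Count which word follows which: a crude bigram stand-in for the
# transformer networks behind LaMDA and PaLM.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word given the previous one."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))     # -> "fear" (it follows "i" most often)
print(predict_next("fear"))  # -> "death" ("death" and "the" are tied; first seen wins)
```

A real model conditions on the whole preceding context rather than one word, but the core operation is the same: pick a plausible next token, over and over.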
LaMDA, a chatbot with 137 billion parameters, was one of the frontrunners of its time. The word “was” is worth stressing here, because Google had already created an even larger model before the summer’s end.
Google’s PaLM has 540 billion parameters and is much more than a chatbot. Partly because of the sheer number of parameters, it can solve many different problems, even ones it hasn’t been specifically trained for. It’s one of the most significant steps toward artificial general intelligence in recent years.
As The Atlantic reports, “if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes.”
LaMDA was just a chatbot. All it did was chat. PaLM can do so much more than LaMDA; it seems almost unfair that it has received barely a fraction of the press coverage that LaMDA did.
And then there’s the thing that baffled Google’s researchers: PaLM can reason. When you ask PaLM a question, it doesn’t just spit out the answer. It also tells you why that answer is correct. The following question-and-answer, provided by Google, illustrates the point:
Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Answer (old AI model): The answer is 11.
Answer (PaLM): Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
To date, researchers can’t fully explain why PaLM is able to provide reasons. One plausible hypothesis is that the training dataset contained a certain number of reasoning-style answers, and PaLM inferred how to reason from those. And this question-and-answer was not a singular case; here’s another example:
Question: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
Answer (old AI model): The answer is 27. [This is incorrect.]
Answer (PaLM): The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.
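The format of these answers will look familiar to anyone who has read about what researchers call chain-of-thought prompting: the model is shown a worked example whose answer spells out the intermediate steps, and it continues new questions in the same style. Here is a minimal sketch of such a prompt, built from the two examples above; the generate call is a hypothetical stand-in, since PaLM has no public API like this:

```python
# A few-shot, chain-of-thought style prompt: a worked example whose answer
# spells out the intermediate steps, followed by the new question.
prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""

# Hypothetical model call; shown only to illustrate the technique.
# answer = generate(prompt)  # expected to continue with step-by-step reasoning
print(prompt)
```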
To be clear, this is not proof of consciousness. Far from it.
Philosophically, general intelligence does seem tied to consciousness in some way. If we define consciousness as the capacity for subjective experience and thought, it seems plausible, at the very least, that only advanced species (such as humans) are conscious.
What these advanced species, for the most part humans and perhaps some intelligent animals, have in common is that they do more than the three basic tasks of survival: eating, mating, and pooping. In other words, they’re able to perform a wider range of tasks, which requires a more general intelligence.
We can thus regard some form of general intelligence as a probable prerequisite for consciousness. On the other hand, there are already machines today that can help with all kinds of tasks, and this doesn’t make them conscious.
In other words, we need some form of general intelligence before we can create consciousness. That is, if creating consciousness is possible at all, and if we’d ever be able to prove that we had. But those are questions that could fill an entire philosophy class.
At the same time, AI systems like PaLM that show some general intelligence aren’t necessarily conscious. Google’s engineers maintain that there is zero indication that PaLM might be conscious. They also say, however, that this will be thoroughly studied and tested before a product based on PaLM ever hits the market.
It’s not impossible that there is some threshold, say 100 trillion parameters (a totally arbitrary number), beyond which AI systems become indistinguishable from conscious humans. For the moment, however, the bigger danger is that humans might treat non-conscious AI systems as if they were conscious. That would have incalculably profound consequences for how we view and interact with the world, and how we shape society and its rules.
Ethically, machines and consciousness pose difficult questions. But for now, humans are advised to think “I’m anthropomorphizing” rather than screech “It’s alive!” whenever they find themselves stunned by the behavior of an AI system.