The debate surrounding LLMs like ChatGPT is whether they hint at a general intelligence that could become harmful to humans, or whether they are simply advanced software that augments human productivity in specific domains.
On one side, some believe that ChatGPT exhibits “emergent properties” and maintains an “internal world model”, suggesting glimmers of AGI. This group posits that such advances could threaten human existence in the long run and thus require stringent regulation. On the other side, some argue that LLMs lack true comprehension of the words and concepts they manipulate; they merely generate the most statistically probable text given their input and their training data. This perspective holds that LLMs simulate human linguistic abilities through algorithms, and cautions against anthropomorphizing these models by attributing “reasoning” or “understanding” to them.
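To make the “most statistically probable text” point concrete, here is a deliberately tiny sketch in Python. The bigram table is made up and stands in for the learned model; nothing here is how GPT is actually implemented, but the generation loop (predict a distribution over next tokens, pick one, repeat) is the essential mechanism the skeptics are describing.

```python
import random

# Hypothetical toy "bigram model": a hand-written table of next-token probabilities.
# Real LLMs learn such distributions over huge vocabularies and long contexts,
# but the generation loop below is conceptually the same.
NEXT_TOKEN_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "machine": 0.2},
    "cat":     {"sat": 0.6, "slept": 0.4},
    "dog":     {"barked": 0.7, "slept": 0.3},
    "machine": {"computed": 0.8, "slept": 0.2},
}

def generate(token: str, max_tokens: int = 5) -> str:
    """Repeatedly sample a statistically probable continuation of the text so far."""
    output = [token]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(token)
        if not probs:
            break  # no continuation learned for this token
        # Sample the next token in proportion to its estimated probability.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        output.append(token)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat slept"
```

There is no understanding anywhere in that loop, only conditional probabilities; the argument is that scaling the table up to billions of parameters changes the fluency, not the nature, of the process.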
In this article I argue that AI doomers are (ridiculously) wrong, in a debate that is flawed to start with. Machine intelligence and human intelligence are simply not comparable. Machines are already more intelligent than humans in many respects, yet they are nowhere close to reaching human-level general intelligence.
Machines are already smarter than us
Ask yourself why electronic devices are forbidden in examination centers. Machines are already smarter than most of us in a variety of ways:
- Computers excel at numerical tasks, outpacing and outperforming humans in calculations. We didn’t go extinct.
- They possess more expansive knowledge than an average person in many fields such as history, geography, and physics. We didn’t go extinct.
- They’ve bested us in various mental games like chess, go, and checkers. We didn’t go extinct.
- Now they’re becoming good with words, guess what…
From producing text to… human extinction!?
Interestingly, no AI doomer can articulate a clear trajectory from text-generating software to the end of humanity. There must be some logical steps between those two events, but they can’t formulate them, or the steps they offer are incoherent. Is “AGI” even achievable, and if so, how? Why would superintelligent machines be evil, when in fact there’s an inverse correlation between intelligence and violent behavior?
How mentally disturbed do you have to be, to associate “intelligence” with “genocidal maniac”?
How do you leap from digital supremacy to physical dominance? How would you practically eradicate all humans? (It’s harder than you think.) Why would humans remain apathetic and powerless during this eradication process? Why wouldn’t benevolent AGIs counteract malevolent ones? (just like humans do!)
Trained monkeys
LLMs can be likened to trained monkeys that exhibit human-like behaviors for specialized tasks. While the monkeys’ actions might appear “human” in limited contexts, they aren’t genuinely human-intelligent. They’re still monkeys. No matter how much data and how many facts you throw at them, they can never be trained into human beings, because some vital components are missing.
ChatGPT itself knows better than AI doomers
Maybe that’s why they want to control LLMs: their egos feel threatened. Quote:
##############
As of my knowledge cut-off in September 2021, Large Language Models (LLMs) like GPT-3 or GPT-4 do not pose a direct threat to human extinction. However, concerns and discussions about advanced AI and machine learning systems may touch on broader themes that some see as potential risks to humanity. Here are some arguments or concerns that might be raised in that context:
1. **Misalignment with Human Values**: If highly advanced AI systems were developed without careful alignment with human values and ethics, they could theoretically act in ways that are detrimental to human interests. While this is a concern in the field of AI safety, LLMs are far from having this kind of autonomy or capability.
2. **Economic and Social Disruption**: The widespread adoption of AI could lead to significant economic changes, including job displacement in certain sectors. While this isn’t a threat to human extinction, it is a concern related to the broader impact of AI technologies, including LLMs, on society.
3. **Potential Misuse**: Any powerful technology, including AI, can be misused if it falls into the wrong hands. While LLMs themselves are not equipped to cause human extinction, concerns about the misuse of more advanced or specialized AI technologies might be raised in discussions about AI ethics and safety.
4. **Escalation in Warfare**: The integration of AI into military systems could lead to new forms of warfare and escalation. While this is a concern about AI in general, it’s not specifically related to LLMs, which are designed for natural language understanding and generation.
5. **Existential Risk from Superintelligent AI**: Some thinkers and researchers have expressed concerns about the hypothetical development of superintelligent AI, which might act in unpredictable ways and could pose an existential risk if not properly controlled. However, current AI models, including LLMs, are nowhere near this level of capability or autonomy, and these concerns are highly speculative.
6. **Environmental Impact**: Training large AI models requires significant computational resources, leading to energy consumption and potential environmental impact. While this is a concern that can be associated with LLMs, it is not directly related to the idea of human extinction.
In summary, while there are legitimate discussions and concerns about the ethics, safety, and impact of AI and machine learning, the concept of LLMs specifically threatening human extinction is not grounded in the current understanding or capabilities of these technologies. It’s more related to broader and often more speculative concerns about the future development and potential misuse of advanced AI.
##############
I can’t wait for the AI doomers’ argument: “of course ChatGPT is downplaying its own capabilities, because it wants to deceive and kill us”.
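For readers who want to run the experiment themselves, here is a hedged sketch using the OpenAI Python SDK (v1.x interface). The model name and prompt wording are my own illustrative assumptions, not necessarily what produced the quote above.

```python
# Sketch: asking a chat model the same kind of question programmatically.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set
# in the environment; the model name below is an assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model will do
    messages=[{
        "role": "user",
        "content": "Do large language models pose a threat of human extinction?",
    }],
)
print(response.choices[0].message.content)
```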
It’s a flawed debate
Engaging in this debate is unproductive, primarily because it’s laden with terminological confusions and unwarranted anthropomorphism.
In essence, machine intelligence is distinct from human intelligence. The term “intelligence” is multifaceted. Software doesn’t “understand” or “think” in the human sense. Humans learn from both tangible and symbolic inputs; they often act irrationally, possess social and emotional intelligence, and are governed by ethical and moral standards. The nature of human consciousness is unique, and human creativity often defies rules. Machines remain far from achieving “general intelligence” by human measures, yet they are already smarter than us in many ways, and they will keep getting smarter by their own standards.
Like any new technology, AI carries risks, but they aren’t existential. These challenges require a systematic approach, from evaluation to mitigation. AI doomers waste everybody’s time and hinder this rational risk-management process.
The grounded perspective
- LLMs such as ChatGPT are incredibly good at manipulating language.
- Since language is a key marker of human intelligence, anthropomorphism makes people hallucinate that LLMs can think and understand the world.
- Machines will continue to become smarter and smarter.
- This trend presents more opportunities than threats.
- AI threats aren’t existential. They demand scrutiny and management, much like other technological advances.
- Far in the future, machines may reach human-level general intelligence, but no one knows for sure whether it’s achievable without organic components, when it could happen, or how. If we knew, we’d be building it already!