The Beginning of the Story: A Small Island of Knowledge
Long ago, in a village, rumors began to spread about an unknown land beyond the sea. Some explorers set out in small boats and discovered a tiny island near the coastline. They confidently proclaimed that they had seen all there was to see of the ‘New Continent.’ “Beyond there is nothing but pebbles and sand!”
The villagers believed them. They stopped dreaming of great voyages to the unknown land, and the possibility faded from memory. The fragmentary knowledge of the ‘small island’ they had discovered effectively closed the door to exploring the vast continent.
Today, our attempts to predict the future of artificial intelligence (AI) might not be much different. We have only just arrived at the shores of the vast continent called AI, yet we may already be deluding ourselves into thinking we know everything. And where might this illusion lead us?
The Trap of Confidence: Why Do We Get It Wrong?
In psychology, there is a well-known cognitive bias called the ‘Dunning-Kruger effect’: people who know little about a subject tend to overestimate their abilities. It’s like a person standing at the foot of a mountain mistakenly thinking the summit is just around the corner.
A similar phenomenon is happening in the world of AI.
- Beginners’ Certainty: People who have just started learning about AI or are fascinated by isolated success stories often make overly optimistic or pessimistic predictions about AI’s future. Extreme forecasts like “AI will solve all problems!” or “AI will destroy humanity!” are common.
- Experts’ Caution: On the other hand, true experts who have studied AI for decades tend to be more cautious in their predictions. They understand how complex AI is and how many unforeseen variables exist. They are like experienced climbers humbled by the mountain’s grandeur and dangers.
In 2015, many AI experts predicted that AI would not beat a human Go champion until at least 2027. Yet just one year later, in 2016, AlphaGo defeated Lee Sedol 9-dan. This is a prime example of how easily our predictions can miss the mark.
The Dangerous Future Caused by Misguided Predictions
You might think, “So what if predictions are a bit off?” But premature predictions and blind faith can lead to far more dangerous consequences than expected. Here is a story that is fictional but all too plausible.
Story: The Betrayal of the ‘Perfect Hiring’ AI
In 2024, the innovative IT company ‘FutureTech’ ambitiously introduced an AI hiring system called ‘Neuron-Match,’ proclaiming that it had ‘completely eliminated human bias.’ The system was designed to select the best talent by learning from 20 years of successful employee data. Everyone cheered, believing a new era of fair hiring had begun.
That year, Minjun, a developer candidate with top-notch coding skills and impressive awards, applied to FutureTech. Despite his qualifications, he repeatedly failed the document screening stage. The reason was unknown to him.
The secret lay in the data ‘Neuron-Match’ had learned from. Over the past 20 years, the IT industry had been male-dominated, and most of the successful-employee data reflected that. The AI silently learned that ‘being male’ was a key indicator of a successful employee. The algorithm even gave extra points for male-centric hobbies like baseball club activities. Minjun’s resume line ‘President of the knitting club’ might well have counted against him.
The premature prediction of ‘bias-free perfect hiring’ ultimately resulted in a dangerous reality of ‘automatically reproducing past discrimination.’ FutureTech missed out on top talent, and AI became a tool that reinforced societal biases.
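The mechanism in the story can be made concrete with a small sketch. The code below is purely illustrative: it generates hypothetical “historical hiring” data in which past decisions favored one group regardless of skill, then fits a tiny logistic regression from scratch. The model ends up assigning a clearly positive weight to the group feature even though that feature says nothing about ability, which is exactly how a system like the fictional ‘Neuron-Match’ could “learn” discrimination.

```python
import math
import random

random.seed(0)

# Hypothetical "20 years of hiring records": the hired label was
# historically correlated with a gender proxy, not with skill alone.
data = []
for _ in range(2000):
    skill = random.random()            # true ability, 0..1
    male = random.random() < 0.8       # historically male-dominated pool
    # Past decisions set a lower skill bar for men:
    hired = 1 if (skill > 0.6 or (male and skill > 0.4)) else 0
    data.append((skill, 1.0 if male else 0.0, hired))

# Tiny logistic regression trained by full-batch gradient descent.
w_skill, w_male, b = 0.0, 0.0, 0.0
lr, n = 0.5, len(data)
for _ in range(300):
    g_s = g_m = g_b = 0.0
    for skill, male, y in data:
        p = 1 / (1 + math.exp(-(w_skill * skill + w_male * male + b)))
        err = p - y
        g_s += err * skill
        g_m += err * male
        g_b += err
    w_skill -= lr * g_s / n
    w_male -= lr * g_m / n
    b -= lr * g_b / n

# The model gives real weight to the gender proxy, even though it
# carries no information about ability -- the bias was in the labels.
print(f"weight on skill: {w_skill:.2f}, weight on gender proxy: {w_male:.2f}")
```

Note that no one “programmed” the bias: it emerges entirely from fitting historical labels, which is why auditing training data matters as much as auditing code.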
1. Investments Going Down the Wrong Path
As Minjun’s story shows, rosy predictions pour enormous resources and talent into unrealistic goals. Many past technology forecasts promised ‘imminent commercialization,’ only to collapse into the ‘AI Winters’ when those promises went unmet. Frequently cited estimates that as many as 80% of AI adoption projects fail reveal how unprepared we are to leap into the future. A wrong map can lead us off a cliff instead of to our destination.
2. AI Growing on Our Own Biases
The data we use to teach AI the world contains our biases intact. AI predicting the future based on past data can amplify these biases. As with ‘Neuron-Match,’ AI trained on discriminatory data about gender or race can make unfair decisions in hiring or loan approvals. This is not just a technical error but a dangerous spark that intensifies social conflicts.
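One simple way to catch this kind of unfairness is to compare a model’s selection rates across groups, a check often called the demographic parity gap. The sketch below uses invented decision records (both the group names and the numbers are hypothetical) just to show how small the audit itself can be:

```python
# Hypothetical decision log: (group, was the candidate selected?)
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", True), ("female", False), ("female", False),
]

def selection_rate(group):
    """Fraction of candidates in `group` that the model selected."""
    outcomes = [sel for g, sel in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: difference in selection rates between groups.
gap = selection_rate("male") - selection_rate("female")
print(f"male rate: {selection_rate('male'):.2f}, "
      f"female rate: {selection_rate('female'):.2f}, gap: {gap:.2f}")
```

A gap near zero does not prove a system is fair, but a large gap like the 0.50 in this toy log is a loud warning that the decisions deserve scrutiny before deployment.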
3. Illusions That Make Us Ignore Real Problems
Focusing on grand and sensational predictions like ‘superintelligence surpassing humans (Singularity)’ can cause us to overlook urgent issues we must address now. Fake news generated by AI, algorithm-driven manipulation of public opinion, massive unemployment, and environmental problems caused by enormous energy consumption are realities already before us. Before debating distant utopias or dystopias, shouldn’t we put out the fires at our feet first?
Holding a Compass, Not a Map
We may be the first generation exploring the unknown continent called AI. What we hold in our hands should not be a completed map but a compass pointing the way.
Instead of limiting our possibilities or stepping onto dangerous paths with hasty predictions, we must proceed carefully, step by step: constantly questioning, thinking critically, and, above all, humbly recognizing ‘what we do not know.’
Knowing becomes dangerous precisely when we start overestimating our knowledge. The future of AI is not predetermined. It is shaped by our choices and responsible exploration. To avoid getting lost in the fog, we must not blindly trust maps but hold onto the compass in our hands—ethical and philosophical reflections guiding us in the right direction.