Today, let’s take a deeper look at a special friend who has come close to us—artificial intelligence (AI). AI now seems like an all-around genius who can do anything. It turns what we once only imagined into reality and solves complex problems effortlessly. But is it really okay to entrust our future to this smart friend?
This question is like handing a magic wand to a young boy. The boy could use the wand to heal the sick or carry heavy loads to help people. But he might also poke a calm beehive in play or accidentally tear a friend’s clothes. What matters is not just the ability to ‘use’ the wand, but the wisdom and responsibility to know ‘how to use it’.
We have been breathlessly cheering for what AI “can do,” but now we must pause and seriously ask what AI “should do.”
Today, through two vivid stories, let’s explore how this question connects to our lives.
First Story: The Betrayal of ‘Alpha Interviewer,’ the AI Dreaming of the Perfect Talent
Hannah, head of HR at the innovative IT company FutureTech, was filled with high hopes. The day had come to launch the ambitious AI recruitment solution she had introduced: the ‘Alpha Interviewer.’ “Now, the fairest hiring process based solely on data, free from human bias or emotion, will begin!” she confidently announced before the executives.
Alpha Interviewer’s learning ability was truly remarkable. It absorbed resumes, cover letters, performance reviews, and even university club activities of the top 500 “ace” employees who had delivered the best results at FutureTech over the past decade. Based on this data, Alpha Interviewer created a ‘success DNA’ model and analyzed thousands of applicants’ documents within hours to select the final interview candidates.
The results were astonishing. The candidates recommended by Alpha Interviewer demonstrated excellent skills in interviews and quickly adapted to the company after joining. Everyone praised Hannah’s decision, marveling at AI’s efficiency.
But about a year later, Hannah noticed the company atmosphere subtly changing. Most new employees shared similar backgrounds, and a distinct culture formed among them, creating a subtle divide with existing staff. Curious, Hannah reviewed the data again and could hardly believe her eyes. 85% of new hires through Alpha Interviewer were men in their 20s from four-year universities in the metropolitan area, with an unusually high number from a specific sports club.
Alpha Interviewer had no ill intent. It had simply followed past data too faithfully. Among FutureTech’s past “ace” employees, men were the overwhelming majority, and founding members from certain sports clubs had helped one another into key positions, a history all their own. AI had learned all of this as the ‘formula for success.’ Competent female applicants, hidden talents from regional universities, and creative people with different experiences were quietly filtered out by AI’s biased criteria.
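The mechanism Hannah stumbled on can be sketched in a few lines: a model scored only on resemblance to past successes will reproduce whatever imbalance that history contains. The profiles and numbers below are hypothetical, not FutureTech’s actual system; this is a minimal sketch of the pattern, not a real hiring algorithm.

```python
from collections import Counter

# Hypothetical history: 8 of 10 past "aces" share one profile.
past_aces = [("male", "metro_univ")] * 8 + [("female", "regional_univ")] * 2

# The "success DNA": how often each profile appears among past aces.
profile_freq = Counter(past_aces)

def score(applicant):
    # An applicant is scored purely by resemblance to past aces --
    # no intent to discriminate, just faithful pattern-matching.
    return profile_freq[applicant] / len(past_aces)

applicants = [("male", "metro_univ"),
              ("female", "regional_univ"),
              ("female", "metro_univ")]
for a in applicants:
    print(a, score(a))
# A profile absent from past data scores 0.0 and is filtered out,
# no matter how competent the individual applicant is.
```

The point of the sketch is that no line of this code mentions gender or region as a criterion; the bias lives entirely in the training data the model “faithfully” follows.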
Hannah felt as if she had been struck over the head. The technology she had believed to be the fairest had in fact become a ‘megaphone’ amplifying humanity’s deepest biases. AI certainly has the ability (can) to analyze data, but can we say this AI did the right thing (should)?
Second Story: The Two Faces of the City’s Hero, the ‘Guardian Angel Drone’
“Please… help me find my Minjun!”
Sanghoon, a father who lost sight of his 6-year-old son Minjun for a moment at an amusement park, was plunged into despair, as if he had lost his whole world. Just then, a police officer approached and reassured him: “Sir, just give me a photo of your child and tell me the color of the clothes he’s wearing today. The ‘Guardian Angel Drone’ will start the search immediately.”
The ‘Guardian Angel Drone’ was the new hope for urban security. Hundreds of drones, linked in real time with the city’s CCTV systems, used the submitted photo and description to locate missing people. Within ten minutes, Sanghoon’s phone rang: “Your son has been found in front of the cotton candy stand. Please rest assured!”
Sanghoon hugged Minjun, tears streaming down his face. He sincerely thanked the scientists who created the drone and firmly believed this technology should be used more widely. It was a warm, perfect moment where technology saved a family.
Months later, Sanghoon happened to watch an investigative report. The screen showed the ‘Guardian Angel Drone’ he had seen as a hero. But this time, the drone was not searching for a lost child. It was identifying faces of people attending a peaceful demonstration by a certain group, recording their movements one by one. The report even revealed that the technology was secretly used in a large shopping mall to analyze customer movement patterns and induce impulse purchases.
Sanghoon was confused. The very technology that had helped find his child could become a shackle of surveillance for someone else. Should we really accept the risk that every citizen might be monitored (should not) for the convenience of finding lost children (can)? How do we control the dark possibilities hidden behind the good intentions of technology?
The Real Question Begins Now
The stories of the ‘Alpha Interviewer’s betrayal’ and the ‘Guardian Angel Drone’s two faces’ are not mere imagination. They are realities already unfolding around the world, or soon to confront us. The AI we trusted to be efficient may quietly raise walls of discrimination, and the technology we believed would protect our safety may become eyes of surveillance that restrict our freedom. The thought is chilling.
Technology itself is neither good nor evil; it is a value-neutral ‘tool.’ But AI is unlike any tool humanity has created before. It learns on its own, predicts, and even makes decisions beyond human sight. Whether this powerful tool becomes a ‘weapon’ or a ‘gift’ for humanity depends entirely on the hands of us humans who use it.
We are the first generation building a vast new city called the ‘AI era.’ If we focus only on how high and fast we can build with dazzling technology (can), we may forget the people who will live inside. Now, we must consider the ‘design philosophy (should)’ of this city together.
First, the creators of technology must become the city’s ‘ethical architects.’ Beyond just coding and building algorithms, they must anticipate the social impact of their creations and embed ‘ethical safeguards’ to prevent discrimination and bias. Just as earthquake resistance is a basic standard in construction, fairness and transparency must be the default when developing AI.
Second, governments and society must become the city’s ‘wise urban planners.’ They must establish fair rules to prevent technology from being monopolized by specific companies or powers. Clear legal frameworks are needed to ensure ‘Guardian Angel Drones’ are used only for finding children and to hold companies accountable when AI like ‘Alpha Interviewer’ makes discriminatory decisions. Just as parks and plazas provide rest for all, a social foundation must be built so everyone can enjoy the benefits of technology.
And most importantly, we citizens must become the city’s ‘awake owners.’ We are not passive consumers of AI services. We must question how the apps we use collect and utilize our data. When an AI service feels unfair, we must speak up confidently and demand a better direction. Only when its owners stay vigilant can the city grow healthy for everyone.
AI can be a lighthouse illuminating humanity’s future, or a flame that burns everything down. At this crossroads, the choice ultimately lies not with the technology itself but with us. Before asking AI, “What can you do?”, we must ask ourselves the more fundamental question: “What kind of world do we want to create through AI?”
How we shape this tool called AI to carve out a future for all depends on the thoughts and choices of each one of us.