Humans Reduced to Pixels, Death Beyond the Screen
The story begins in 2024, in front of a monitor displaying a refugee camp in Gaza. On the desk of a young Israeli soldier in his twenties, instead of a family photo, there is only a cold screen. His mission does not take him across dusty battlefields. Sitting in a cool control-room chair, he watches countless dots and lines on the screen: pixels labeled as ‘targets’ by artificial intelligence (AI).
An AI called ‘Lavender’ compiled a list of 39,000 individuals suspected of being Hamas operatives, while another AI named ‘The Gospel’ identified the buildings where they might be hiding. The soldier has roughly 20 seconds per target. He merely verifies the AI’s judgment ‘with human eyes’ and presses the approval button to complete his mission. With a single click, an apartment that was once someone’s warm home vanishes like dust from the map. There are no explosions or screams here. Only the data throughput on the screen measures his performance.
This is no longer a story of the distant future. On the vast plains of Ukraine, AI drones hunt one another, while in the heart of Silicon Valley the world’s top talents develop faster and more precise ways to kill: algorithms that accelerate the ‘kill chain,’ the sequence of steps from detecting a target to destroying it. The very foundations of how humanity wages war are shaking.
The paradigm of war has evolved from spears and swords to gunpowder, then to nuclear weapons, and now it seeks to push the last variable, the ‘human,’ out of the system entirely. We stand at the threshold of an era of ‘algorithm wars’ in which AI becomes the main actor in warfare.
This new war is not merely about stronger weapons. It shakes the speed, scale, and most importantly, the ‘ethics’ of war. When human fingers leave the trigger and are replaced by the rapid calculations of silicon chips, what do we gain and what do we lose? Is it acceptable to entrust the weight of countless lives hidden behind the cold term ‘collateral damage’ to AI’s probability calculations?
This article is a journey to find answers to these chilling questions. We must face squarely how the ‘algorithm of death’ we designed is knocking on our door and consider what to do before we open it wide. The future of humanity depends on what moral shackles we place on this terrifying technology.
Chapter 1: Code, the New God of the Battlefield
They say the history of war is the history of technology—farther, more precise, more destructive. But the change we face now is different: the ‘decision-maker’ is slipping from human hands.
1.1 Killing Machines Beyond the ‘Human-in-the-Loop’
To understand AI warfare, we first need the concept of the ‘loop.’ Depending on how much humans intervene in a weapon system’s decision to fire, there are three stages (a minimal code sketch of the three modes follows this list):
- Human-in-the-Loop
- The most traditional method. Humans directly select targets and decide whether to fire. AI acts as a helpful assistant, aiding aiming or providing information. Imagine a fighter pilot aiming at an enemy plane with AI assistance and pressing the missile button themselves.
- Human-on-the-Loop
- AI finds targets and proposes attacks, but humans supervise and can veto. Israel’s ‘Iron Dome’ defense system is a good example. The system automatically detects incoming rockets and prepares interception, but the final launch approval is given by a human in the control room. However, as missiles flood in, the human role increasingly becomes a formal supervisor.
- Human-out-of-the-Loop
- The most controversial final stage. AI independently identifies and destroys targets without human intervention. These are ‘Lethal Autonomous Weapons Systems (LAWS),’ commonly called ‘killer robots.’ Once activated, they decide and eliminate ‘enemies’ on their own without further human commands.
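To make the distinction concrete, here is a minimal, purely illustrative sketch of where the human sits in each mode. It is hypothetical logic written in Python, not the interface of any real weapon system; every name in it is invented:

```python
from enum import Enum

class AutonomyMode(Enum):
    IN_THE_LOOP = 1    # a human selects the target and pulls the trigger
    ON_THE_LOOP = 2    # AI proposes; a human supervises and may veto
    OUT_OF_LOOP = 3    # AI engages on its own once activated (LAWS)

def decide_engagement(mode: AutonomyMode, ai_proposes: bool,
                      human_initiates: bool, human_vetoes: bool) -> bool:
    """Returns True if the weapon fires. Purely illustrative logic."""
    if mode is AutonomyMode.IN_THE_LOOP:
        # Nothing fires unless a human chose this target themselves.
        return human_initiates
    if mode is AutonomyMode.ON_THE_LOOP:
        # The machine proposes; the human's silence counts as consent,
        # and the veto window may be only seconds long.
        return ai_proposes and not human_vetoes
    # OUT_OF_LOOP: no human signal is consulted at all.
    return ai_proposes
```

The uncomfortable detail is the middle mode: when the veto window shrinks to seconds, ‘on the loop’ quietly collapses into ‘out of the loop.’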
The problem is that the speed of war is becoming too fast for humans to intervene at all. Imagine a ‘drone swarm’ attack: hundreds or thousands of drones converging like bees. Can a human judge and stop each one? Defenders inevitably must respond with AI swarms of their own. The result is a ‘war of machines’ fought at speeds beyond human control, with humans reduced to bystanders who press the start button and watch the results.
1.2 Ukraine: A Massive AI War Testing Ground
The war in Ukraine, which began in 2022, has become a massive testing ground that compresses the future of warfare into the present; it is often called the ‘first AI war.’
In particular, the Ukrainian military is changing the course of the war with ‘Gotham,’ an AI platform from the American data analytics company Palantir. Palantir calls itself a ‘technology partner for Western democracy’ and has engaged actively in the war. Its technology weaves scattered information into a massive ‘Kill Web.’
Reconnaissance drone footage, commercial satellite images, intercepted communications, even videos of Russian troops posted on TikTok: all of it is analyzed in real time by AI. Information that used to be analyzed separately by different units is now combined on a single platform to predict Russian troop locations, strength, and next moves. Commanders survey the battlefield like gods and receive recommendations for the optimal points and timing of attack. Thanks to this, Ukraine has been able to resist the overwhelming Russian forces effectively.
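Palantir’s actual interfaces are not public, so the following is a generic, hypothetical sketch of the core idea behind this kind of data fusion: observations from independent sources that agree in space and time reinforce one another. Every name, threshold, and formula here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # e.g. "drone", "satellite", "sigint", "social"
    location: tuple    # (lat, lon)
    timestamp: float   # seconds since some epoch
    confidence: float  # 0.0 .. 1.0, per-source belief

def _close(a, b, radius_km: float) -> bool:
    # Crude flat-earth distance check; good enough for a sketch.
    dlat_km = (a[0] - b[0]) * 111.0
    dlon_km = (a[1] - b[1]) * 111.0
    return (dlat_km**2 + dlon_km**2) ** 0.5 <= radius_km

def fuse(observations, radius_km=1.0, window_s=600):
    """Cluster observations that are close in space and time, then rank
    clusters: agreement across independent sources raises the score."""
    clusters = []
    for obs in sorted(observations, key=lambda o: o.timestamp):
        for cluster in clusters:
            last = cluster[-1]
            if (obs.timestamp - last.timestamp <= window_s
                    and _close(obs.location, last.location, radius_km)):
                cluster.append(obs)
                break
        else:
            clusters.append([obs])

    def score(cluster):
        sources = {o.source for o in cluster}
        mean_conf = sum(o.confidence for o in cluster) / len(cluster)
        return len(sources) * mean_conf

    return sorted(clusters, key=score, reverse=True)
```

The sketch also shows why the question in the next paragraph is so hard: the output is just a ranked list, and nothing in the scoring function knows what a hospital is.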
But behind this amazing technology lies uncomfortable questions. If a civilian hospital was located at a bombardment site recommended by Palantir’s algorithm, who bears responsibility? If an AI drone attacked a school bus mistaken for a tank, whose fault is it? A program error, or the humans who decided to use that program? Behind Ukraine’s desperate resistance, such terrifying questions shadow the conflict.
Chapter 2: The Gospel of ‘Lavender,’ The Human Tragedy Reduced to Numbers
If the potential of AI warfare was tested in Ukraine, its horrific reality was proven in Gaza. The Israel Defense Forces (IDF) crossed a dangerous line by turning human lives into statistical probabilities under the name of ‘efficient war.’
2.1 Birth of the Mass Assassination Factory
Investigative reporting by the Israeli outlets +972 Magazine and Local Call revealed the existence of two AI systems, ‘Lavender’ and ‘The Gospel.’ The testimonies of former intelligence officers were shocking.
- Lavender
- This AI’s sole mission is to find ‘everyone suspected of being a Hamas operative.’ Lavender was trained on vast data covering Gaza’s 2.3 million residents, including phone records, social media activity, and membership in certain group chats, and assigns each person a ‘Hamas connection likelihood’ score. Anyone exceeding a certain score automatically goes on the ‘kill list.’ The IDF reportedly approved the system knowing it had an error rate of about 10%, meaning roughly 1 in 10 of those marked could be innocent civilians. A list of 39,000 could therefore include around 3,900 innocent people (see the arithmetic sketch after this list).
- The Gospel
- While Lavender decides ‘who’ to kill, The Gospel decides ‘where’ to destroy. This AI generates hundreds of building and facility targets almost instantly.
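The scale implied by that error rate takes two lines to compute. The figures below are the ones reported above; everything else is a trivial back-of-the-envelope check:

```python
# Back-of-the-envelope: what a 10% error rate means at list scale.
flagged = 39_000    # individuals on the machine-generated list (reported)
error_rate = 0.10   # misidentification rate reportedly known to the IDF

expected_misidentified = flagged * error_rate
print(f"{expected_misidentified:,.0f} people")  # -> 3,900 people
```

A single threshold over a statistical score is all it takes to turn a probability into a name on a list.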
The combination of these two systems completely changed the IDF’s operational methods. Where past practice at least nominally sought to minimize civilian casualties, even low-level operatives were now targeted by deliberately striking their homes, where their families were staying. There was even a chilling system called ‘Where’s Daddy?’ that tracked the moment a target entered a house and alerted the airstrike units.
2.2 Statistical Collateral Damage and the Paralysis of Humanity
The introduction of these AI systems upended the standards for acceptable civilian casualties. Previously, an operation might be canceled if even one or two civilian deaths were expected in catching a terrorist. Now, to eliminate a single low-level operative, up to 15–20 civilian casualties were pre-approved, and for high-ranking commanders, more than 100 civilian casualties were accepted.
This means the tragedy of war shifted from an ‘ethical dilemma’ to a ‘statistical calculation.’ A former officer confessed:
“Everything was statistical and neat. I pressed the button, and the machine told me the mission was complete. I killed many people, but because I didn’t see the blood, I felt less guilty.”
This confession shows how algorithmic war destroys the human mind. Erasing pixels on a screen is a psychologically different experience from killing real people. The ‘algorithm’ screen inserted between the act of killing and its result blurs responsibility and guilt, paralyzing humanity. The cases of Lavender and The Gospel demonstrate that AI is a terrifying ‘game changer’ rewriting the rules of war.
Chapter 3: Silicon Valley’s Dark Entry, Hands That Raise the Monster
The monster called algorithm war cannot be raised by the military alone. Behind it are Silicon Valley’s big tech companies that once vowed to make the world a better place. Their idealistic slogans often lose their way before the temptation of massive defense budgets.
3.1 The End of ‘Don’t Be Evil’
In 2018, it was revealed that Google had joined ‘Project Maven,’ a U.S. Department of Defense project applying AI to drone video analysis, and the company faced fierce internal opposition. Thousands of employees signed a letter insisting that Google ‘should not be in the business of war,’ and the company ultimately declined to renew the contract.
But the void Google left was quickly filled by companies like Palantir. Founded with early backing from In-Q-Tel, the CIA’s venture capital arm, and built to work with defense and intelligence agencies, Palantir had long been Silicon Valley’s ‘black sheep.’ CEO Alex Karp criticized Silicon Valley’s culture of shunning defense work, saying his company’s mission was to provide the best technology to protect Western democracy. For him, technology is a weapon for defending values. (There are reports that Google, too, is now quietly reopening the door to defense cooperation.)
Recently, even OpenAI, famous for ChatGPT, quietly removed the blanket ban on ‘military and warfare’ use from its usage policies, leaving open the possibility of cooperation with the Department of Defense. A broad current is forming in which the whole industry follows the path Palantir pioneered.
3.2 AI as a Pawn in the Tech Hegemony Competition
The fundamental reason big tech is so deeply involved in defense is the U.S.-China technology hegemony competition. China mobilizes its own big tech companies for military modernization under state direction, a policy known as ‘military-civil fusion.’ Against this backdrop, the U.S. Department of Defense pressures companies with a blunt argument: “If the best companies don’t participate, we will have to fight China’s AI army with technically inferior weapons.”
Big tech faces a dilemma: participate in defense projects and reap huge profits while being accused of complicity in a ‘banality of evil,’ or refuse and risk being branded unpatriotic and falling behind in the competition.
The problem is that in this massive flow, the best technologies meant to enrich human life are turning into the sharpest spear to destroy humanity.
Chapter 4: Ghosts in the Machine, The Evaporation of Human Ethics
The most dangerous change brought by algorithm wars is not the form of weapons but the transformation inside the ‘human’ who conducts war. As machines take center stage in judgment, concepts of responsibility and ethics scatter like mist.
4.1 The Black Hole of the ‘Accountability Gap’
Imagine a lethal autonomous drone misidentifies a wedding hall as a terrorist gathering and attacks it, killing many civilians. Who is responsible for this horrific tragedy?
- Programmer? They would say they only wrote code according to given conditions.
- Manufacturer? The company would claim they made the product to military specifications and bear no operational responsibility.
- Commander? They would say they deployed the drone but could not predict its specific actions.
- The drone (AI)? A machine cannot be held legally accountable.
In the end, no one fully takes responsibility, creating an ‘Accountability Gap.’ To prosecute war crimes, ‘intent’ must be proven, but AI’s decision-making process is often a ‘black box’ even developers cannot fully explain. This gap gives soldiers a dangerous excuse: ‘I just followed the system’s orders,’ paralyzing human moral reflection.
4.2 ‘Emotionless Killing’ and the Paradox of PTSD
Remote operators conducting war from safe bases through screens experience killing as unreal, like a video game. This ‘psychological distancing’ can reduce guilt in the short term.
Paradoxically, they also suffer severe post-traumatic stress disorder (PTSD). The gap between killing people on screen by day and dining with family as if nothing happened by night destroys their mental health. When AI recommends targets and ‘proves’ the legitimacy of attacks with data, humans delegate moral judgment to machines and gradually become mere parts of the system.
4.3 Future Dilemma: Is Ethical AI Possible?
As a solution, there are attempts to create ‘ethical AI’ that learns laws of war and rules of engagement to minimize civilian casualties.
But how can profoundly human value judgments, such as weighing ‘military advantage’ against ‘civilian harm,’ be coded into an algorithm? It may be a dangerous arrogance to try to solve the millennia-old ‘trolley problem’ with a few lines of code. We must ask the fundamental question: do we want ‘more ethical ways to kill,’ or ‘ways not to kill at all’?
Chapter 5: Unleashed Flames, Toward Humanity’s Final War
The advancement of AI war technology is not just about stronger weapons but opening a ‘Pandora’s box’ that could drive humanity toward uncontrollable catastrophe.
5.1 Machine-Speed War and the Terror of ‘Flash War’
Future wars could escalate to full-scale conflict in minutes or seconds without human intervention.
For example, imagine a country’s AI early-warning system detects a heat source suspected to be an enemy missile launch. The information passes to an AI command system, which judges it a clear attack and automatically executes a retaliatory strike without human approval. But the initial heat source may have been a wildfire, or a meteor. The entire process unfolds within minutes, leaving leaders no time to assess the situation. Wars that proceed at speeds beyond human control are called ‘Flash Wars.’
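A toy simulation makes the failure mode visible. Everything here, from the threshold to the function names, is invented for illustration; the point is only that without a human gate, the misclassification and the ‘retaliation’ happen in the same machine cycle:

```python
def classify(heat_signature: float) -> str:
    """Stand-in early-warning classifier: a strong heat source reads as a
    launch, but a wildfire or a meteor can produce the same signature."""
    return "missile_launch" if heat_signature > 0.8 else "benign"

def command_chain(heat_signature: float, human_gate: bool) -> str:
    if classify(heat_signature) != "missile_launch":
        return "stand down"
    if human_gate:
        # A human gets minutes to ask the obvious question:
        # is this actually a launch, or a wildfire?
        return "escalate to human review"
    # No gate: detection and retaliation happen in the same machine cycle.
    return "retaliatory strike launched"

# A wildfire (heat 0.9) with no human in the chain:
print(command_chain(0.9, human_gate=False))  # -> "retaliatory strike launched"
```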
This fear drives nations into an ‘AI arms race.’ The logic “If we don’t do it, the enemy will” recalls the Cold War nuclear arms race, but AI is far cheaper and easier to proliferate than nuclear weapons, making it even more dangerous.
5.2 Spread of AI Technology and New Terrorism
AI technology is no longer the exclusive property of great powers. A terror group could now send dozens of small suicide drones equipped with facial recognition after specific individuals or groups in a city. The horrific future depicted in ‘Slaughterbots,’ the warning video produced by AI researchers, is edging from fiction toward reality.
Also, fake news and propaganda using deepfake technology can undermine a nation’s social trust and paralyze democracy. The battlefield now extends into each of our ‘brains’ and ‘minds.’
5.3 The Unleashed Monster, Lack of International Norms
Despite the clear dangers, international efforts to control AI weapons remain sluggish. The United Nations has debated the issue for years under the Convention on Certain Conventional Weapons (CCW), but opposition from the major developer states has blocked any binding agreement. In this regulatory vacuum, AI weapons technology advances at uncontrollable speed. The time we have left is short.
A Question Thrown to Machines, A Time for Humans to Answer
We stand at a critical crossroads in human history. The era of algorithm wars is approaching as a seemingly unavoidable future, carrying the destructive potential of dehumanized war and uncontrollable catastrophe.
Machines ask us: “How far will you allow me to go?”
This is not a technological issue but a philosophical question about human values and dignity. Will we succumb to the sweet temptation of efficiency and speed, handing over the heaviest ethical decision—killing—to cold machine logic?
The answer is our collective responsibility. Before it’s too late, scientists, ethicists, and citizens must come together to set a ‘clear red line’ on AI weapon development. A binding international treaty banning the development and use of lethal autonomous weapons beyond direct human control must be the first step.
AI is not a value-neutral tool. If we teach AI ‘efficient killing,’ it will become the most terrifying destructive tool in history. But if we teach AI the value of life and the importance of peace, it could become humanity’s greatest invention for prosperity.
The choice is in our hands. We must remember the dignity of the human beyond the pixels on the screen. Before the algorithm of death fully opens our door, we must firmly lock it in the name of humanity. The future of mankind depends on what answer we give this machine now.