By Joseph Mazur
Technology and infrastructure change the world unnoticeably every day, and quite distinctly every thirty years, so let us imagine together, if we can, how the world will be seen ninety years from today. Wars, locked to accelerating technological change, are morphing away from their conventional hundred-year-old strategies into tactics that could lessen brutality and death yet heighten worries about civilian safety. Whatever happens, future wars will be unrecognisable.
“Hard questions arise when we enter the realm of semi-autonomous and potentially autonomous robotic machines that are at the same time lethal… Moreover, regardless of how smart a robot becomes, it is not clear whether it would ever be ethical for a machine to kill a human.”
—Elinor Sloan, Robotics and Military Operations1
If you sleep for ninety years and wake in 2114, you will not see cars, at least not vehicles that resemble cars. Autonomous “levitators” will speed over “roads” that don’t look like the roads we know today. Your cell phone, if we still call it that, will be just one neurochip connected to the cochlear nerves and another to the tongue and jaw. You will whisper, “Taxi, please” whenever you wish to hail one of those levitating vehicles to take you where you wish to go. Addictions to social media feeds will be far worse than they now appear. Suppose the generations of the century are not vigilant. In that case, facts will be tethered to a noisy 24-hour-a-day world news cycle fed directly from feedback loops fashioned by competing media companies manipulating mixed commercial and political biases — that is, if there is still politics. Imagine a future when the brain is plugged into apps for short-term memory on demand. With embedded technology you will not need to know the time of day; the neurochips within you will know your daily schedule and warn you of each upcoming event at the appropriate hour, or possibly down to the minute, to give you a chance to get ready. More advanced still, Siri, Alexa, or likely some other digital assistant will be a genie in the mind giving you instant information: if, for instance, you ask about the newest weapons of war, it will not just say “Here is what I found on Wikipedia” but feed you something you are meant to believe is a truthful piece of information.
Of course, I’m being fanciful with my futuristic imagination. Like it or not, though, there will be change enough to make us feel as if we had entered a posthuman, unrecognisable world. The way I see it, there will be changes that we cannot anticipate, and they may be for better or worse. We, the indigenous occupiers of the planet, will not notice the slow shifts in human accomplishment; the failures and triumphs will blend as time passes with the acceptance of human desires, cravings, and ignorance. We cannot destroy the world; all we can do is destroy our existence, thanks to our brilliance in knowing how to break an atom in two. Wars will continue to be wars, but as time moves on, will they obey the common laws of war as they inevitably pass from human control to autonomous control that has no morals? For a short answer, we probe four morally intractable facets of change: the advancement of autonomous weapons, lasers, cyber-warfare, and military outsourcing.
But we are still in 2024, with one change we are not noticing: the unmanning of war
We cannot sleep now and wake in 2114, but we can anticipate a likely future in which wars are fought from a distance using digital warriors. The ultimate questions are, therefore, who benefits, who will do the fighting, and how lethal will wars be ninety years from now, when they will play out in digitally smart battlefield theatres in the air, at sea, on the ground, and in space? Here in 2024, it seems that, aside from a few notable scholars of military affairs, no political or military leaders are expressing concern about the momentous technological changes that are shifting warfare paradigms.
Are there concerns? Surely there are! Some autonomous warfare strategies will save lives, but have we properly probed the moral and social ramifications? Let us remember that wars, and how we fight them, affect science, technology, economics, politics, culture, welfare, the foundations of humanity itself, and who we are as a people. Perhaps, because future wars could be less lethal, some good can come from autonomous battling. But have we thought about the dangers that could come from new tools of warfare falling into the hands of criminal players, crime syndicates, militias, and terrorists? Have we thought about what could go wrong with weapons left with AI decision-making independence? Surely, the military establishment has reviewed these questions and answered them in favour of its natural biases, but shouldn’t some oversight come from more impartial control groups before funding commitments advance too far for eventual international law to restrain them?
Robotic Advancement
Not long ago, Isaac Asimov, H.G. Wells, and Jules Verne brought us the exotic science fiction of drones with a lucid grasp of their power and how they would transform warfare.2 Drones have been with us for a long time. The first unmanned military aircraft were deployed in WWI, when radio command signals from UK Royal Air Force ground operations piloted unmanned biplanes.3 After that war, small monoplanes were converted into drones launched by autopilot from warships. Remote-controlled experimental flying machines have been improving continuously since the early years of the last century. The Soviets had their partially automated TT-26 tanks in 1940, and the Germans had their Goliath, a remote-controlled demolition vehicle, in 1944. Neither was effective in WWII battles. The first mobile robot skilled in ground warfighting came in the 1960s, under the guidance of master plans and strategies for future wars.4 Advanced technology developed since the Iraq, Afghanistan, and Ukraine wars has significantly improved precision and the distant remote control of small and large aircraft. We are now in the 2020s with a new generation of war systems changing the experience, as several advanced countries improve remotely controlled planes, ships, and tanks so that soldiers can target their enemies from halfway around the world.
Fighting from a distance is not a new idea. As one myth goes, in the third-century BC war with Rome, the Syracusans took scientific advice from Archimedes and used bronze mirrors to concentrate the sun’s rays and set Roman warships ablaze. By the late Middle Ages and the end of the Hundred Years War, thanks to the chemistry that brought gunpowder and firearms to the battlefield, weaponry had advanced rapidly to the cannon, which could lob shells filled with gunpowder almost a mile. The 19th century brought the internal combustion engine, which gave us the tractor and the automobile, one of the game-changers of warfare, alongside the steam engine, electricity, and radio communications. The most underappreciated yet brilliant invention is the hydraulic pump, now so commonly used for muscle work by the cranes and backhoes that replace men digging trenches and hauling inhumanly heavy loads.
Advances in weaponry have not stopped and, by technological spiralling, never will. Every new weapon since the trench warfare of WWI has been designed to kill from a greater distance than was possible before. That war brought us reconnaissance biplanes, submarines, tanks, and the Zeppelin airships that carried and dropped bombs.5 WWII produced rockets, aircraft carriers, radar, and, should we mention, nuclear bombs? The Vietnam War gave us the gunner helicopter that could fly low and hover over purportedly suspicious things that moved on the ground. And now we have GPS for precision guidance, electronic sensors, minuscule surveillance cameras, air and sea drones robotically calibrated to hit an enemy far beyond any visible scope with pinpoint accuracy, and automation equipment that was unimaginable at the beginning of this century. So we must ask ourselves what the next advancements are, for both offence and defence, and whether they will bring fewer deaths in battle.
If we can guide drones remotely, we can do the same with tanks, planes, and robot infantry. The US Army has a fully operational BMP-1 tank equipped with remote-controlled cannons. It has iRobot PackBots, ground robots with cameras, sensors, dexterous arms, and tank treads that can climb stairs. And ever since the last phases of the Afghanistan War, US Air Force Predators, armed with 500-pound precision bombs, have been controlled by pilots living on army bases in the United States. As Gary Fabricius, a Predator squadron commander, said, “You see Americans killed in front of your eyes and then have to go to a PTA meeting. You are going to war for 12 hours, shooting weapons at targets, directing kills on enemy combatants, and then you get in the car, drive home, and within 20 minutes you are sitting at the dinner table talking to your kids about their homework.”30
By that account, it might seem that killer-drone pilots are far enough from the battle zone to be safe. Physically safe, yes. Yet even a Predator squadron soldier, experiencing killing in virtual separation from enemies, has combat PTSD levels equal to, and perhaps higher than, those of soldiers who battled from trenches on the front lines. That is partly because virtual pilots know that they are killing. When they sink a ship, sailors die. When they release a bomb, someone is likely killed. And sometimes — too many times — their drones kill the wrong people.
Today’s Predators are just overtures to what they will become. Prototypes always have upgrades. We don’t have to imagine what those upgrades will be; they are already beyond pre-manufactured modelling. The latest versions are not only used for surveillance; they are also ready to perform ground attacks, spot submarines far below the ocean surface, and defeat human-piloted fighter jets in air-to-air combat.6,7 The Turkish Bayraktar TB2 drones, armed with four air-to-ground missiles, were used in the Ukraine War. They can fly at 136mph at an altitude of 18,000 feet. The US MQ-9 Reaper drones (recently flown over Gaza searching for hostages in the Israel-Hamas war), manufactured by General Atomics, an American energy and defense corporation headquartered in San Diego, California, are the latest generation of unmanned aerial systems (UAS). They fly by satellite communications, carrying up to 3,000 pounds of weapons, including Hellfire missiles and 500-pound laser- or GPS-guided bombs. With dog-fighting capacity, they could knock out human-piloted MiGs.8 In case you are wondering whether drones could replace manned fighter planes, the X-45 is a fighter drone that costs an inconsequential $1.8 billion and could compete with an F-35 costing $40 billion.
Humans have many failings, but so do machines
Humans are driven by motivation and limited by fatigue, but they can also be goaded by anger and impaired by intoxication. Machines with satellite-sharp computer vision can process information to make quick decisions, but GPS blackout technology can jam their missions. Humans can learn, but so can machines, as both go through the innovation cycle to get better at what they do. Take the Switchblade 600, for instance, a reconnaissance drone-guided missile looking for a target. It is called an “extended range loitering missile,” a drone that hovers until it completes an accuracy check.9 So far, it needs a human to choose its targets remotely before using its AI to establish a clean shot and finish the job. The next stage in drone development for AeroVironment, Switchblade’s maker, estimated to come as soon as next year, will be to remove humans from decision-making. Another example is the fully automatic Turkish STM Kargu-2 drone. A UN report claims that it was used in the Second Libyan Civil War (2014–2020) to kill an unknown number of combatants.10 These autonomous systems “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”11
The US Air Force is experimenting with the XQ-58A Valkyrie, a pilotless aircraft run entirely by artificial intelligence that can carry missiles to enemy targets over three thousand miles from its base and return without any human intervention. It is intended to be part of a “fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle.” Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill.12 Any instrument of war should be tempered by deep concerns about how much autonomy to grant a lethal weapon. “It’s a very strange feeling,” test pilot Major Ross Elder said while flying an F-15 fighter alongside an XQ-58A. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.”13
Scary as autonomous weapons are, the new toy of war does not need autonomy to do its job, though that feature will come.
Following a test of DragonFire, a powerful long-range laser weapon that can bring down an advanced fighter jet by cutting through its wings, UK Defence Secretary Grant Shapps said, “This type of cutting-edge weaponry has the potential to revolutionize the battlefield by reducing the reliance on expensive ammunition.”14 DragonFires will be just one of many counteroffensive weapons of the future battlefield. Their accuracy is stunning; working at the speed of light, DragonFire can strike a £1 coin at half a kilometre. These non-explosive devices can work from anywhere without having to be reloaded, and cost about ten of those £1 coins per firing. By comparison, the US-made Patriot interceptor missile system costs four times the price of one DragonFire but is better at taking down fast-moving targets. Shimon Fhima, Director of Strategic Programmes in the UK Ministry of Defence, tells us: “The DragonFire trials at the Hebrides demonstrated that our world-leading technology can track and engage high-end effects at range. In a world of evolving threats, we know that our focus must be on getting capability to the warfighter and we will look to accelerate this next phase of activity.”15
Some future wars are already in progress. Improved, intensive control of airspace could change the battlefield for all unmanned aerial vehicles (UAVs). In Iran’s recent attempt at a massive attack on Israel, deploying 170 drones and 150 ballistic and cruise missiles, nearly all were taken down by defence interceptors, not lasers. It was a 1,000-mile direct attack from Iranian territory.
Tamara Qiblawi, Senior Investigator for CNN, began her report on Iran’s attempted attack on Israel by saying, “[T]he world held its breath as weapons whizzed through the night sky. They were balls of fire hovering overhead as onlookers across three different countries filmed images that seemed to harken the start of a cataclysmic war.” She then went on to say that “the operation amounted to little more than a terrifying fireworks display.”16 Though Qiblawi suggested that Iran’s intention was a show of retaliation without inciting an exhausting war, Iran’s intention could not have been to fail. Its poor display of weaponry in a show-off battleground theatre will have significantly diminished future international sales of its weapons.
Iran is a major producer of drones and hopes to be a supplier on the international stage. If defence interceptors can take down massive drone swarms with the success seen on April 14, 2024, Iran’s hopes for drone sales must be fading. Now, with DragonFires ready to be tested on the real battlefields of Ukraine, unless Iran’s drones are vastly improved to avoid interception, those relatively expensive drones will be wasted. Provided that DragonFire works as expected, the laser may also prove more capable of hitting fast-moving targets than interceptor missiles, some of which cost $1 million per launch and have the disadvantage of being unable to travel at the speed of light.
As if that were not enough unmanned-weapon advancement, the US has its RQ-4 Global Hawk, “the flying albino whale,” with a 130-foot wingspan, which can fly 60,000 feet above sea level on a preplanned autopilot route with a range of 8,700 miles.17 It can take off, fly 3,000 miles, and return by itself. Beyond that, there is the Polecat, something different: a bomber drone with “a fully automatic flight control and mission-handling system” able to take off and land without human instruction.18 Soon there will be solar- and liquid-hydrogen-powered unmanned aircraft systems (UASs) almost the length of a football field.
What about the idea of battlefields in the air? By that I mean command centres that stay aloft at high altitudes. There are plans for airships larger than football fields to be parked in the air like supermassive hummingbirds at a sugar-laced feeder “as high as one hundred thousand feet up, for weeks, months or years, serving as communications relay, spy satellite, hub for ballistic missile defense system, floating gas station, or even airstrip for other planes and drones.”19
What?! That’s right, you heard it here: a battlefield in the sky. What seems to be science fiction is technology moving from fiction to real drawing boards. But those massive, unmanned airships might soon have tiny relatives, “itty-bitty, teeny-weeny UAVs.”20 These are insect-size air vehicles, nanobots weighing less than 10 grams, that can hover in one place for a minute or two and have a range of 1,000 metres at a speed of 10 metres per second, quite fast for something that small. Peter Singer, Professor of Practice at the Arizona State University Center on the Future of War and the School of Politics and Global Studies, tells us that some micro unmanned aerial vehicles can be the size of dragonflies or maple tree seeds. They are planned to become a new wave of spy drones carrying sensors and cameras powered by chemical rockets. They can self-recharge using electromagnets while hovering over lightbulbs or electrical outlets, climb pipes, and peek into windows to gather intelligence. How’s that for something that would have taken fantastic imagination for anyone living long before 2024?
Huh? Yes, believe it, but whatever else there is to know is classified. Someday, these tiny things will become self-propelled missiles that could “deconstruct a target from the inside out.”21 Beware, though. The guiding talent and funding of US military robotic research come from the Defense Advanced Research Projects Agency (DARPA), a 67-year-old agency of the US Department of Defense unequivocally responsible for the Internet, email, cell phones, computer graphics, weather satellites, fuel cells, lasers, night vision, and the Saturn V rockets. With such a track record and a budget of almost $4 billion, we should not underestimate DARPA and whatever plans it has for the future. Those plans will certainly involve robotic motherships and swarms of autonomous drones, all either managed from a distance or left to their autonomous brilliance.
Star Wars fantasies come to life
The scary, creepy, chilling part is that weapons now in draft stages are classified top-secret. So we are far from knowing how battles will be fought as early as a decade from now. It is alarming to think that someday we will have robots fighting robots in space. That might be alright. We might have to settle for a Battlestar Galactica programme, but could we handle troops of robots lobbing whatever they can toss down at whomever they believe is their enemy? Something to think about. Let’s be thankful that humans can think.
Technologies are arriving at rapid speed to operate battlefields with little human control. Humans could continue to operate unmanned weapons remotely, or future robots could take over, autonomously sensing and acting in the world just as a law-abiding human combatant would. It poses a dilemma that should be thought through soon, before it is irrevocably decided by military planners.22
If the US can pilot an attack far from the battlefield, sooner or later the supply chain of remote-controlled weapons will be available to almost any allied country that can afford them. Enemy states will fight on other states’ homelands after acquiring remote-controlled weapons. But in the next generation of conflicts, the tools will not be just remotely controlled but semi-autonomous. And the generation after that will be a fully autonomous military, with machines gaining more and more autonomy, making decisions about whom to attack. Will there be a generation after that? Of course! There always is. Some futurists point to the Energetically Autonomous Tactical Robot (EATR), which could “forage for plant biomass to fuel itself, theoretically operating indefinitely.” While on long-range missions, EATR searches for and grabs preferred food with a robotic arm and puts it in its combustion-engine “stomach,” which charges its battery. No joke. It is a US military project developed by two technology companies and the University of Maryland’s Center for Technology and Systems Management.23
“As our weapons are designed to have ever more autonomy,” Singer tells us, “deeper questions arise. Can the new armaments reliably separate friend from foe? What laws and ethical codes apply? What are we saying when we send out unmanned machines to fight for us? What is the ‘message’ that those on the other side receive? Ultimately, how will humans remain masters of weapons that are immeasurably faster and more ‘intelligent’ than they are?”24 No doubt, remote-controlled mechanical soldiers save lives, mostly on the side using them. As some generals have pointed out, they obey orders, have no fear, do not get hungry, and show no emotions when their robotic partners get hit. There are humanitarian benefits to autonomous warfare: damage is better controlled and mostly limited to infrastructure and the opponent’s war machines, and machines tend to follow rules of behaviour. Of course, there is always the concern of the moral health of military leadership and governance. So one welcome advantage would be a diminishing number of war crimes.
Let’s not fool ourselves, though. Every technological advance brings both benefits and problems. Fully independent lethal weapons could be a problem if they cannot distinguish between military and civilian targets. The war in Ukraine is accelerating the research and development of autonomous weapons while the Red Cross raises concerns about the risks of unpredictable slipups beyond human control.25 Controlling the morals of humans is hard enough. Can we do better with robots? With the technical development of autonomous power advancing exponentially, we should be better prepared for a future in which robots begin to learn, gain independence, and take autonomous control of decisions once limited to humans. There will come a time when some players feel comfortable, immorally, designing robots that can escape the control systems meant to keep moral responsibility over decisions of life and death.26,27
Robot on robot
I estimate that the US will be able to — not that it will — wage a completely autonomous small-scale war by 2035. That would require a remote workforce for operating computer-guided missiles, drone bombing, and repairs. Eventually, it will cut the living workforce by more than 60%. What will be the role of the International Criminal Court when fully autonomous killer drones and infantry robots are fighting our wars? What happens when all sides of a conflict go full AI with swarms of synchronised deadly automatons? Wars will continue until one side falls behind in its speed of weapon replenishment — a fight to the last android.
Ukraine has used thousands of semiautonomous drones along with AI drone interceptors. It has already used remote-controlled unmanned ground vehicles (UGVs) that move up to the front lines and lob bombs into Russian infantry trenches.28 No country yet possesses fully autonomous robot combatants capable of attacking enemies on the ground; it is just a matter of time before the next technological advancement in weaponry, when wars will be fought from remote positions with robot warriors on both sides of a conflict so that all the fighting could spare humans from harm.
The moral responsibility of military robots
As almost always happens sooner or later in warfare, highly prized, expensive, and advanced weapons are lost, captured, or compromised in battle. Those seized weapons are then studied to be replicated, manufactured, and adapted to an ongoing conflict. They tend to be relatively small and easily transportable. So what had once been expensive, innovative tools of war become cheap knockoffs readily available for counteroffensive battles. Worse, they could be found on the black market for weapons, for sale to anyone from terrorists to revolutionaries.
In late 2020, the assassination of Mohsen Fakhrizadeh, Iran’s highest-ranking nuclear physicist, was carried out by a remote-controlled truck with a concealed satellite-operated machine gun and enough explosives to make the whole scene practically disappear, so that Mossad could not be blamed. The killing worked smoothly, but some equipment survived and was pieced together as evidence of Israel’s role. If an unmanned vehicle can position itself miles from a stationary remote control and trigger an automatic weapon, then so can a tank, or a robot. Making tanks and robots into killing machines takes sophistication to a higher level than making Mossad’s truck, but the basic science and engineering behind a truck with a gun is only a tad simpler than that of a heavy-equipment vehicle with a large-calibre gun mounted on a turret. Knowing how to build automated conventional weapons is now out of the bag, so the rest of the world’s countries — as well as insurrectionists, revolutionaries, and terrorists — can make their own unmanned killing machines. With that, battles will be robot on robot.
Of course, the basics are not as simple as they seem in words. In the best circumstances, moral codes might be part of an autonomous infantry robot’s specs, but they depend on an initial programme of tactical brilliance whose objectives, in unexpected situations, could clash with what the robot’s sensors report. Without a human brain, it might go berserk, not knowing when to flip the off switch. As Daniel H. Wilson, a New York Times bestselling author, wrote in his book How to Survive a Robot Uprising, “Any machine could rebel, from a toaster to a Terminator, and so it’s crucial to learn the common strengths and weaknesses of every robot enemy. Pity the fate of the ignorant when the robot masses decide to stop working and to start invading.”29
The sciences and technologies needed to build or modify military robots are not as challenging as nuclear fission or cancer research, especially because much of the software is open-source. So almost every country in the world is now working on manufacturing and improving military robots. As Peter Singer said in a 2009 US Naval Academy lecture, “These technologies are not like an aircraft carrier or atomic bomb, where you need a massive industrial structure to put them together. A lot of them use commercial, off-the-shelf technology. Some of it is even do-it-yourself.”30 If Somalia, a sovereign UN member state vulnerable to conflict or collapse, can possess military drones,31 so can non-state actors, terrorists, revolutionaries, or small groups of activists hostile to a state. We might see this scenario as a way to save lives, perhaps because machines would replace living soldiers, and we have no emotions over the destruction of a machine or care about the damage, other than a spike in the cost in taxes. Doesn’t that put war on the platform of being acceptably entertaining?
Outsourcing the military
I see an imminent future with commanders thousands of miles from the attack zone, in limited physical danger, hired by multinational corporations in the business of building android forces for hire. Why not? Russia had its Wagner Group, a shadowy band of hired fighters, and the United States has its Academi (formerly Blackwater) and ArmorGroup (accused of wasteful practices and fraudulent hiring), which will eventually have android armies operating beyond laws and treaties. With progress in chip technology permitting the automation of almost every industrial product, from electric toothbrushes to military hardware, future wars could be fought as entertainment (think lions and gladiators in the Colosseum) on reality primetime TV, camera-captured by robots and narrated remotely, all paid for by advertisers. Tune in for Sunday Night War, with ceasefires during weekdays. Would that be legal? So far, yes, at least in the US, which is far behind in establishing rules for drone warfare. And so far, international law has nothing in its statute books about the use of drones or robotic ground troops. Imagine the profits that drone companies could make waging war on one country, paid for by another. All it takes is money, not the lives of the aggressor. If governments can outsource prison ownership, why not outsource android wars to save body bags?
Private armies are not new. Carthage employed them against Rome in the First Punic War in the early third century BC, and they probably go back much further than even the epic wars brought to life by the Homeric poets. During the Middle Ages, especially in the Hundred Years War (1337–1453), mercenaries were legitimate, essential soldiers fighting outside the authoritative control of the princes or local lords of warring states. Today, they are no different from clandestine cartels without detectable links to a government. Future wars will be outsourced so that corporations can employ warriors to invade or defend, neither knowing nor caring what they are doing. They will become athletic teams to cheer on, equipped with guns, bombs, tanks, and fighter jets. Those private players will have a supplying industry of arms dealers willing to sell to anyone as they lobby to convince governments to start wars they say they can win, even when they know they can’t. Private fighters were always a problem in the mercenary system because they often joined for the money and, therefore, could switch employment toward any force that had power and money. They had no attachment to an ideology and were prone to keep or steal weapons to be sold or used to pilfer unsecured villages.
Today’s private armies are different, yet concerning. They, and even private citizens, have no sovereign immunities and are bound by customary law and the Geneva Conventions, yet there are human and geopolitical consequences.32 Under the Rome Statute, a treaty-based statute of humanitarian law, offenders of war crimes, including private individuals, can be tried and sentenced to many years in jail. But without state-run disciplinary codes of conduct, private armies can work for anyone as long as they do not attack their home state. We witness that behaviour in Africa, where the UN has identified individual mercenary groups moving from conflict to conflict.33
One concern for the future of private armies is the multinational corporations that could replace human fighters with androids, which could make it easier to use force without empathic recognition of non-combatants. Moreover, for states that care about public opinion (even authoritarian states do), private armies make going to war more acceptable, since citizens care less about losing a private fighter, especially a foreign one, than about losing a state soldier.34 Strangely, though, the citizenry might care more about the loss or damage of robotic weapons than about a private fighter, because the weaponry is paid for by their taxes.
Cyber vigilantes and the weaponisation of everything
Future wars will be far more autonomous than today’s. That is a concern, but alongside it are cyberwar distortion machines, still in their newborn operating stages. We already have amplified disinformation campaigns invading democratic countries to turn propaganda on public policy, not with lethal weapons, but by creating culture wars of division and false realities to sway voters in directions that benefit the cyberwarrior whose mission is to support or oppose a military conflict.
What will happen when civilians, working from their home countries with civilian wherewithal in digital operations, get involved in other countries’ armed conflicts by using digital technology to influence public opinion? They may be concerned individuals, loyal refugees, patriotic hackers, or professional cyberwar dealers who can, for whatever reason, sabotage economies, hospitals, power stations, and government services, and spread false information that encourages political extremes and confuses voters ready to cast their ballots.35
Cyberwars are not designed for killing; rather, they are intended to win support for disputes that are likely either to start or to end as military conflicts. Disinformation wars are not new, though we tend to think that all forgotten history is new. For hundreds of years, we have had flashpoints of conspiracies supported by polarising disinformation whose ultimate purpose was to void factual trust and rock the existing state of affairs. With new social media tools, disinformation wars can now have geopolitical adversarial foot-soldiers without boots, possibly in pajamas, sitting anywhere in the world at computers sowing distrust, uncertainty, fear, and chaos, and exploiting the belief in conspiracies needed for political division. TikTok, Facebook, and Instagram, electronic war machines with instantly visible battle actions, can have an acute effect on how wars appear. Without conveying the realities of war, they show viral images that win hearts and minds, building the popular support that empowers one side. It is a relatively new medium, suitably recognised as electronic warfare.
We know that once fake information reaches social media, it tends to run through an echo chamber of multiplying promotion. That is how social media news works. Across the globe, we are seeing deep divisions injected with peculiar uncertainty; I say that as a euphemism, lacking any better way to put it. In truth, the bot machines offer no truth, only what Christopher Paul and Miriam Matthews of the RAND Corporation rightly called a “firehose of falsehood”, their description of Russian propaganda combining “a very large number of communications and disregard for the truth.”36 There is more than that whooshing through the firehoses, and it is not mere disregard but a barrage of disinformation that can topple democratic governments. Firehoses of falsehood have no defensive valves. Faucets can be closed, but by the time that happens it is too late; others keep running. They are the future weapons against democracies. Of the 195 member states of the United Nations, more than 64 are holding elections in 2024. All are under attack by Chinese, Russian, and Iranian bots using heavy-duty disinformation tactics that could sway opinion to benefit hopeful authoritarians and crush the rule of law in constitutional democracies. It is another kind of war, one that does not necessarily kill, at least not before some tyrannical winner surfaces amid public aftershocks.
Disinformation wars coming from hackers are not so benign as to be ignored. They can start wars, support wars, or end wars. Paid Russian bots have been feeding Kremlin disinformation to the Americas and Western Europe since the 2016 US election to sway support towards a presidential candidate who could ignore a planned invasion of Ukraine or abandon NATO. It almost worked in 2020. Who knows if it will work this year? We know that Russian and Chinese state-directed bots have been spreading false narratives, promoting conspiracy theories, and posting AI-made fake images of mock candidates online to meddle with the US presidential election this coming November.37 Anti-Ukraine, pro-Russia propaganda on social media has already been parroted by influential broadcasters and relayed by US members of Congress on the House floor. When one country hires online hacker warriors to meddle in another’s election, targeting candidates and parties, it might not be warfare as we know it; it does, though, have an enormous influence on the potential outcome of present and future wars. Fortunately, international humanitarian law still holds strict rules, prosecutable as war crimes, because cyberspace is governed by the same international laws as the space we live in. Cyberspace is not a lawless space.
Future wars will follow a marked change in direction, from war plans designed to kill combatants with autonomously launched incendiary weapons to cyberwar strategies that strike at the political bases and economic substructures of enemy states.38 Cyberwars rely on accelerating technical advances that will transform the weaponry supporting military conflicts while becoming more accessible to hackers who profit from extremist attacks, criminal activity, and information dealing. Even if cyberwars between enemy states diminish the number of casualties, there will still be dealers itching for profits. Though the number of global war-related deaths has indeed been declining since the last world war ended, that number is far too high for a civilised world.39 There is a highly disturbing likelihood that future warfare tactics could involve crippling satellites in space by injecting malware to muddle Global Positioning Systems, or taking over nuclear plants, power structures, or hospitals, raising the odds of more deaths. However, if a new world of cyberwarfare brings the mechanics of war’s evolution to new cyberbattlefields where lethal weapons are obsolete and countermeasures are in place, then perhaps we will not continue to kill in such large numbers. There might still be wars, but not the protracted ones known to those living in the first half of the twenty-first century.40
Mathematical models can, in theory, estimate the odds of one country defeating another in battle. Unfortunately, even the best models rely on many hidden variables and educated guesses: the number of troops deployed, the quality of each side’s artillery, and the number and quality of autonomous machines with no souls, no fear of death, and no mothers waiting at home. With enough information, mathematics can make some reasonably accurate predictions. But even the most sophisticated models cannot guarantee better-than-even odds of being correct. And they certainly cannot predict, with any moderate precision, the number of real people who will die.
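The article does not name a particular model, but the classic example of such battle mathematics is Lanchester’s square law (1916), which estimates attrition from just two guessed inputs per side: force size and per-unit effectiveness. A minimal, purely illustrative sketch in Python (all numbers invented for the example):

```python
# A minimal sketch of Lanchester's square law, the classic (1916) attrition
# model behind many battle-outcome estimates. All inputs are illustrative;
# real models juggle far more hidden variables than these two per side.

def lanchester_square(red, blue, red_eff, blue_eff, dt=0.01):
    """Euler-step the coupled attrition equations
       d(red)/dt = -blue_eff * blue,   d(blue)/dt = -red_eff * red
    until one side is annihilated; return the winner and its survivors."""
    while red > 0 and blue > 0:
        red, blue = red - blue_eff * blue * dt, blue - red_eff * red * dt
    return ("red", red) if red > 0 else ("blue", blue)

# With equal effectiveness, numbers dominate quadratically: 1,000 troops
# against 800 leave roughly sqrt(1000**2 - 800**2) = 600 survivors.
winner, survivors = lanchester_square(1000, 800, 0.01, 0.01)
print(winner, round(survivors))
```

The “square” in the law is the point: with equal effectiveness the survivor count is √(red² − blue²), so a modest numerical edge compounds quadratically, which is exactly why small errors in the guessed inputs swamp the model’s precision.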
A decade from now
As I said, some countries, especially the US and its allies, will be able to wage a completely autonomous small-scale war by 2035. Soon after, countries that can afford them will be buying or building uncrewed warships, fighter jets, submarines, swarms of drones, and even infantry fighting vehicles making grave decisions, all with no human operators. Over the next thirty years, the AI technology behind semi-autonomous weaponry will accelerate towards fully proficient know-how in building self-governing tools of war that we may be unprepared for. The potential is not an existential threat. Rather, it breaks the moral codes that define humanity. Elinor Sloan puts it this way in the epigraph of this article: “[I]t is not clear,” she writes, “whether it would ever be ethical for a machine to kill a human.” Are we prepared to risk the degradation of morality by deploying fully independent intelligent machines on battlefields?
It is hard to answer that question, since all fortunes of war are unpredictable. Wars could be robots-on-robots. Or…, or…, OR…, could there possibly be an altogether different, unexpected scenario? All this noise of autonomous warfare strikes at our insensitivity to moral codes and at the foolishness of supporting wars without benefits for either side, benefits other than possible arms sales. Yet, I say partly as a whim but with seriousness, as wars become more nonsensically autonomous, they may bring us to a time when war becomes overwhelmingly wasteful and impractical. There may then be a time of no more international wars. Is it possible to imagine an end to wars? Whoa, you say, but wouldn’t that be wonderful? Yes! That still leaves syndicated crime, intranational wars, terrorism, and barbarisms launched by a few of those rogue, war-mongering, narcissistic failing leaders. Those are altogether other matters.
About the Author
Joseph Mazur is an Emeritus Professor of Mathematics at Emerson College’s Marlboro Institute for Liberal Arts & Interdisciplinary Studies. He is a recipient of fellowships from the Guggenheim, Bogliasco, and Rockefeller Foundations, and the author of eight acclaimed popular nonfiction books. His latest book is The Clock Mirage: Our Myth of Measured Time (Yale).
References:
- Braun, William G. Prof.; von Hlatky, Stéfanie Dr.; and Nossal, Kim Richard Dr., “Robotics and Military Operations” (2018). Monographs, Books, and Publications. 20. https://press.armywarcollege.edu/monographs/399/
- I advise seeing the astounding animated essay, “How drone combat in Ukraine is changing warfare,” published on March 26, 2024, in Reuters by Mariano Zafra, Max Hunder, Anurag Rao, and Sudev Kiyada. See: https://www.reuters.com/graphics/UKRAINE-CRISIS/DRONES/dwpkeyjwkpm/
- Steve Mills, The Dawn of the Drone: From the Back-Room Boys of World War One (Oxford: Casemate, 2019) 2.
- https://apps.dtic.mil/sti/tr/pdf/ADA568455.pdf
- The number of civilian casualties in WWI is between 6 and 13 million, close to 47% of all deaths due to the war.
- W. Singer, Wired For War: The Robotics Revolution and Conflict in the 21st Century (New York: Penguin, 2009) 116.
- If you want to know almost all there is to know about future wars and have more time to learn about war, I see no easy way other than to read Peter Warren Singer’s awesome book, Wired For War.
- United States Air Force, USAF Unmanned Aircraft Systems Flight Plan 2009-2047, 27.
- https://www.avinc.com/tms/switchblade-600
- https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement
- https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/
- https://www.eglin.af.mil/News/Article-Display/Article/3480769/ai-successfully-pilots-xq-58-aircraft-at-eglin/
- https://hbr.org/2024/01/leading-in-a-world-where-ai-wields-power-of-its-own
- https://www.cnn.com/2024/03/13/europe/britain-air-defense-laser-dragonfire-intl-hnk-ml?cid=ios_app
- https://www.gov.uk/government/news/advanced-future-military-laser-achieves-uk-first
- https://www.cnn.com/2024/04/14/middleeast/iran-israel-attack-drones-analysis-intl?cid=ios_app
- US Air Force, “RQ-4 Global Hawk,” Air Force Fact Sheets, 16 October 2008, http://www.af.mil/ AboutUs/FactSheets/Display/tabid/224/Article/104516/rq-4-global-hawk.aspx (accessed 1 March 2014).
- https://apps.dtic.mil/sti/pdfs/ADA612259.pdf
- Singer, Wired For War, 117.
- Hew Strachan & Sibylle Scheipers (Eds) The Changing Character of War (Oxford: Oxford University Press, 2011) 340.
- Singer, Wired For War, 119.
- Braun, von Hlatky, and Nossal, “Robotics and Military Operations.”
- https://www.robotictechnologyinc.com/images/upload/file/Overview%20Of%20EATR%20Project%20Brief%206%20April%2009.pdf
- https://www.wilsonquarterly.com/quarterly/_/robots-at-war-the-new-battlefield
- https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons
- Hellström, T. On the moral responsibility of military robots. Ethics Inf Technol 15, 99–107 (2013). https://doi.org/10.1007/s10676-012-9301-2
- https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1010&context=phil_fac
- https://www.businessinsider.com/ukraine-using-ground-drones-destroy-russian-trenches-2024-4?amp
- Daniel H. Wilson, How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion (New York: Bloomsbury, 2005) 14.
- https://www.usna.edu/Ethics/_files/documents/PWSinger.pdf
- https://fundforpeace.org/
- https://www.un.org/en/genocideprevention/genocide.shtml
- Sarah Percy, “The United Nations Security Council and the use of private force”, Vaughan Lowe et al. (eds.) The United Nations Security Council and war (Oxford: Oxford University Press, 2008), 229.
- Sarah Percy, “The Changing Character of Private Force”, Hew Strachan & Sibylle Scheipers (Eds.) The Changing Character of War (Oxford: Oxford University Press, 2011) 275.
- https://harvardnsj.org/volumes/vol1/schmitt/
- Christopher Paul and Miriam Matthews (2016). “The Russian ‘Firehose of Falsehood’ Propaganda Model.” RAND Corporation. doi:10.7249/PE198. https://www.rand.org/pubs/perspectives/PE198.html
- Tiffany Hsu and Steven Lee Myers, “China’s Advancing Efforts to Influence the U.S. Election Raise Alarms,” New York Times, April 2, 2024.
- https://worldfinancialreview.com/category/columns/joseph-mazur/
- https://www.un.org/en/un75/new-era-conflict-and-violence
- Sarantakes, Nicholas Evan (2023) “The Future-War Literature of the Reagan Era—Winning World War III in Fiction,” Naval War College Review: Vol. 76: No. 3, https://digital-commons.usnwc.edu/nwc-review/vol76/iss3/7
- https://www.reuters.com/investigates/special-report/us-chinaotech-drones/