Tesla CEO Elon Musk has been very vocal this year warning humanity about destructive Artificial Intelligence.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that …”
It seems like a worry for a future we’ll never see, but the entire US stock market is currently at the mercy of highly sophisticated algorithms that can go berserk at a moment’s notice, crashing the entire market in a matter of minutes (or individual stocks in a matter of seconds).
Elon likens advanced AI to “summoning the demon”, comparing the consequences to an uncontrollable beast capable of destroying us. It’s good to talk about it now, particularly coming from someone as innovative as Elon, but he’s speaking in generalities. We’re already relying on technology that can get out of control. It’s not AI deciding that humans are a threat, like Skynet, but it’s dangerous enough to shut down power grids, nuclear reactors, and the stock market if the programming breaks.
As the US military increasingly relies on drones and drone technology to fight, we’re entering an era where the next step is giving these machines the power to kill autonomously. Yet it’s not just the obvious control over weapon systems that could potentially turn on us, or accidentally kill based on faulty programming, but the very systems that keep our modern way of life humming. Putting our energy and resources in the hands of an unpredictable beast can be just as catastrophic as a defense system gone mad. The problem is, as with all scientific progress, we’ll dive in first and figure out how to swim later. And we may only learn our lesson after entire cities go dark, a reactor melts down, or a modern economy crashes.
Skynet is the fictional AI from the Terminator movies that, upon becoming sentient, decides that humans are a threat to its existence and, having been put in control of the vast military resources of the United States, launches a full-scale nuclear attack on Russia, knowing the Russians will retaliate in kind.
The devastating war is only the beginning, however. Military-grade machines stalk the burned cities hunting for survivors. In the ’80s and ’90s, James Cameron’s vision was terrifying and plausible, but the technology involved in his doomsday scenario was still far-fetched. The real world simply wasn’t advanced enough. Someday, yes. But when?
The countdown has begun. The DoD is aggressively pursuing advanced technology to replace an inevitably shrinking US ground force. The modern US military is facing dire budget constraints and an ever-changing battlefield that requires smaller, faster, and more nimble forces. The heavy reliance on drones to fight our wars overseas is just a prelude for what’s to come: reliance on machines to do our fighting in the air, on the ground, and at sea.
As I work on the sequel to A Cold Black Wave, I’m exploring a post-apocalyptic world dominated by machines and their inevitable integration with human beings. The idea of a weaponized machine patrolling an occupied city is now an imminent reality that is actively being pursued by the world’s largest military. It will bring to light a slew of questions, including those of accountability and morality. If a machine mistakenly blows up a school bus full of kids, who is held responsible? The contractor who built it? The US military as a whole? Today’s US drones are controlled by “operators” who are rarely, if ever, mentioned when a drone strike kills innocent civilians. It’s always the drone. The “drone” accidentally killed people. The drone only acts by the decision of a human, but we’re already conditioned not to care about the consequences or who was responsible.
DARPA, for its part, is also looking into using actual soldiers as “surrogates” for bipedal machines, akin to the movie Avatar (after which the project is named). It’s possible that rules will be added to the Geneva Conventions against autonomous “combat” machines in particular, so that a human remains responsible for any associated deaths.
Otherwise, I can see the headlines now: “Glitch in software causes errant missile strike on school bus”.
There appears to be a growing concern about “autonomous weapons” becoming a reality, and a campaign has been started to keep them from ever seeing the light of day. In other words…KILLER ROBOTS! See article: here. Considering my book “A Cold Black Wave” involves this very reality, I thought this would be an interesting topic to explore.
The article states the obvious: these machines do not actually exist yet, AND the technology is readily available to make one in a short amount of time. In fact, I’d go so far as to say that an enthusiast could create one in their garage. Xbox’s Kinect technology has already been used by kids to develop machines that can accurately shoot a basketball. Add a hydraulic arm with a gun attached, and now you’re shooting bullets.
My book leaves a lot of questions unanswered about the machines that inhabit Josh and Leah’s new world (which will be answered in the sequel!); however, it’s not difficult to divine their purpose. Killer machines are not unique to science fiction; the idea has been toyed with for decades. If a nation were to create autonomous machines for use in combat zones, how would they be controlled? Today, the United States’ recruitment of drone pilots is going parabolic. The 21st-century pilot and co-pilot sit thousands of miles away from their aircraft, controlling it remotely from a secured location on the ground. Drone pilots are the new fighter jocks.
The move from human pilots (controllers) to fully autonomous control by the machine isn’t a technological hurdle, but a moral and political one. In a world where everyone tries to blame someone or something else for their errors and wrongdoing, an autonomous machine that kills the wrong person would serve as the perfect scapegoat for “collateral damage”. Programming is never perfect, right? A machine is still prone to errors, but those errors will be judged in percentages. What percentage of murderous errors will be considered acceptable? 10%? 5%? If a machine is responsible for 100 “kills” per month, but 10 of them are children killed in “error”, will that be an acceptable margin? Would that number allow the DoD to wash its hands of such things? Innocent civilians are already killed in error by our drone strikes, and this has become an acceptable consequence of our “war on terror”.
As science fiction writers, it’s our responsibility to envision a future that takes a current or potential technology to its horrifying limits. I still consider James Cameron’s Terminator story to be the de facto end-game for autonomous machines (except maybe without the time travel), which eventually consider their creators to be the greatest threat to their existence and decide to wipe them out. Once you create something that can think and make decisions on its own, you unleash an uncontrollable force (if you have kids, well, case in point!).
While we’re infinitely further from that reality than from the first combat-ready autonomous machines, it certainly doesn’t hurt to start laying the groundwork now to prevent such technology from becoming an acceptable way to conduct warfare. As long as the world’s governments remain somewhat civilized, the need for such technology is not pressing. We still have the budgets and the manpower to wage our little wars with real people, and enough civilized people to voice outrage over egregiously horrifying incidents of “collateral damage”, that for now the risks of deploying autonomous machines outweigh the benefits. If, God forbid, we are ever faced with another world war, where victory must come at nearly any cost, then the deployment of killing machines will be seen as a “game changer” and the inherent risks worth whatever moral and political backlash may stem from their use.
If we’re willing to use nuclear bombs on cities filled with millions of people, certainly the use of killer bots will seem humane in comparison.
*UPDATE*: A new article released today is titled “Navy unveils squadron of manned, unmanned craft”. The Navy heralds the use of unmanned drones because they can conduct riskier missions that would traditionally put a human pilot in danger. Currently, the drones used by the US are limited to aircraft. That won’t always be the case once someone develops a single machine that can do the work of 12 grunts. Why send 12 young men down a hostile street when you can send a machine?