Ronald
Arkin’s The Case for Ethical Autonomy in
Unmanned Systems is overly optimistic regarding the potential for fully
autonomous combat systems to maximize ethical behavior. In particular, Arkin
fails to acknowledge that a machine can only be as “ethical” as the human who
designs it, even if the technology is implemented to perfection. Once we
realize this fact, a host of new concerns regarding robotic warfare arises, and these
concerns must be addressed before we place undue trust in our technology: ethical
dilemmas previously limited to the battlefield now become the domain of defense
contractors and software engineers, where they will be subject to political and
economic pressures.
Critical
to understanding the discussion at hand is the notion that a robot can never be
“more ethical” than its designer. All autonomous decisions must ultimately
reduce to a set of binary choices if they are to be interpreted by a computer
(perhaps barring advances in quantum computing), and therefore a robust algorithmic
design must take into account many different types of external variables if a robot is to behave “ethically.”
But an algorithm, by its nature, only considers the classes of variables programmed
into it*. This means that if a new class of variable enters into an
ethical calculation during a violent situation, the programmer has no control
over how a robot will respond, and the ethical nature of the robot is therefore
constrained by the ethical depth with which a programmer has constructed his
algorithm. This is the case even if all of the information relevant to an
ethical decision is collected perfectly.
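The point can be sketched in code. The following is a hypothetical illustration only: the function name, parameters, and threshold are invented for this post and drawn from no real system. It shows how a decision routine can only weigh the variable classes its designer anticipated.

```python
# Hypothetical sketch: an engagement-decision routine only weighs the
# variable classes its designer programmed in. All names and thresholds
# here are invented for illustration.

def engagement_decision(target_confidence, civilian_presence, roe_satisfied):
    """Return True only if every check the designer anticipated passes."""
    if not roe_satisfied:        # rules of engagement not met
        return False
    if civilian_presence:        # known civilian risk
        return False
    return target_confidence > 0.95

# A variable class the designer never considered -- say, a surrendering
# combatant using an unfamiliar signal -- never enters the calculation;
# the function cannot weigh what it has no parameter for.
print(engagement_decision(0.99, False, True))  # True: all programmed checks pass
```

However perfectly the inputs are measured, the function's ethics extend no further than the three checks its programmer wrote.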
Arkin might argue
that a robot should be programmed to act “conservatively” in a situation with
unknown variables, standing down and perhaps even allowing itself to be
destroyed “in cases of low certainty of target identification” (333). But this
seems problematic during combined robot/human operations because by acting
conservatively, a robot might allow friendly soldiers to be killed. What is the
ethical response in this scenario? It is difficult to say. Of course, this is
exactly the sort of situation where we would prefer a person to have ultimate
control over the trigger behind lethal weaponry.
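The dilemma is visible even in a toy version of Arkin's conservative rule. Again, this is a hypothetical sketch with an invented threshold, not a description of any actual system.

```python
# Hypothetical sketch of Arkin's "act conservatively" rule: under low
# target-identification certainty the robot stands down, even at risk
# to itself. The threshold and scenario are invented for illustration.

CERTAINTY_THRESHOLD = 0.95

def conservative_response(target_certainty, friendlies_under_fire):
    if target_certainty >= CERTAINTY_THRESHOLD:
        return "engage"
    # The rule stands down regardless of friendly risk -- the parameter
    # is deliberately ignored, which is exactly the problem: the same
    # rule that spares possible civilians may cost friendly lives.
    return "stand down"

print(conservative_response(0.60, friendlies_under_fire=True))  # "stand down"
```

The rule is simple to state, but the code makes plain that "conservative" is itself an ethical choice the programmer made in advance, on behalf of soldiers who were not in the room.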
Since a robot
cannot be more ethical than its designer, whoever programs its algorithms
should be as virtuous as possible in order to avoid moral failures. However, we
have little reason to believe that software engineers will be as ethical as
possible when writing their code, as the CIA, DoD, and military organizations
will have an incentive to demand algorithms that suit the practical interests of
the US. Arkin gives a long list of reasons why civilians get killed during war,
and one reason in particular stands out: utility (338). The following question
then presents itself: what reason do we
have to expect our nation’s leaders to disregard utility? We can and should
expect algorithms to be designed in such a way that the sacrifice of civilians
is acceptable in order to achieve some “greater” military end. Jo Becker and Scott
Shane describe President Obama’s decision to order a drone strike against
Baitullah Mehsud, even though all parties involved knew for a fact that
innocent civilians would die. Arkin believes that the killing of civilians for
the sake of utility is “alien to current artificial intelligence efforts and
likely…to remain so” (388), but apparently this line of thinking is not alien
to the leader of the free world.
At the very least,
it seems like “genocidal thinking” and “power dominance/subjugation” (388) won’t
be on the feature list for a new generation of US-funded autonomous robot
fighters. That does not mean concern is unwarranted. Arkin seems to think we
should breathe a sigh of relief because these unethical routines are simply not part of
current AI research. But we should remember that the US, even with its “upright
morals,” cannot be expected to have a monopoly on the use of autonomous force,
especially given rapid developments in technology. As Noel Sharkey points out, “it
is difficult to design and develop these new technologies but they are not so
difficult to copy” (381). Time moves forward, and eventually unscrupulous
actors will be able to acquire dangerous robotic weapons, programming them
without the ethical considerations we value so highly.
All of this is not
to say that we should abandon efforts at automation entirely, and frankly, the
steady march towards these new technologies seems impossible to stop. If we
cannot stop, however, perhaps we should at least pause and consider the
ramifications of robot warfare.
*I realize that
advances in AI, self-generating code, etc., might call this view into question.
However, if our AI did evolve to a
point where it could consider these previously unaccounted-for variables, then the
robot would be exercising a foreign morality whose development humans could not
have monitored. Such a situation does not seem ideal.