
Project Maven does not make Google evil

As controversy rages over Google's work with the U.S. Department of Defense to provide artificial intelligence software for American drone programs, Saahil Dama argues that regulation of autonomous weapons systems – not prohibition – is key

Saahil Dama

Recently, it was reported that about a dozen Google employees had resigned over the company’s involvement in the controversial Project Maven. These resignations come in the aftermath of a letter that employees had written to CEO Sundar Pichai protesting Google’s decision to assist the U.S. Department of Defense in developing image-recognition technology that would be used by military drones to detect objects and track their movements. This project, titled the Algorithmic Warfare Cross-Function Team (or simply Project Maven), is aimed at providing the DoD with “actionable intelligence and decision-quality insights” which will, ostensibly, be used for building autonomous drones and weapons in the future.

The letter is in the same vein as other letters against autonomous weapons, albeit with the added concern that association with Project Maven would tarnish Google’s reputation. “Google should not be in the business of war,” write the employees, arguing that the company should not outsource the moral responsibility of its technologies to third parties. To the employees, assisting the DoD in military surveillance and possibly even autonomous warfare is completely unacceptable. Their demands are clear – cancel Project Maven and implement a policy stating that Google will not build military technology.

There is nothing new in this debate around autonomous weapons. About a month ago, I published an essay titled “Banning Autonomous Weapons is not the Solution”, which proposed that instead of trying to ban autonomous weapons, the international community should focus on regulating their development and use. This idea was premised on the fact that nations have a significant incentive to possess autonomous weapons owing to their military and strategic benefits. Attempts to ban such weapons would prove futile, especially since non-compliant countries and terrorist groups could still use them to cause large-scale destruction. Hence, countries would need an arsenal of autonomous weapons to reduce military casualties and maintain the threat of mutually assured destruction against potential aggressors.

This essay evoked an interesting response from the Campaign to Stop Killer Robots, who wrote that pursuing regulation of autonomous weapons at this stage would be tantamount to accepting that it is already too late to retain control over such weapons.

Admittedly, it is not too late to retain control over autonomous weapons. As the Project Maven protests show, people are willing to go to great lengths – even risking their livelihoods – to prevent the proliferation of autonomous weapons. The Guardian recently published an article by three professors who expressed support for the resigning employees and asked Google to pledge against autonomous weapons. From Elon Musk to the late Stephen Hawking, there is widespread agreement that autonomous weapons must be opposed.

And for good reason. It is easy to be terrified by the thought of automated drones indiscriminately slaughtering civilians in the Middle East and robotic battle-tanks razing villages without remorse. But this is precisely the future that Google and Project Maven are seeking to prevent.

Autonomous weapons are the logical next step in modern warfare, for two important reasons. First, humans are suboptimal agents in conflict situations: they have a history of causing excessive collateral damage and committing human rights violations in the heat of battle. Second, the moment a single country or organization acquires autonomous weapons, an arms race will follow, because other countries will need autonomous weapons for self-defense.

If this is accepted as a starting point for a conversation on autonomous weapons, the issue shifts from what we should do to prevent the proliferation of autonomous weapons to how we can build and use autonomous weapons in accordance with international law and human rights. Regulation, not prohibition, becomes key.

Having accepted this, Google and other companies such as Amazon are working towards ensuring that autonomous weapons do not lead to the dystopian future envisaged by critics. Problems such as the inability to distinguish between civilians and combatants or between weapons and ordinary objects, excessive collateral damage, and other life-threatening mistakes would largely be a consequence of insufficient or improper data and training. Project Maven addresses these issues directly: data labelling, turning raw data into actionable intelligence, and developing algorithms to accomplish key tasks are among its core objectives. Given the breadth and volume of data that Google possesses, its involvement would help the DoD build safer autonomous weapons systems.

While the world deliberates over what must be done to stop autonomous weapons, Google, with the foresight of a grandmaster, is solving problems that will arise when the arms industry is flooded with these weapons, as it inevitably will be. Project Maven is a small start, but it is a harbinger of the role that companies like Google will play in shaping the future of autonomous weapons. As a Google spokesperson put it, “The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”

“Intended to save lives” are the operative words.


Saahil Dama is a technology lawyer working with a top-tier Indian law firm. He has previously published articles on issues such as the need for regulating autonomous weapons and self-driving cars. He has also co-authored papers on emerging issues in technology law such as 3D printing and copyright, and intermediary liability and hate speech. Previously, Saahil has served as the founding Editor-in-Chief of the Journal of Intellectual Property Studies at National Law University, Jodhpur (India).

Follow him on Twitter @life_trotter

All views and opinions expressed in this article are those of the author, and do not necessarily reflect the opinions or positions of The Defense Post.




2 Comments

  1. I have to disagree with the approach to thinking that Mr. Dama seems to adhere to. In an age of nuclear weapons and aerial bombing, particularly when combined, there can in reality be no such thing as a “just war”. For example, a drone carrying a 350-kiloton warhead, detonated over a major city, would immediately kill hundreds of thousands of civilians. Hundreds of thousands more would die within days to weeks from radiation and starvation. Is this not terrorism, whether committed by a fanatical armed group or a nation-state?

    If Google participates in this, to my mind it is participating in the threat of mass genocide, which is recognized by the UN and the Geneva Conventions as a war crime. With every generation, the ratio of non-combatant to combatant casualties grows – toward the murder of people who play no role in the conflicts their governments choose to create. If Google cannot distinguish between profit that comes from non-violent activity and profit that comes from violence, I call that corruption.

    If we agree that the classic definition of “terrorism” is the unlawful killing of non-combatants for political reasons, it is clear that nuclear weapons and aerial warfare are most definitely a form of “state terrorism”, in that the goals are political and the greatest number of people killed and hurt are non-combatants. A “just war” is now impossible because of the genocidal, ecocidal properties of weapons of mass destruction. There is nothing in the cosmos that could justify nuclear war: no amount of greed, money, love of power, hubris, arrogance, pride, conceit, resources, honor, freedom or sacrifice can justify the mass murder of millions of human beings who don’t want or deserve war. A general war between India and Pakistan would kill many tens of millions of non-combatants – an equal-opportunity kind of mass murder, in that all urban dwellers are targeted regardless of race, class, family origins, gender, age or economic status. A general war between Russia and the U.S. would kill about 600 million people from the heat, blast and fire; another 400 million would die from radiation poisoning and/or starvation. War is pointless and we must do away with it.

    Google is corrupted by money and power. By supporting American militarism (a 2016 Gallup worldwide poll ranks the U.S. as the most dangerous nation), it is contributing to an increasingly unstable and chaotic world. If we are such a smart species, why can’t we figure out a way to resolve problems without killing one another? Google’s answer is: if it makes a profit, we do it. That is an ethical failure, no doubt. As the writer Upton Sinclair wrote: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” The 3,000 employees who protested understand something about right and wrong that Google’s leaders do not. The world is becoming a darker place, and either we expand our sense of justice and work for peace or we are heading toward disastrous ruin.
