
Google will not use artificial intelligence for weapons, CEO Sundar Pichai says

Outlining a new AI policy, Pichai says Google will continue to work with governments and the military in other areas

Google announced it will not use artificial intelligence for weapons or to “cause or directly facilitate injury to people,” as it unveiled a set of principles governing its use of the technology.

Chief executive Sundar Pichai, in a Thursday, June 7 blog post outlining the company’s artificial intelligence principles, said that Google “will not design or deploy AI” in application areas that “cause or are likely to cause overall harm” except where the company believes “that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

The specific areas highlighted by Pichai include “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” as well as those that “gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Pichai said the company “will continue our work with governments and the military in many other areas” including cybersecurity, training, military recruitment and search and rescue.

In a separate post, Diane Greene, the CEO of Google Cloud, said that although Google “will not pursue certain types of government contracts,” it is “still doing everything we can within these guidelines to support our government, the military and our veterans.”

Google is believed to be competing against other tech giants such as Amazon and Microsoft for lucrative “cloud computing” contracts with the U.S. government, including for military and intelligence agencies.

Greene highlighted the recent public focus on Google’s September 2017 contract with the U.S. Department of Defense for the Project Maven initiative.

At least a dozen senior Google staff resigned in protest over the project, and thousands of employees signed a petition urging Pichai to shut it down, objecting to the development of image-recognition technology that military drones would use to detect, identify and track objects.

According to Greene, the Project Maven contract “involved drone video footage and low-res object identification using AI, saving lives was the overarching intent.”

Last week the company said that the contract would not be renewed.

“I would like to be unequivocal that Google Cloud honors its contracts,” Greene wrote.

“We will not be pursuing follow-on contracts for the Maven project,” she added, saying that the company is now “working with our customer to responsibly fulfill our obligations in a way that works long-term for them and is also consistent with our AI principles.”

“A big win for ethical AI principles.”

Some initial reaction to the announcement was positive.

The Electronic Frontier Foundation, which led opposition to Google’s Project Maven contract with the Pentagon, called the news “a big win for ethical AI principles.”

“Congratulations to the Googlers and others who have worked hard to persuade the company to cancel its work on Project Maven,” EFF said on Twitter.

Ryan Calo, a University of Washington law professor and fellow at the Stanford Center for Internet & Society, tweeted, “The clear statement that they won’t facilitate violence or totalitarian surveillance is meaningful.”

Mary Wareham, advocacy director of Human Rights Watch’s arms division and a leading figure in the Campaign to Stop Killer Robots, welcomed Google’s commitment not to develop artificial intelligence for use in weapons in a tweet.

“Governments should heed this latest expression of tech sector support and start negotiating new international law to ban fully autonomous weapons now,” Wareham said.

Google’s move comes amid growing concerns that lethal autonomous weapons systems could be misused or spin out of control. At the same time, Google has faced criticism that it has drifted away from its original founders’ motto of “don’t be evil.”

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

With reporting from AFP. This post was updated on June 7 to include reaction and additional background.
