Google is banning the development of artificial-intelligence software that can be used in weapons, chief executive Sundar Pichai said on Thursday, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI.
The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote in a blog post. “As a leader in AI, we feel a special responsibility to get this right.”
The ethical principles are a response to a firestorm of employee resignations and public criticism over a Google contract with the Defense Department for software that could help analyze drone video, which critics argued had nudged the company one step closer to the “business of war.”
Google executives said last week that they would not renew the deal for the military’s AI endeavor, known as Project Maven, when it expires next year.
Google, Pichai said, will not pursue the development of AI when it could be used to break international law, cause overall harm or surveil people in violation of “internationally accepted norms of human rights.”
The company, however, will continue to work with governments and the military in cybersecurity, training, veterans health care, search and rescue, and military recruitment, Pichai said. The Web giant, famous for its past “Don’t be evil” mantra, is in the running for two multibillion-dollar Defense Department contracts for office and cloud services.
Google’s $800 billion parent company, Alphabet, is considered one of the world’s leading authorities on AI and employs some of the field’s top talent, including at its London-based subsidiary DeepMind.
But the company is locked in fierce competition for researchers, engineers and technologies with Chinese AI firms and domestic rivals such as Facebook and Amazon, which could contend for the kinds of lucrative contracts Google says it will give up.
The principles offer limited detail on how the company would enforce its rules. But Pichai outlined seven core tenets for its AI applications, including that they be socially beneficial, be built and tested for safety, and avoid creating or reinforcing unfair bias.
The company, Pichai said, would also evaluate its work in AI by examining how closely its technology could be “adaptable to a harmful use.”
AI is a critical piece of Google’s namesake Web tools, including in image search and recognition, and automatic language translation. But it also is key to its future ambitions, many of which involve ethical minefields of their own, including its self-driving Waymo division and Google Duplex, a system that can be used to make dinner reservations by mimicking a human’s voice over the phone.

But Google’s new limits appear to have done little to slow the Pentagon’s technological researchers and engineers, who say other contractors will still compete to help develop technologies for the military and national defense. Peter Highnam, the deputy director of the Defense Advanced Research Projects Agency, the Pentagon agency that did not handle Project Maven but is credited with helping invent the Internet, said there are “hundreds if not thousands of schools and companies that bid aggressively” on DARPA’s research programs in technologies such as AI.
“Our goal, our objective, is to create and prevent technological surprise. So we’re looking at what’s possible,” John Everett, a deputy director of DARPA’s Information Innovation Office, said in an interview on Wednesday. “Any organization is free to participate in this ongoing exploration or not.”
