Google sets limits on its use of AI, but will work with the military in “many areas”, and self-police

No work on AI for weapons, CEO Sundar Pichai says. Nor surveillance technology “violating internationally accepted norms of human rights.” But he is effectively saying “trust us” on the oversight of work with the military.

One Google employee tells Wired that any such rules would be hard to trust if only interpreted and enforced internally. External oversight would be needed to reduce the risk of business concerns skewing decision making.

Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation, elaborates on that theme: “If any tech company is going to wade into a morally complex area like AI defense contracting, we’d recommend they form an independent ethics board to help guide their work. Google has a real opportunity to be a leader on AI ethics here, and they shouldn’t waste it.”

An article on Futurism argues that the guidelines are pretty much the same as they were before Project Maven, and alerts us to a potentially portentous development: “is it a coincidence that, last month, the company apparently removed its longtime motto “don’t be evil” from its code of conduct? You decide.”

Image: AleHive
