Martin Wright looks at the plethora of efforts to avoid unintended consequences from powerful new technologies

AI and its companion technologies have a huge capacity to transform the world for good, but also a worrying potential for some pretty devastating unintended consequences. When it comes to the question of governance, that poses quite a challenge. The “move fast and break things” culture that has helped drive AI is somewhat at odds with the safety-first, precautionary-principle approach of sustainability. And the fact that machine learning works best “in the wild” – ie, when it’s operating in the real world, not the confined environment of the lab – adds to the challenge.

Small wonder, then, that the last year or so has seen something approaching a frenzy of initiatives involving academics, tech companies and governments, aimed at setting standards and guidelines for AI.

They vary in breadth and focus, but most come up with strikingly similar sets of recommendations, which largely boil down to calls for the technology to be harnessed for the benefit of all of humanity, while minimising the risks inherent in its exploitation. Effectively – although without saying as much – they all start from the same premise as the Hippocratic Oath (“first, do no harm”): indeed, that might be a helpful preamble to all such guidelines.

Among them are the Future of Life Institute’s AI Safety Principles, which cover an impressively wide range of issues, from transparency and responsibility through to measures aimed at ensuring humans control AI, not the other way around, and that it brings “shared benefit and shared prosperity”. They also call for an end to an “AI arms race” (see Apocalypse soon?).

Recommendations developed by the AI Now Institute of New York University focus on transparency, in particular on overcoming the “black box” problem, and ensuring that AI doesn’t incorporate longstanding biases that might discriminate against disadvantaged groups. They stress the importance of diversity, too – involving women and minorities in AI development and decision-making, and also people from outside the usual disciplines of IT and engineering. The NGO BSR, formerly Business for Social Responsibility, is exploring ways to incorporate UN Human Rights principles into AI development guidelines.

The EU’s new General Data Protection Regulation is also concerned with privacy and the “black box”, enshrining the principle that everyone has a right to understand how data is being used to make any judgements that affect them. Critics point out that it lacks teeth, and can only be invoked after a decision has already been made.

One set of principles winning increasing respect is the Global Initiative on Ethics of Autonomous and Intelligent Systems, hosted by the Institute of Electrical and Electronics Engineers. It sees itself as an “incubation space for new standards and solutions, certifications and codes of conduct”. Its latest iteration, scheduled for release at the end of 2017, is expected to be particularly strong on ensuring wider environmental and social benefits.

The move fast and break things culture that has helped drive AI is somewhat at odds with the safety-first, precautionary-principle approach of sustainability

Most of the tech giants involved in AI are starting to develop their own thinking on the issue, with DeepMind’s Ethics and Society initiative perhaps the most ambitious, committing itself to “deep research into ethical and social questions, the inclusion of many voices, and ongoing critical reflection”. Google has partnered with Microsoft, Facebook, Amazon, IBM and Apple – effectively the Big Six of Tech – to set up a Partnership on Artificial Intelligence to Benefit People and Society, which aims to advance public understanding, and provide a “trusted and expert point of contact” on the issues involved.

All this is very well, but some believe governments need to get involved, and get tough. The Oxford Internet Institute has called for a European AI watchdog to police the way the technology is implemented. Its authors suggest sending independent investigators into organisations to scrutinise how their AI systems operate, and propose certifying “how they are used in critical arenas such as medicine, criminal justice and driverless cars”. “We need transparency as far as it is achievable,” says the Institute’s Luciano Floridi, “but above all we need to have a mechanism to redress whatever goes wrong, some kind of ombudsman. It’s only the government that can do that.”

DeepMind's AI initiative has committed to "deep research into ethical and social questions"

Governments are beginning to respond, but only just. Germany is drafting a set of ethical guidelines for driverless cars. The UK’s latest Industrial Strategy identified AI as an area of great potential, and commissioned an independent review led by Dame Wendy Hall, professor of computer science at the University of Southampton, and Jérôme Pesenti, chief executive of BenevolentTech. This came up with a range of more or less familiar recommendations, calling for more investment in research and training, programmes to win public trust and support, and greater diversity in the industry. One distinctive feature was the call for the development of “data trusts” to encourage the sharing of data to everyone’s benefit. Data and diversity were the focus of recommendations to government by the Royal Society, too.

The Confederation of British Industry, while joining the call for responsible AI, wants the government to convene a joint commission of business, academics and employee representatives to study the impact on people and jobs. And along with virtually everyone involved in the debate, it calls for more investment in skills and research.

People die and governments change because of stuff that happens with software. It’s got to be more regulated

Many experts think more robust government involvement is essential. “AI is too powerful not to have government be part of the solution,” says Craig Fagan, policy director at Tim Berners-Lee’s Web Foundation. Joanna Bryson, an AI researcher at the University of Bath, summed up the case neatly in an interview in The Guardian. “People die and governments change because of stuff that happens with software. It’s got to be more regulated,” she said.

So where does this leave the sustainability and CSR professionals, some of whom are probably even now contemplating a wodge of worries over AI landing in their in-tray? Well, not necessarily at square one. AI itself may be full of new, bewildering stuff, but anyone who’s been involved with sustainability over the last 20 years will find that at least some of the key issues are starting to look strikingly familiar. After all, at the core of the sustainability quest is the search to minimise the negative consequences of human ingenuity (on the planet, and on other people) while maximising human potential. Pretty much the same can be said of AI.


Google HQ (credit: Uladzik Kryhin/Shutterstock Inc.)

Which means that a sustainability lens can be a very helpful way of framing the debate. As Harriet Kingaby of Bora.com, a consultancy exploring this very topic, points out: “People are looking at individual issues around AI [such as privacy, or risk], when what we need is much more of a systemic approach. And that’s where all the lessons of systems thinking, which is at the heart of sustainability, can be so valuable.”

So when it comes to integrating a response to AI on the one hand with the whole sustainability structure on the other, we’re not starting from scratch. We don’t need to completely reinvent the wheel, in other words – even if it is attached to a driverless car.


Main image credit: Gaudilabs/Shutterstock Inc.