The Coming Clash Over New Laws for AI

Barney McManigal
6 min read · Oct 30, 2020

Landmark Oxford forum sees agreement on the ‘tech-lash’, but not on new laws and regulations to address it

Author Safiya Noble told AI conference attendees that existing laws and regulations for algorithmic bias are “woefully out of touch.”

Oxford computer scientist Mike Wooldridge doesn’t lose sleep over killer robots because, as he says, developers still can’t get software-based artificial intelligence to tell a proper joke or an interesting story.

But as Wooldridge and colleagues celebrated the achievements of machine learning alongside 500 academics and entrepreneurs last year, speaker after speaker returned to the field’s recent problem areas: from biased algorithms and fake news to privacy breaches and automation-related job losses.

“There is a boatload of ethical challenges in front of us,” said Sir Nigel Shadbolt, a fellow Oxford computer scientist, at AI@Oxford, a conference held at Oxford’s Saïd Business School and designed to showcase British research and spin-offs to rival Silicon Valley.

Although speakers agreed that machine learning innovations have fuelled a “tech-lash,” they clashed over potential remedies, including the need for new laws and regulations. Some called for tougher government measures, while others opposed them.

As the UK positions itself for dominance in the global technology sector, thought leaders are struggling to balance public concerns with giving innovators space to work. The task is particularly fraught for universities, which value academic freedom but also rely on external funders for support.

At a time when industry and government are looking to academics to chart the course for safe and ethical AI, divisions at the Oxford forum suggest they are failing to provide a coherent way forward.

Most Oxford talks and displays celebrated the diversity of machine learning applications, from commercial products, like driverless car software, to environmental and social innovations, such as climate modelling. A full third of the programme focused on health applications, including use of computers to read X-rays, map the brain and train health care workers in Africa.

“They’re going to save lives on a massive scale,” Wooldridge said, explaining that the software behind these innovations broke new ground because it could absorb reams of existing data and learn on its own, even if it stopped short of human-level cognition.

But panellists also warned about harms implicit in software applications that have generated a torrent of public criticism — or “tech-lash” — in recent years. These include algorithms that make decisions about policing, education, recruitment and other matters, using datasets which contain gender and racial biases.

Others cited concerns about software that can collect information about children; the future loss of many existing jobs to automation; and the well-publicised use of social media platforms, including Facebook and Twitter, to spread misinformation during the 2016 US presidential election.

Some speakers focused on future technologies likely to raise ethical conflicts, from smartphones that can modify advertisements based on users’ facial expressions, to advanced lip-reading software that can transcribe a video conversation without audio.

Perhaps most troubling, one speaker said, are so-called deepfake videos, which can create fraudulent footage of a person from their photographs.

To address these concerns, Wooldridge and other organisers called for greater investment in ethics research, and highlighted the proposed Institute for Ethics in AI at Oxford, made possible by a £150 million donation from financier and Donald Trump supporter Stephen Schwarzman.

But several panellists said the challenges of machine learning required more than ethical discussion. They said governments should begin taking concrete steps to keep citizens safe.

“Ethics principles are useful but we need to go so far beyond them, yesterday,” said Linnet Taylor, a researcher at Tilburg University in the Netherlands, noting that most governments lack a coherent policy for AI beyond exploiting the economic benefits.

Governments should respond to AI as they responded in previous decades to air travel, crime and new medicines, with policies for flight safety, policing and drug regulation, Taylor said.

“All of these things, we don’t just let them roll,” Taylor said. “We apply thinking about safety, about justice, about legitimacy.”

Carissa Véliz, an Oxford philosophy researcher, also compared software applications to new drugs.

“We’re allowing algorithms to do all sorts of things without checking on them, and there’s no kind of accountability for it,” said Véliz, who described some software applications as “just as powerful as a drug and just as destructive if it doesn’t work well.”

Véliz called for mandatory testing of algorithms that can impact the public adversely.

“No AI used to allocate resources or opportunities should be let loose in the world before being tested through a randomised controlled trial,” Véliz said.

Panellists also addressed the potential need for intervention in the labour market.

Mike Osborne, the Oxford engineer who, along with economist Carl Frey, published a seminal paper estimating that up to 47 percent of existing US jobs are at high risk of automation, said the impact on displaced workers will likely require political solutions, such as education investment and new labour laws.

“We need to introduce the political process to get the outcomes we want,” said Osborne, who credited worker activism, labour reforms and universal secondary education for smoothing Britain and America’s transition from agrarian to industrial economies in the 19th and 20th centuries.

“We have to recognise that technology is fundamentally political,” Osborne said. “Technology is not just something that happens to us.”

But speakers disagreed over how to confront the challenges raised by machine learning, and over whether new laws are needed.

Some argued that government should simply apply existing laws, rather than create new ones.

“My instinct is not to legislate in innovation domains like this,” said Phil Howard, a sociology professor and director of the Oxford Internet Institute, who discussed the use of fraudulent news and online propaganda in elections. “In most democracies we already have guidelines that industry and government should be following.”

Others conceded that emerging AI technologies needed new regulations, but said industry should take the lead in providing them.

Jade Leung, a researcher and doctoral student at Oxford’s Future of Humanity Institute, said governments lacked the expertise to regulate the fast-moving world of tech, and that the private sector should step in.

“We often don’t have the capacity within government institutions to keep pace with how that innovation is going,” said Leung, who studies international relations. “We should be placing more responsibility and more expectation on private AI labs to do that.”

But other speakers described handing over regulatory responsibility to industry as a clear conflict of interest.

Safiya Noble, a professor of information studies at UCLA, said industry-led solutions inevitably fail to fix problems such as racial and gender biases in software because they leave the underlying logic that produced them in place.

“Technical attempts to isolate and remove bad data and bad algorithms will inherently fail to effect justice and address the needs and concerns of sceptics and critics of AI,” Noble said.

Noble, who authored the book Algorithms of Oppression, described existing laws and regulations as “woefully out of touch.”

Responding to calls for greater political action, advocates of machine learning urged attendees not to overreact to concerns in a way that would impede the technology’s future benefits.

“One of the risks is that we throw the baby out with the bathwater,” said Chris Bishop, a computer scientist at Microsoft and Cambridge University.

But others warned that failing to respond to the social impacts of new technologies could result in unintended political consequences.

Mike Osborne cited research suggesting that Donald Trump’s unexpected 2016 election victory was partly due to the impact of technological change in critical US battleground states.

“Those areas of the US that have historically been most affected by automation are also those areas that were most predisposed to vote for now-President Trump,” Osborne said.

The two-day conference at Oxford’s Saïd Business School concluded on 18 September 2019.



Barney McManigal is a journalist and political scientist who teaches at the University of Oxford