Google and Ethical AI

Why has Google fired members of its ethical AI team? How pervasive are problematic algorithms in society? And who is holding developers of new technologies to account? We speak to leading AI researchers about current challenges facing the industry.

Featured:

Tiberio Caetano, Chief Scientist, Gradient Institute

Professor Fang Chen, Executive Director of Data Science, University of Technology Sydney

Producer/presenter: Julia Carr-Catzel

Article written by: Zoe Stojanovic-Hill

Over the past year, Google has fired two researchers on its ethical artificial intelligence team, Timnit Gebru and Margaret Mitchell. The circumstances surrounding their departures are disputed. Google says it let Mitchell go because she violated the terms of her employment contract; Mitchell says she was sacked for speaking out against the discriminatory implications of one of the company's AI technologies. The firings have sparked controversy in the AI world. A major AI ethics research conference, the ACM Conference on Fairness, Accountability, and Transparency, suspended its sponsorship arrangement with Google a week after Mitchell was fired, and Google employees rallied online behind the two researchers, arguing that the company is willing to crack down on anyone who gets in the way of its pursuit of profit.

The case raises questions about the ethical responsibilities of tech giants dealing in AI – some of the most powerful and profitable companies in the world. What are the ethical issues around AI, and how can we make AI more ethical? In this episode of Think: Digital Futures, 2SER producer and presenter Julia Carr-Catzel explores these questions with Dr Tiberio Caetano, Chief Scientist at the Gradient Institute, and Distinguished Professor Fang Chen, Executive Director of Data Science at UTS.

Reflecting on the Google case, Dr Caetano says that ethics researchers play an important role in holding big tech accountable, though, as this case shows, there is a limit to what researchers can do when they risk being fired for thinking too critically. Complicating matters further, ethics research conferences like FAccT are often funded by industry, giving big tech more power to set the terms of the debate. The problem, Dr Caetano says, is bigger than regulating big tech alone, because almost every large institution now uses some form of AI – from retailers to banks to media organisations.

A major ethical issue in AI is algorithmic bias – the tendency for algorithms to copy and amplify existing biases in society. For example, machine learning algorithms help banks decide who to lend to, and these algorithms are typically trained on past lending decisions, so customers who were denied loans in the past are more likely to be denied in the future.

“The algorithm will essentially confuse what has happened in the past with what should happen in the future,” Dr Caetano explains.

“Because data reflects the past, and the past was unjust and unfair in many ways, if you tell an algorithm, ‘Here’s the data, make a prediction about who should get this loan for this application round,’ the algorithm will likely believe that the people who should get the loan are people who look like people who in the past got loans.”

This could further entrench racism, sexism, classism and other forms of systemic and structural oppression, harming people of colour, single women, people with disabilities, young people, and other social and cultural minorities. If customers from minority groups were unfairly denied loans in the past, the AI will assume those customers are less creditworthy than they actually are, leading to discriminatory lending.
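To make the mechanism concrete, here is a minimal, hypothetical sketch: a simple model is trained on past loan decisions (the features, groups and outcomes below are invented purely for illustration) and ends up scoring two otherwise identical applicants differently, depending only on group membership, because the historical data encoded that difference.

```python
# Hypothetical sketch: a model trained on past loan decisions learns to
# reproduce them, including any historical bias in those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [income, group]; label: 1 = loan approved in the past.
# Suppose group-1 applicants were historically denied more often
# at the same income level.
X_past = np.array([
    [80, 0], [75, 0], [60, 0], [55, 0],
    [80, 1], [75, 1], [60, 1], [55, 1],
])
y_past = np.array([1, 1, 1, 0,   # group 0: mostly approved
                   1, 0, 0, 0])  # group 1: mostly denied

model = LogisticRegression().fit(X_past, y_past)

# Two new applicants with identical income but different group membership:
new_applicants = np.array([[70, 0], [70, 1]])
print(model.predict_proba(new_applicants)[:, 1])
# The group-1 applicant receives a lower predicted approval probability,
# simply because the training data reflects past decisions.
```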

A major challenge, then, is teaching AI how to be ethical. This requires researchers to translate ethical concepts into mathematical equations – working out how to express ideas like equity in computer code.
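One common way such concepts are formalised is as a fairness metric. The snippet below computes the "demographic parity difference" – the gap in positive-outcome rates between two groups – as an illustrative example only; it is not necessarily the specific formulation the Gradient Institute or UTS researchers use.

```python
# Illustrative fairness metric: demographic parity asks that positive
# outcomes (e.g. loan approvals, shortlistings) occur at roughly equal
# rates across groups.
import numpy as np

def demographic_parity_difference(predictions, group):
    """Absolute gap in positive-outcome rates between group 0 and group 1."""
    predictions = np.asarray(predictions)
    group = np.asarray(group)
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Example: decisions for eight applicants, four in each group
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, grp))  # 0.5 -> a large disparity
```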

Distinguished Professor Chen’s team of data scientists at UTS is exploring how to do this in practice. The team is working with the recruitment company Rejig to make its algorithm more inclusive, focusing on helping employers hire a more even balance of men and women.

“In recruitment one of the typical ones is gender balance, particularly in STEM kind of an area,” Distinguished Professor Chen explains. “If in your training data you have all men, no women, representing in the selections basically you already have a bias in the data sets and it’s very hard to expect that your selection algorithm is going to pick the right balance of candidates.”

The UTS team rebalanced the training data of Rejig’s recruitment algorithm, selecting gender-balanced samples, and companies have since used the tool to increase the proportion of women on their staff.
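As a rough illustration of this kind of rebalancing, the sketch below downsamples the over-represented gender group so that both groups contribute equally to training. The data and function are hypothetical, and the actual UTS/Rejig approach may well differ.

```python
# Hypothetical sketch: rebalance training data by gender before fitting a
# selection model, by downsampling the over-represented group.
import pandas as pd

def balance_by_gender(df, gender_col="gender", random_state=0):
    """Downsample every gender group to the size of the smallest group."""
    smallest = df[gender_col].value_counts().min()
    return df.groupby(gender_col).sample(n=smallest, random_state=random_state)

# Invented candidate data: six men, two women
candidates = pd.DataFrame({
    "gender": ["M"] * 6 + ["F"] * 2,
    "score":  [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.85, 0.65],
})
balanced = balance_by_gender(candidates)
print(balanced["gender"].value_counts())  # two of each group
```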

Academics, governments and businesses have developed ethical AI frameworks in response to the rapid development and largely unchecked use of AI across the economy – there are now more than two hundred such frameworks worldwide. While AI needs to be regulated across all sectors, big tech still holds the most power and requires the most regulation.

Dr Caetano highlights how this is a matter of democracy.

“Companies like Twitter, Facebook and Google, it’s really an ecosystem of companies that have a lot of power. But they also have a power of which information they present in front of you. And that particular power is capable of shifting the direction of democracy.”

“The people who are actually essentially the target of those systems, the clients of a bank or of an insurance company or telecommunications provider, they need to be part of this conversation.”

“We need to help the machines to help us.”

 
