AI ethics a ‘baseline safety issue’ – new report essential reading for data researchers

Kate Crawford and Meredith Whittaker of New York University have this week published a thirty-page report on ethics in the AI industry that raises the bar on data ethics discourse and makes solid recommendations for the sector. These include the eradication of black-box algorithms in high-stakes domains such as criminal justice, and rigorous pre-release trials by AI companies to ensure that technology will not amplify biases and errors.

Recommendation 9 resonates particularly with the Data Ethics Case Study Project: it is our hope that we can help to connect researchers and ethicists looking to collaborate by providing plain-English ethics case studies sourced directly from data and AI developers.

The recommendation reads: ‘The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms.’

In an interview with Wired today, Kate Crawford points to the skewed development of some technology fields and the implications of that imbalance for society.

‘Who gets a seat at the table in the design of these systems? At the moment it’s driven by engineering and computer science experts who are designing systems that touch on everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimise a neural network, we shouldn’t expect an engineer to understand the workings of the criminal justice system.’

Crawford says the current practice of consulting ethics experts only after a system has been developed has to stop.

‘We have a strong recommendation that the AI industry should be hiring experts from beyond computer science and engineering and ensuring that those people have decision making power. What’s not going to be sufficient is bringing in consultants at the end when you’ve already designed a system and you’re already about to deploy it. If you’re not thinking about the way systemic bias can be propagated through the criminal justice system or predictive policing, then it’s very likely that if you’re designing a system based on historical data, you’re going to be perpetuating those biases. Addressing that is much more than a technical fix.’
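To make the mechanism Crawford describes concrete, here is a minimal, hypothetical sketch in Python (the scenario, group names, and numbers are invented for illustration, not drawn from the report). Two groups reoffend at the same true rate, but historical records over-capture one group; a plain logistic regression trained on those records then scores that group as higher risk, reproducing the recording bias rather than any real difference in behaviour.

```python
# Hypothetical illustration of bias propagating from historical data.
# All rates below are invented for the sketch, not taken from any real dataset.

import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute: two equally sized groups, 0 and 1.
group = rng.integers(0, 2, size=n)

# Ground truth: both groups actually reoffend at the same 20% rate.
true_reoffend = rng.random(n) < 0.20

# Historical bias: offences by group 1 are recorded 90% of the time,
# offences by group 0 only 50% of the time (e.g. uneven policing).
record_prob = np.where(group == 1, 0.9, 0.5)
recorded = true_reoffend & (rng.random(n) < record_prob)

# Features: intercept, the protected attribute, and an uninformative noise column.
X = np.column_stack([np.ones(n), group, rng.normal(size=n)])
y = recorded.astype(float)

# Plain logistic regression fitted by gradient descent on the biased labels.
w = np.zeros(X.shape[1])
for _ in range(2_000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

risk = 1 / (1 + np.exp(-X @ w))
print(f"true reoffence rate, group 0: {true_reoffend[group == 0].mean():.2f}")
print(f"true reoffence rate, group 1: {true_reoffend[group == 1].mean():.2f}")
print(f"mean predicted risk, group 0: {risk[group == 0].mean():.2f}")
print(f"mean predicted risk, group 1: {risk[group == 1].mean():.2f}")
```

Run it and both groups print the same true rate of roughly 0.20, yet the learned risk scores split along group lines, tracking the biased labels. No technical fix inside the model can recover what the historical records never captured, which is precisely why the report argues for domain expertise before deployment, not after.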
