How can we ensure robots share our values? DeepMind investigates

DeepMind launches new research team to investigate AI ethics

With each passing week, more investment is directed towards ethical AI development. For years there has been a worrying lag between the pace of technology development and serious engagement with the moral and human rights issues these technologies raise. Google’s DeepMind is the latest to open its purse. The UK-based company, which Google acquired in 2014, has just announced the formation of a new research group dedicated to some of the most difficult issues in artificial intelligence: how to manage bias in AI systems, how to handle the coming economic impact of automation, and how to ensure that any intelligent systems we develop share our ethical and moral values.

The group has eight full-time staff at the moment, but DeepMind wants to grow it to around 25 within a year.

The new research unit, called DeepMind Ethics & Society, is co-led by Verity Harding and Sean Legassick. In the blog post announcing the launch, they write: “If AI technologies are to serve society, they must be shaped by society’s priorities and concerns.”

Setting the moral compass of virtual avatars is discussed in our case studies section, as part of the Reverie project.

Read the full report on The Verge here.
