Policymakers in Canada are beginning to act on concerns about the growing distance between AI development and the public interest. A number of initiatives are in motion to identify the problems, but we are still a long way from solving them.
The gap became starkly evident this week when Elon Musk announced that he was stepping down from the board of the AI ethics body he co-founded in 2015, citing a conflict of interest now that his electric car company is ‘more focused on AI’.
When Mr Musk set up OpenAI, he called AI humanity’s biggest existential threat.
The first, tentative steps by Canada’s policymakers include a move by the Treasury Board Secretariat (TBS) to launch a public consultation on the responsible use of AI in government. In partnership with the AI community, the Treasury Board is piloting an open online consultation with an eye to generating guidelines for AI use in federal government services.
In an article by Fenwick McKelvey, Assistant Professor in Information and Communication Technology Policy at Concordia University, and Abhishek Gupta, AI Ethics Researcher at McGill University, the initiatives are described as an example of ‘how the federal government can lead by adopting strong guidelines for its own use of AI.’
‘These concrete, narrow applications are a sign the government understands the risks of automation in its own work. We hope the final report will result in strong guidelines that guarantee transparency, accountability and fairness,’ say the authors.
Global Affairs Canada is also moving on this, leading a multi-university collaboration on artificial intelligence and human rights. Graduate students in Fenwick McKelvey’s Media Policy seminar at Concordia University are contributing to a broad scan of AI’s policy implications for human rights. McKelvey aims ‘to link debates around AI to the expertise in communication and cultural studies that has long been questioning the cultural, social and political dimensions of media and technology.’
Read the full article in The Conversation.