There is no denying that ethics is generally a complex matter. Data Ethics, as the set of principles and methods through which the ethical implications of applications of data processing and analysis (Data Science) are considered and resolved, is therefore doubly complex: It is about understanding complex, hard-to-predict social phenomena emerging from complicated, poorly understood technologies.
The point here is that ethical implications are potentially infinite. As discussed in our previous post, efforts are made to address this through methodologies that cover the whole process of AI and Data Science development. However, this is about more than trying to figure out what issues might emerge from each step and what the solutions to them are: Solutions have to be found during the design phase of the process for problems that might emerge much later, once the technology/application has been developed, operationalised, deployed and (more or less) widely adopted.
The methodology put forward in the previous post includes, as an essential principle, that the investigation of ethical implications has to be anticipatory in nature. A recent article on RTÉ Brainstorm gives a more concrete, and possibly somewhat surprising, explanation of what that can mean: Write science fiction. This idea is not new, as it is already used in methods related to design fiction. It consists in projecting ourselves into a not-too-distant future where whatever we are trying to build is actively being used, and answering the question: “In that world, what really, really bad things could happen?”
And it works… with the help of popular science fiction. The Re-Coding Black Mirror workshop is an exemplary application of this method: Held alongside technology conferences such as The Web Conference, it asks technology academics to think and write about the possible negative implications of their own technology through storytelling.