Case Study: Automated video caption development – should researchers anonymise human images before publication?

Project Title: TRECVid 2017

Project Description:

TRECVid is a long-running international benchmarking activity for content-based operations on video, running annually since 2001; each year, research groups around the world take on a shared TRECVid task. The task covered by this case study is video-to-text caption generation: the team extracted keyframes from short social media videos (Vines) and generated a natural language caption for each keyframe, with the goal of building an end-to-end system in which videos can be ‘read’ and captioned automatically. Over the course of 2017 the research team generated captions for more than 1,800 Vines using no external metadata, only an analysis of the video content itself.
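The keyframe-extraction step described above can be illustrated with a minimal sketch. The following is not the TRECVid team's method: it assumes frames have already been decoded into flat lists of grayscale pixel values, and it uses a simple, illustrative threshold on mean pixel difference to decide when a new keyframe begins.

```python
def frame_difference(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Keep the first frame, then keep any frame that differs from the
    most recently selected keyframe by more than the threshold.
    Returns the indices of the selected keyframes."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[keyframes[-1]], frames[i]) > threshold:
            keyframes.append(i)
    return keyframes
```

In a real pipeline each selected keyframe would then be passed to the captioning model; the threshold value here is purely illustrative.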

Research that aggregates the content of social media posts is a highly active area. Large datasets of Twitter comments or Facebook status updates are routinely processed and analysed for a range of audiences, from market research companies to security agencies. The text is analysed while the authors are, usually, anonymised.
However, social media users increasingly communicate using images and video as well as, or in place of, text. Stakeholders like those mentioned above – marketeers, security agencies – are equally interested in the aggregated meaning of video content, but standard text analysis tools cannot read the content of a video.
This TRECVid project, like much other image analysis research, uses large video datasets to build systems that can ‘read’ and caption videos for analysis, learning to recognise image content from repeated exposure to large numbers of examples – in this case, Vines.
1. Anonymity of image subjects
The Vines used in the study are sourced from social media platforms and are, as such, publicly accessible. The researcher in this case raises the concern that, while the subjects of the videos intended to make them public on social media, it cannot be assumed that they intended them to be published as part of an academic or industrial research project. Text contributions are simple to decouple from their source; images of people are far more difficult to anonymise.
Key question: If images from social media are used, should they be processed at the point of publication in order to preserve the anonymity of their subjects?
2. Image Anonymisation: Consistency of Practice, Universal Guidelines and Software
The researcher in this case elected to anonymise the images he uses at the point of publication, on the grounds that, as a social media user himself, he would not want his own image to be published elsewhere. He also flagged the frequent appearance of minors in the imagery and the need to protect their identities. This, he points out, is a lab-level decision and not based on any universal guidelines. In his efforts to anonymise images satisfactorily, the researcher consulted other researchers in the field to establish best practice and identify appropriate anonymisation software. He discovered inconsistent approaches and no common mode of image processing.
Key question: Should there be a universal set of rules or guidelines around the anonymisation of social media images used in research?
Key question: Should standard image anonymisation tools be made available to researchers to ensure consistency across publications?
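The anonymisation approaches the researcher surveyed vary, but most reduce to obscuring the face region of an image. A minimal stdlib-only sketch of one common technique, pixelation, is shown below. It assumes the face region has already been located (for example by a face detector) and that the image is a 2D list of grayscale values; real pipelines would use an image-processing library rather than this illustrative version.

```python
def pixelate_region(image, top, left, height, width, block=8):
    """Replace each block x block tile inside the given region with the
    tile's mean value, destroying fine detail (such as facial features).
    Modifies the image in place and returns it."""
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            mean = sum(image[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    image[y][x] = mean
    return image
```

The block size controls how aggressive the anonymisation is: larger blocks discard more detail. Whether pixelation is sufficient to defeat re-identification is itself a point of debate, which is part of why the lack of common guidelines matters.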
3. User Agreements
Social media platform user agreements (‘Terms and Conditions’) generally contain an API (Application Programming Interface) clause that releases content posted by account holders for secondary uses such as marketing, advertising and research. Many account holders do not read user agreements in detail, and social media researchers may not be familiar with the user agreements accepted by the subjects of their research.
Key question: Should social media platform providers work harder to inform the public about secondary use of data?
Key question: Should social media researchers be required to familiarise themselves with the user agreements of the platforms they study, given that these agreements constitute de facto research subject consent forms?
