Case Study: A crisis management system

Project Title: Crisis Management System

Project Description: Social media can be a source of vital information during crisis situations, but the sheer volume of information available, and questions about its accuracy, present a significant challenge. The aim of this project is to provide an intelligent system that can assist relief organisations, emergency services, journalists, government entities, volunteers and others in using social media data to provide help and assistance in crisis situations. If it works well, the system will be able to identify and classify crisis information in social media, namely Twitter for now, and to assess how credible that information is. The researchers are applying AI and deep learning techniques to Twitter data to develop the system. Ideally, the techniques developed for this crisis management system would also be applicable to other social media resources, and across languages.


Social media networks such as Twitter are an increasingly important resource for researchers. The content posted on social media offers insights into a wide range of areas such as public health, information flow, political developments and, in this case, crisis situations. Content posted on social media networks has the added attraction of accessibility: since the material has already been published, it falls outside the remit of higher education ethics boards and is largely unfettered by issues relating to consent, reuse, third-party sharing, storage and other constraints that apply to data collected in more traditional ways. To date, there is little to no regulation in this area, and many researchers are working in the dark, hoping that their research will not infringe user rights, become vulnerable to emerging legislation or fall foul of the courts.

This particular project aims to develop a platform that would enable organisations reacting to a crisis situation to gain useful information from social media such as Twitter. One of the main challenges is training a machine to sort tweets related to a crisis situation by credibility and relevance.
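As a concrete illustration of what "sorting tweets by relevance" can look like, here is a minimal sketch of a text classifier. It assumes scikit-learn, and the tweets and labels are invented for this example; it is not the researchers' actual model.

```python
# Minimal sketch of a crisis-relevance tweet classifier.
# Assumes scikit-learn; all tweets and labels below are invented examples,
# not data from the project described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Flooding on Main Street, families need evacuation help",
    "Earthquake damage reported near the city hospital",
    "Wildfire spreading fast, road to the village is blocked",
    "Power lines down after the storm, shelters opening now",
    "Loving this new coffee shop downtown",
    "Great match last night, what a goal",
    "Just finished my morning run, feeling good",
    "New phone arrived today, unboxing video soon",
]
labels = ["crisis", "crisis", "crisis", "crisis",
          "other", "other", "other", "other"]

# Bag-of-words features weighted by TF-IDF, fed to a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_tweets, labels)

print(model.predict(["Storm flooding, evacuation help needed on the road"])[0])
```

A real system would need far more data, credibility features and human review, but the pipeline shape (vectorise the text, then classify) is the standard starting point for this kind of task.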

There are a number of ethical issues involved, according to the researchers. Some of these issues are specific to this sort of project, while others relate more generally to the use of Twitter data for research.

  1. When the data models are built and can identify information relating to a crisis, should that crisis information be separated out into its own data stream?

The advantage of separating the crisis related data into its own stream is that it is more cost effective and computationally simpler. Researchers could also apply specific models to the data depending on the type of crisis involved. The risk in separating it out however, is that something important in the main data stream might be missed.

Key question: Should researchers process all of the available data or should they section individual crisis data off from the main data stream?

  2. In man-made crisis situations such as a war, people may be fleeing or hiding. Identifying their locations or sharing them publicly could be harmful to those people.

If specific communities or locations are more or less affected by a crisis, sharing that information could be harmful for those communities.

Likewise, in the case of a flu-pandemic for example, could identifying particular infection hotspots create panic?

Key question: Should any social media based crisis management system also include a panic response strategy, and what should that entail?

Key question: Could sharing information and increasing its impact be harmful to people or populations?

  3. Rules of social media. What’s legal isn’t necessarily ethical.

While Twitter users accept the terms and conditions of Twitter which include the fact that their data can be used for research, does ticking a box really constitute consent? Does the intention of the social media user mean something from an ethical point of view?

In this project’s case, intent is arguably slightly less of an issue. Researchers could argue that if someone is tweeting for help in a crisis situation, the intent of their tweet is clear. That person wants their tweet to be read and for help to be forthcoming. This research will increase the chances of that happening.

However, the crisis management system will be processing data that is not related to the crisis situations as well. Have those users given consent? Again, this is a situation where we have a conflict between what is legal and what is ethical. Does ticking the terms and conditions of Twitter constitute informed consent for your data to be used by researchers?

A framework that protects the user and the researcher would be welcome.

Key question: It may be legal, but is it ethical?

Key question: Do the social media providers have a role to play in this?

  4. Not everybody is on Twitter. Is there a risk that people on Twitter will be attended to in a crisis sooner than people of equal or greater need, by virtue of their social media use?

This is a risk, but the research at the moment focuses on Twitter as an open and available source of data. The hope is that the research will be applicable to other data sources. Twitter is a useful testbed but researchers are aware that the data should never be used in isolation. This awareness will need to be conveyed to those who end up using the system.

Key question: Could researchers’ focus on Twitter lead to some manner of discrimination? Or could it lead to inadvertent discrimination by people who don’t fully understand the data?

  5. The researchers’ preference would be to make the platform accessible to the public, but they concede there are decisions to be made about what information should be public and what should be private. The researchers note that if there is information in critical situations that could cause harm to the people affected, then that information should not be publicly available. They suggest that there could perhaps be different levels of access depending on who is using the platform.

Key question: Should the crisis management platform, if and when it has been developed, be publicly accessible?

  6. Twitter offers the option to geo-tag tweets so that, when a user tweets, the user’s GPS location is broadcast with the tweet. Only one percent of Twitter users actually use geotagging. However, research has shown that tweet locations can be accurately determined up to 67 percent of the time using identifiers other than geotagging.

How do you quote and use information from Twitter in crisis situations without inadvertently identifying the user? Anonymising Twitter data is a challenge faced by all researchers who work with it. Even if all personal data is stripped out of a tweet, there is still a risk that the topic or context being tweeted about could identify a user.
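A first, purely mechanical step in that direction can be sketched in a few lines of Python. The function below is hypothetical, written for this illustration: it redacts the most obvious direct identifiers, and its docstring notes what it cannot do.

```python
import re

def redact_tweet(text):
    """Redact obvious direct identifiers before quoting a tweet.

    Illustrative only: this removes @mentions and links, but it cannot
    remove contextual clues (place names, topics, events) that may
    still identify the author.
    """
    text = re.sub(r"@\w+", "[user]", text)          # user mentions
    text = re.sub(r"https?://\S+", "[link]", text)  # URLs, incl. photo links
    return text

print(redact_tweet("@anna we are stuck at the old mill, photos at https://example.com/pic"))
```

Even with such redaction applied, a tweet that mentions "the old mill" during a localised crisis may still narrow the author down to a handful of people, which is exactly the residual risk described above.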

Key question: How do you protect privacy while also achieving a system that will be useful in a crisis situation?

  7. Training machines to assess the credibility of tweets is an important part of this research project. Credibility is not black and white: there is a spectrum of credibility based on many factors, and the machine will decide what deserves attention. But how can we be certain that the researchers training the machine are not inadvertently passing their own prejudices or biases on to it? Machine learning is only as good as the dataset that is provided, and researchers need to be aware that their own conscious and unconscious biases can affect the machines in these ways.

Key question: How do researchers ensure that the credibility measures they use to train machines are unbiased?

Key question: If culture is an important factor in a crisis situation, perhaps a machine needs to be trained with a knowledge of and a mind to specific cultural traits. By its very nature, that dataset needs to be biased. Is incorporating bias necessarily an important part of this research?
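One modest, concrete safeguard, sketched here with invented labels, is to measure how much human annotators disagree before their credibility judgements are used as training data; low agreement is a warning sign that subjective bias is leaking into the labels.

```python
# Hypothetical pre-training check: raw agreement between two annotators
# who labelled the same five tweets for credibility. All labels invented.
annotator_a = ["credible", "credible", "not_credible", "credible", "not_credible"]
annotator_b = ["credible", "not_credible", "not_credible", "credible", "not_credible"]

matches = sum(a == b for a, b in zip(annotator_a, annotator_b))
agreement = matches / len(annotator_a)
print(f"Raw agreement: {agreement:.0%}")  # 4 of 5 labels match
```

A production pipeline would use a chance-corrected statistic such as Cohen's kappa, but even raw agreement can flag labels that are too subjective, or too culturally specific, to train on without further review.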

