WDR 2005 - Chapter 8: Disaster data: building a foundation for disaster risk reduction

Disaster data are vital for identifying trends in the impacts of disasters and for tracking relationships between development and disaster risk. This chapter offers a brief review of four international disaster databases: EM-DAT, NatCat, Sigma and DesInventar. It identifies challenges for the collection, validation and presentation of disaster data, and considers options for improving such data.

Disaster databases are becoming increasingly useful, as their data are being fed into analytical tools to help prioritize international action to reduce disaster risk. Galvanized in part by the December 2004 Indian Ocean tsunami, databases are also being used to develop early warning tools.

The Emergency Disasters Data Base (EM-DAT), managed by the Brussels-based Centre for Research on the Epidemiology of Disasters (CRED), is the most complete publicly accessible international database. It produces estimates of human and economic losses and includes relatively small disasters (10 deaths or more) where reliable data exist.

The contribution of EM-DAT to policy planning is constrained primarily by the lack of systematic, standardized local and national disaster data collection. This is a particular challenge for EM-DAT, which draws from international sources built on local and national data. EM-DAT catalogues events by country, making it difficult to identify sub-national patterns of disaster loss.

NatCat and Sigma are highly sophisticated databases managed by Munich Re and Swiss Re respectively, two of the world’s largest reinsurance companies. NatCat has created its own methodology for calculating economic losses from major disasters (excluding drought), once insured losses are known. Its estimates have been verified against loss assessments from the field.

Sigma presents annual information on insured property losses, plus economic and human losses, from large natural and technological disasters. Sigma categorizes entries by disaster event, while NatCat and EM-DAT categorize entries by country. Both companies provide limited information on countries with low insurance density. This reduces their data coverage for Africa, Asia and Latin America, particularly in rural areas.

DesInventar, managed by a coalition of non-governmental actors, covers 16 countries in Latin America and the Caribbean. Sub-national DesInventar databases exist for individual states in the US, Brazil, Colombia, South Africa and India.

DesInventar specializes in local records of disaster loss and presents national disasters through local loss data. DesInventar gathers data on human and economic losses, but tends to record higher numbers of people affected than other databases. The media are a prominent source for DesInventar, yet the reliability of media reporting of losses is debatable. One aim of DesInventar is to collect information on secondary impacts and infrastructure losses, but this information is unevenly reported.

Disaster data have improved greatly in the last 20 years, but there remain a number of common challenges:

  • Defining hazards and distinguishing events:

Drought is the deadliest natural hazard, but also the most difficult to study. Problems arise from the lack of a common definition for drawing spatial and temporal limits around drought events. This creates major challenges when comparing drought impacts. Environmental and human factors, such as soil loss, armed conflict or HIV/AIDS, make it difficult to judge whether drought is a cause, effect or context for reported losses.

When disasters affect more than one country, as Hurricane Mitch did, double counting can occur if losses are recorded both for individual countries and for the event as a whole.

A harder challenge to overcome is reporting on disaster ‘cascades’, when an initial hazard (e.g. earthquake, flood) triggers a secondary event (e.g. landslides). With no common methodology for the local reporting of disaster losses, impacts might be associated with either event. One recent advance is an agreement to use a common, unique global identifier (GLIDE) number for each event.
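The GLIDE scheme combines a hazard code, the event year, a serial number and a country code into a single identifier, so that records held in different databases can be linked to the same event. A minimal sketch of parsing such an identifier is shown below; the specific value used in the example is hypothetical, not an official GLIDE number.

```python
import re

# Pattern for the GLIDE identifier format: a two-letter hazard code,
# the year of the event, a six-digit serial number and an ISO 3166-1
# alpha-3 country code, joined by hyphens.
GLIDE_PATTERN = re.compile(
    r"^(?P<hazard>[A-Z]{2})-"    # hazard code, e.g. EQ (earthquake), FL (flood)
    r"(?P<year>\d{4})-"          # year of the event
    r"(?P<serial>\d{6})-"        # six-digit sequential number
    r"(?P<country>[A-Z]{3})$"    # ISO alpha-3 country code
)

def parse_glide(number: str) -> dict:
    """Split a GLIDE number into its components, or raise ValueError."""
    match = GLIDE_PATTERN.match(number)
    if match is None:
        raise ValueError(f"not a valid GLIDE number: {number!r}")
    return match.groupdict()

# A hypothetical identifier for a 2003 earthquake:
print(parse_glide("EQ-2003-000123-IRN"))
```

Because every database attaches the same identifier to its record of an event, losses reported separately for a triggering hazard and its secondary effects can at least be traced back to a common origin.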

  • Standardized and systematic data collection:

The absence of standard guidelines for local disaster data is often compounded by an ad hoc system of collection by local media, government or civil society groups. Local data are collated and fed to international databases by intermediaries. Yet intermediaries lack standard definitions to organize their data and may be tempted to exaggerate or suppress data for professional, political or economic advantage.

Each key indicator of disaster impact has limitations. Mortality is the ‘cleanest’ indicator of disaster loss. But the distinction between deaths and people missing creates uncertainty, with some countries requiring that a person be missing for 12 months before being declared dead. Wide variation in mortality reports is common. Figures for people affected by disaster are even more open to dispute, since there is no universal definition of what is meant by ‘affected’.

However, data are most incomplete for economic losses. Over the past three decades, macro-economic losses were reported for less than 30 per cent of all natural disasters – with least data for developing countries. There is no standardized methodology for reporting macro-economic losses. Loss estimates from Iran’s Bam earthquake in 2003 ranged from US$ 32.7 million to US$ 1 billion. Livelihood losses, especially in the informal sector, are also poorly understood and rarely recorded.

CRED has developed a ranking system to rationalize its choice of data sources. This improves transparency but cannot address the lack of standardized and systematic data collection. Resources do not exist for international database managers to coordinate local collection, although Munich Re’s 60 national offices lead impact assessments following major disasters.

  • Public accessibility of data:

The NatCat and Sigma datasets are not fully accessible to the public. Even the presentation of public database websites could be more user-friendly. The growing number of organizations interested in disaster data suggests that access provision may need to be rethought.

In order to improve the quality of international disaster databases, systematic collation and standardized collection of local disaster data is urgently needed. This should include agreed protocols for: the start- and end-dates of disasters; geo-referencing disasters; distinguishing between cascading hazards; measuring human impacts; measuring economic impacts (including secondary losses); measuring livelihood losses (particularly in the informal sector); measuring ecological impacts; and ethical disaster data collection and use. Improved local data collection needs to be supported by greater standardization and transparency amongst intermediaries as well.

Baseline data on the social, economic and ecological status of areas at risk would enhance the accuracy of disaster loss measurement. Developing baselines goes beyond the capacity of disaster data managers, but it is an agenda in which they could usefully participate.

The recommendations presented here respond, in part, to the UN’s Hyogo Framework for Action 2005-2015, which endorsed the need for more work on disaster data and analysis to feed into disaster risk reduction. In May 2005, two important initiatives were under way. First, OCHA’s Global Disaster Alert and Coordination System, which aims to provide initial data within 24 hours of an event. Second, a Global Risk Identification Programme, which aims to improve the comprehensiveness and accuracy of measuring disaster impacts by building on the work of existing disaster databases.

Recommendations for future action

Recommendations for governments, data collection organizations (including the International Red Cross and Red Crescent Movement) and international data managers:

  1. Build local and national human resources for systematic disaster impact data collection.
  2. Standardize methodologies for local disaster data collection, with a focus on:
        a. measuring total economic losses.
        b. incorporating ecological losses.
  3. Standardize definitions for drought and complex humanitarian emergencies.
  4. Systematize disaster data information flows between local collectors, intermediary collators and international database managers.
  5. Support collaboration between international database managers to minimize overlap and encourage the sharing and verification of data.
  6. Improve public accessibility to basic and summary impact data.