How Can We Make Disaster Management Evaluations More Useful? An Empirical Study of Dutch Exercise Evaluations

Author(s): Beerens, Ralf Josef Johanna
Tehler, Henrik
Pelzer, Ben
Document type: Article
Publication date: 2020
Series/Periodical: International Journal of Disaster Risk Science; volume 11, issue 5, pages 578-591; ISSN 2095-0055, 2192-6395
Publisher: Springer Science and Business Media LLC
Keywords: Management / Monitoring / Policy and Law / Safety Research / Geography / Planning and Development / Global and Planetary Change
Language: English
Permalink: https://search.fid-benelux.de/Record/base-27074001
Data source: BASE; original catalog
Link(s): http://dx.doi.org/10.1007/s13753-020-00286-7

Abstract: Evaluations of simulated disasters (for example, exercises) and of real responses are important activities. However, little attention has been paid to how the reports documenting such events should be written. A key issue is how to make them as useful as possible to professionals working in disaster risk management. Here, we focus on three aspects of a written evaluation: how the object of the evaluation is described, how the analysis is described, and how the conclusions are described. In this empirical experiment, based on real evaluation documents, 84 Dutch mayors and crisis management professionals rated the perceived usefulness of the three aspects noted above. The results showed that how evaluations are written does matter. Specifically, the usefulness of an evaluation intended for learning purposes improves when its analysis and conclusions are clearer. In contrast, the usefulness of an evaluation intended for accountability purposes is improved only by the clarity of its conclusions. These findings have implications for the way disaster management evaluations should be documented.