
Tail calibration of probabilistic forecasts


Event details

Abstract

Reliable forecasts are invaluable to practitioners in various domains. Probabilistic forecasts comprehensively describe the uncertainty in the unknown outcome, making them essential for decision making and risk management. While several methods have been introduced to evaluate probabilistic forecasts, existing techniques are ill-suited to the evaluation of tail properties of such forecasts. However, these tail properties are often of particular interest to forecast users due to the severe impacts caused by extreme outcomes. In this work, we reinforce previous results related to the deficiencies of proper scoring rules when evaluating forecast tails, and instead introduce several notions of tail calibration for probabilistic forecasts, allowing forecasters to assess the reliability of their predictions for extreme events. We study the relationships between these different notions, and propose diagnostic tools to assess tail calibration in practice. The benefit provided by these diagnostic tools is demonstrated in an application to European weather forecasts.
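As a rough illustration of the kind of diagnostic the abstract describes, the sketch below checks a conditional probability integral transform (PIT) restricted to outcomes that exceed a high threshold: for a tail-calibrated forecast, these conditional PIT values should be approximately uniform. This is a hedged toy example, not the authors' method; the threshold, the standard-normal forecast, and the ten-bin uniformity check are all assumptions made for illustration.

```python
import math
import numpy as np

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy setup (assumption): the forecast distribution is standard normal,
# and outcomes are drawn from that same distribution, so the forecast
# is calibrated in the tail by construction.
rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)

t = 1.5                 # illustrative high threshold on the outcome scale
exceed = y[y > t]

# Conditional PIT given exceedance of t:
#   F_t(y) = (F(y) - F(t)) / (1 - F(t)),
# which should be uniform on [0, 1] when the forecast tail is reliable.
Ft = np.array([(norm_cdf(v) - norm_cdf(t)) / (1.0 - norm_cdf(t))
               for v in exceed])

# Crude uniformity check: relative frequencies over ten bins
# should all be close to 0.1 for a calibrated tail.
counts, _ = np.histogram(Ft, bins=10, range=(0.0, 1.0))
freqs = counts / len(Ft)
print(np.round(freqs, 3))
```

A miscalibrated tail (for example, a forecast that is too light-tailed relative to the outcomes) would show up here as a histogram that piles mass near 1 instead of being flat.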



Location:

Laver Building LT3