• Contingency table counts are calculated for each model. The table definitions are: Hits (a): both forecast and observation are greater than or equal to the threshold; False alarms (b): forecast is at or above and observation is below the threshold; Misses (c): observation is at or above and forecast is below the threshold; No forecasts (d): both forecast and observation are below the threshold (see the contingency-count sketch after this list).

  • All data are regridded, if needed, to the G211 grid using bilinear interpolation. The model APCP 6-hour accumulation forecast files are combined to create a 24-hour forecast. These are compared to the Climatology-Calibrated Precipitation Analysis (CCPA) 24-hour accumulation, valid 12Z to 12Z (see the preprocessing sketch after this list).

  • Verification statistics are computed from the contingency table counts. Bias Score: BS = (a + b)/(a + c), which measures whether precipitation frequency over an area is over-forecast (BS > 1) or under-forecast (BS < 1) for a selected threshold. Equitable Threat Score: ETS = (a - ar)/(a + b + c - ar), where ar = (a + b)*(a + c)/(a + b + c + d) is the expected number of correct forecasts above the threshold in a random forecast whose occurrence/non-occurrence is independent of the observed occurrence/non-occurrence. ETS = 1 indicates a perfect forecast; ETS ≤ 0 indicates a useless forecast. Statistical significance is calculated with a Monte Carlo significance test using 10,000 resamples (see the score sketch after this list). For more information see here.

  • The verification regions used are: G211, West, and East.
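
The contingency table counts in the first bullet reduce to a few array comparisons. The Python sketch below is illustrative only (not the operational code) and assumes the forecast and observation fields are NumPy arrays on the same grid, with thresh in the same units as the fields.

    import numpy as np

    def contingency_counts(forecast, observation, thresh):
        """Return (a, b, c, d): hits, false alarms, misses, and 'no forecast' counts."""
        fcst_yes = forecast >= thresh       # forecast at or above threshold
        obs_yes = observation >= thresh     # observation at or above threshold
        a = np.sum(fcst_yes & obs_yes)      # hits
        b = np.sum(fcst_yes & ~obs_yes)     # false alarms
        c = np.sum(~fcst_yes & obs_yes)     # misses
        d = np.sum(~fcst_yes & ~obs_yes)    # both below threshold
        return int(a), int(b), int(c), int(d)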
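
The preprocessing in the second bullet, summing four 6-hour APCP accumulations into a 24-hour total and bilinearly regridding to a common grid, might look roughly like the sketch below. This is a simplified illustration, not the operational implementation: it assumes plain 2-D NumPy arrays with 1-D source latitude/longitude coordinates and uses SciPy's RegularGridInterpolator (linear interpolation on a 2-D grid is bilinear); a generic target latitude/longitude grid stands in for G211.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def accumulate_24h(apcp_6h_fields):
        """Sum four consecutive 6-hour APCP accumulations into a 24-hour total (12Z to 12Z)."""
        assert len(apcp_6h_fields) == 4
        return np.sum(apcp_6h_fields, axis=0)

    def bilinear_regrid(field, src_lat, src_lon, dst_lat2d, dst_lon2d):
        """Bilinearly interpolate a 2-D field onto a target 2-D latitude/longitude grid."""
        interp = RegularGridInterpolator((src_lat, src_lon), field,
                                         method="linear", bounds_error=False)
        points = np.column_stack([dst_lat2d.ravel(), dst_lon2d.ravel()])
        return interp(points).reshape(dst_lat2d.shape)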
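
The scores and significance test in the third bullet can be sketched as follows. The two score functions follow the formulas given above; the Monte Carlo test shown is one common variant (random sign-flipping of paired daily score differences between two models, with 10,000 resamples) and may differ in detail from the operational procedure.

    import numpy as np

    def bias_score(a, b, c):
        """BS = (a + b) / (a + c); BS > 1 indicates over-forecasting, BS < 1 under-forecasting."""
        return (a + b) / (a + c)

    def equitable_threat_score(a, b, c, d):
        """ETS = (a - ar) / (a + b + c - ar), where ar is the expected number of chance hits."""
        ar = (a + b) * (a + c) / (a + b + c + d)
        return (a - ar) / (a + b + c - ar)

    def monte_carlo_pvalue(scores_model1, scores_model2, n_resamples=10000, seed=0):
        """Two-sided p-value for the mean paired score difference via random sign flips."""
        rng = np.random.default_rng(seed)
        diffs = np.asarray(scores_model1) - np.asarray(scores_model2)
        observed = diffs.mean()
        signs = rng.choice([-1.0, 1.0], size=(n_resamples, diffs.size))
        resampled = (signs * diffs).mean(axis=1)
        return float(np.mean(np.abs(resampled) >= abs(observed)))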