Drift guidance metrics #2314
-
@rgrumbine thanks for providing links to those resources. As I mentioned in the METplus NOAA telecon this morning, yes, I clearly see the parallels with tropical cyclone verification. TC tracks are stored in ASCII files following the ATCF file format, and the MET TC tools, mainly tc_pairs and tc_stat, match up and compare the forecast and observed TC tracks. I imagine many of these drift error metrics could be applied to those tracks as well. To address this in MET, there are several details we'd need to iron out. I'll list some questions that come to mind but don't actually expect answers to them. I recommend that we identify a METplus scientist to collaborate more closely with you on these details.
One example that already exists in MET is the output from the MODE Time Domain (MTD) tool. Its output includes the information needed to compare how forecast and observed object centroids move through time. Perhaps we could use that specific data to brainstorm how we'd analyze this type of data in a more generic way?
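As a concrete starting point for that brainstorming, here is a minimal sketch (not MET or MTD code; the dictionary layout of valid time -> lat/lon is only an assumption for illustration) of pairing forecast and observed feature locations by valid time and computing a great-circle location error at each common time:

```python
# Minimal sketch (not MET/MTD code): pair forecast and observed feature
# locations by valid time and compute a per-time location error, as a
# starting point for a generic track/drift comparison.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

def pair_tracks(fcst, obs):
    """fcst, obs: dicts mapping valid time -> (lat, lon) for one feature.
    Returns a list of (valid_time, error_km) for the common valid times."""
    common = sorted(set(fcst) & set(obs))
    return [(t, haversine_km(*fcst[t], *obs[t])) for t in common]
```

A generic tool would also need to handle mismatched valid times, missing observations, and multiple features per case.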
-
Sorry, it's been way too long since I checked here. I think a brainstorming session is the way to go. I have much of an answer in hand already (my own code), and another in hand as well (someone else's code, which has been exercised more broadly than mine). But much of the question is really about what the easiest path for migration into METplus would be. I realize (I think) we're late in the cycle for a release, but I would like to see this in the one after.
-
Though I start from a sea ice perspective, this is a general situation -- there is a feature whose location is known through time, whether a hurricane, storm system, sea ice floe, ... and we want to know how well the model forecast that motion.
In https://polar.ncep.noaa.gov/mmab/papers/tn315/ I discuss the details of how to do the comparison and the relative performance of the metrics as discriminators of forecast skill. Source code is in https://github.com/rgrumbine/ice_scoring/ (drift, in Fortran and C++) and in the SIDFEx submodule (in R) at https://github.com/rgrumbine/ice_scoring/SIDFEx, from the Sea Ice Drift Forecast Experiment -- https://sidfex.polarprediction.net/
Preferred measures are the directional bias and RMS, the ratio between observed and modelled speed, and the error radius (the distance between the observed and forecast locations).
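To illustrate those measures, here is a rough sketch (not the code from ice_scoring or SIDFEx; the input convention of lat/lon positions at a fixed time interval is an assumption for illustration) that computes the directional bias and RMS, the speed ratio, and the error radius for one forecast-observed track pair:

```python
# Hedged sketch of the preferred drift measures for a single forecast-observed
# track pair: directional bias and RMS of the direction error, the ratio of
# forecast to observed speed, and the error radius (forecast vs observed
# end-point separation). Input conventions (lat/lon pairs at a fixed time
# interval) are assumptions for illustration, not an existing MET interface.
import math

EARTH_RADIUS_KM = 6371.0

def displacement(p0, p1):
    """Approximate drift vector (east_km, north_km) between two (lat, lon) points."""
    lat0, lon0 = p0
    lat1, lon1 = p1
    dnorth = math.radians(lat1 - lat0) * EARTH_RADIUS_KM
    deast = math.radians(lon1 - lon0) * EARTH_RADIUS_KM * math.cos(math.radians(0.5 * (lat0 + lat1)))
    return deast, dnorth

def drift_metrics(fcst_track, obs_track, dt_hours):
    """fcst_track, obs_track: lists of (lat, lon) at the same valid times."""
    dir_errors, fcst_speeds, obs_speeds = [], [], []
    for i in range(len(obs_track) - 1):
        fe, fn = displacement(fcst_track[i], fcst_track[i + 1])
        oe, on = displacement(obs_track[i], obs_track[i + 1])
        # Direction error: angle between forecast and observed drift vectors, degrees
        ang = math.degrees(math.atan2(fe, fn) - math.atan2(oe, on))
        dir_errors.append((ang + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180)
        fcst_speeds.append(math.hypot(fe, fn) / dt_hours)
        obs_speeds.append(math.hypot(oe, on) / dt_hours)
    n = len(dir_errors)
    dir_bias = sum(dir_errors) / n
    dir_rms = math.sqrt(sum(e * e for e in dir_errors) / n)
    speed_ratio = sum(fcst_speeds) / sum(obs_speeds)
    # Error radius: separation of the forecast and observed end points
    ee, en = displacement(obs_track[-1], fcst_track[-1])
    error_radius_km = math.hypot(ee, en)
    return dir_bias, dir_rms, speed_ratio, error_radius_km
```

Direction errors are wrapped to [-180, 180) before averaging; a full circular-statistics treatment may be preferable when the errors are large.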
The language of storm discussions is similar -- 'the model moved the storm too quickly', 'the model is veering the storm to the north of its real track' and such.
For the sea ice case, verification data are buoy locations from the IABP and IPAB (an IO discussion has already started).