Sunday, December 9, 2018

Winter Storms and Flurries

High-impact weather events can either be annoyingly difficult to forecast...or a joy, depending on how you view it. While there is no right or wrong view on that, it can be an absolute nightmare when an event is forecast and doesn't occur, or vice versa. Several years ago, I wrote a post about the "Winter Storm that Never Was". That post focused more on whether to share snowfall total maps from a single model run. For this post, I wanted to look a bit more at the forecasting aspect.

Seven to ten days ago, model guidance was hinting at the potential for a significant snowfall across portions of the Central/Southern Plains. Snow lovers, rejoice! And haters...well, it's winter.

One example of what some model guidance was suggesting (this was the GFS forecast from last Sunday).

Fast-forward to the present day, and what was supposed to be a big snow ended up as scattered flurries, at least for parts of the Plains. Across parts of the Southern Plains, winter impacts were still felt, though (just ask the fine folks in Lubbock).

NOHRSC Modeled Snow Depth for Dec 9, 2018

This wasn't a case of no snow occurring at all, but the actual swath of snow was quite a bit different from what many models were showing days in advance. The tough part of this forecast was that the models continued to suggest a higher-impact snow, even up to 2-3 days in advance, for areas that ended up seeing no flakes at all, or a wintry mix rather than all snow.

NAM Snowfall Forecast 2-3 Days Out

I worked leading up to, and during, this event. Several things stuck out to me or came up in conversation within our office and with neighboring offices.

1) Trends are your friend, but know when to lock in. While not completely consistent, there was a noticeable southward trend with successive model runs. The trick is knowing when to bite on a solution in the middle of a trend. This can be especially difficult when a trend continues well into the Watch/Warning/Advisory decision window. It's a bit like deciding when to fill up on gas while prices are falling: you want to get the best price (forecast), but you don't want to run out of gas (miss the forecast). From a messaging standpoint, you want to give people as much lead time as possible while still guarding against crying-wolf syndrome. The suggestion here is to be cautious with specific impacts if the models are in the middle of a consistent trend. If you can, try to wait until the guidance "levels off". This may be especially important when models show a drier, less snowy, or less severe trend for your particular area. (For a rough idea of what "leveling off" might look like in numbers, see the sketch after this list.)

2) Consistency doesn't always equal higher confidence. There were several model runs in which the guidance was well clustered on QPF / snowfall amounts. Typically, this would equate to higher confidence for the forecaster. The catch is that run-to-run consistency on where the heaviest snow would fall wasn't always there. Consistency within one model cycle is great, but make sure to look at it in the context of previous runs (the sketch after this list touches on this, too).

3) I cannot stress this enough...don't let social media get to you. We can continue to educate folks and message events as best we can, but some things are simply misunderstood. Keep in mind, too, that the dreaded phrase, "They said...", while directed at us, likely includes non-meteorologists as well. John Q posting a 400-hour snowfall map from some model is probably getting lumped into people's view of the error in the forecast. We didn't post those maps, and yet we still get blamed. One suggestion is to take each event and, if you can, try to explain things to folks. I realize it won't always be received well, but don't give up trying. If this doesn't work, know when to just let it go.

4) Be honest with yourself. No matter how hard we try, we are going to bust at times. The models aren't perfect and neither are we. We all know this, but do we truly account for it in an honest post-event reflection? If you, personally, can do something better next time, then work at it. But realize that even after considering all of the above suggestions on trends, messaging, science, and consistency...you will miss a forecast from time to time. Period. You are not alone...we all miss forecasts. Richelle Goodrich said it well: “Many times what we perceive as an error or failure is actually a gift. And eventually we find that lessons learned from that discouraging experience prove to be of great worth.”

5) Postmortems! If you and/or your office do this already, great! If not, now is as good a time as any to start. It doesn't have to be a lengthy, detailed process. It could simply be an email pointing out which models did well or what stuck out to you during the event. Start a discussion. Figure out what went well and what didn't. Remember...be honest, with yourself and as an office, and learn from it. With each event, successful or not, we have an opportunity as individuals and as a team to improve. Make the most of each opportunity.
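To make the "leveling off" and run-to-run consistency ideas from items 1 and 2 a little more concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the snowfall numbers are made up, the 0.5-inch threshold is arbitrary, and it isn't tied to any particular model, office tool, or data feed. It's only meant to illustrate the kind of quick sanity check you could run on successive point forecasts.

```python
from statistics import mean, pstdev

# Hypothetical point snowfall forecasts (inches) from successive runs of
# one model, oldest to newest. The numbers are made up for illustration.
forecasts_by_run = [9.5, 8.0, 6.5, 5.2, 4.7, 4.5]

# Run-to-run changes: a steady drift in one direction suggests the trend
# may not be finished; small recent changes suggest the guidance is leveling off.
changes = [b - a for a, b in zip(forecasts_by_run, forecasts_by_run[1:])]
LEVEL_OFF = 0.5          # inches; arbitrary threshold for this example
recent = changes[-2:]    # the last couple of run-to-run changes

print(f"run-to-run changes: {changes}")
if all(abs(c) <= LEVEL_OFF for c in recent):
    print("Recent runs have leveled off -- locking in specifics is more defensible.")
elif all(c < 0 for c in recent):
    print("Still trending drier/less snowy -- be cautious with specific impacts.")
else:
    print("Mixed signals -- keep watching.")

# Consistency check (item 2): tight clustering among models in the latest
# cycle means less if the whole cluster keeps shifting between cycles.
latest_cycle = [4.2, 4.6, 4.4, 4.8]   # hypothetical amounts from several models
prior_cycle  = [6.8, 7.1, 6.9, 7.3]   # same models, one cycle earlier

print(f"spread within latest cycle: {pstdev(latest_cycle):.1f} in")
print(f"shift from prior cycle:     {abs(mean(latest_cycle) - mean(prior_cycle)):.1f} in")
```

The takeaway is the same as in the items above: a tight cluster in the latest cycle means a lot more once that cluster has stopped drifting from run to run.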

Forecasting the weather has its challenges, especially when high-impact events are at stake. What I'm learning is to implement change through lessons learned, to figure out how best to interpret model guidance in various scenarios, and to be honest with myself. But don't take my word for it...give it a try for yourself!

Note: if there are things you have learned from forecasting high-impact events, then let me know and add to the discussion!
