
Sunday, December 9, 2018

Winter Storms and Flurries

High-impact weather events can be annoyingly difficult to forecast...or a joy, depending on your point of view. Either way, it is an absolute nightmare when an event is forecast and doesn't occur, or vice versa. Several years ago, I wrote a post about the "Winter Storm that Never Was". That post focused more on sharing (or not sharing) snowfall total maps from a single model's output. For this post, I wanted to look a bit more at the forecasting aspect.

Seven to ten days ago, model guidance was hinting at the potential for a significant snowfall across portions of the Central/Southern Plains. Snow lovers, rejoice! And haters...well, it's winter.

One example of what some model guidance was suggesting (this was the GFS forecast from last Sunday)

Fast-forward to the present day, and what was supposed to be a big snow ended up as scattered flurries, at least for parts of the Plains. Across parts of the Southern Plains, winter impacts were still felt, though (just ask the fine folks in Lubbock).

NOHRSC Modeled Snow Depth for Dec 9, 2018

This wasn't a case of no snow occurring at all, but the actual swath of snow ended up quite different from what many models were showing days in advance. The tough part of this forecast was that the models continued to suggest a higher-impact snow, even 2-3 days in advance, for areas that ended up seeing no flakes at all, or more of a wintry mix than all snow.

NAM Snowfall Forecast 2-3 Days Out

I worked in the days leading up to, and during, this event. Several things stuck out to me or came up in conversation within our office and with neighboring offices.

1) Trends are your friend, but know when to lock in. While not completely consistent, there was a noticeable southward trend with successive model runs. The trick is knowing when to bite on a solution in the middle of a trend, which can be especially difficult when that trend continues well into the Watch/Warning/Advisory decision window. It's a bit like deciding when to fill up on gas while prices are falling: you want the best price (forecast), but you don't want to run out of gas (miss the event). From a messaging standpoint, you want to give people as much lead time as possible while still balancing the potential for crying wolf. The suggestion here is to be cautious with specific impacts while the models are in the middle of a consistent trend. If you can, try to wait until the guidance "levels off" (a toy version of that check is sketched after this list). This may be especially important when models show a drier, less snowy, or less severe trend for your particular area.

2) Consistency doesn't always equal higher confidence. There were several model cycles in which the guidance was well-clustered on QPF (quantitative precipitation forecast) and snowfall amounts. Typically, this would equate to higher confidence for the forecaster. The catch is that run-to-run consistency on where the heaviest snow would fall wasn't always there. Consistency within one model cycle is great, but make sure to view it in the context of previous runs (the sketch after this list also shows how amounts can cluster while placement still jumps around).

3) I cannot stress this enough...don't let social media get to you. We can continue to educate folks and message events as best we can, but some things are simply misunderstood. Keep in mind, too, that the dreaded phrase "They said...", while directed at us, likely includes non-meteorologists as well. John Q posting a 400-hr snowfall map from some model probably gets lumped into people's view of the error in the forecast. We didn't post those maps, and yet we still get blamed. One suggestion is to take each event and, if you can, try to explain things to folks. I realize it won't always be received well, but don't give up trying. If this doesn't work, know when to just let it go.

4) Be honest with yourself. No matter how hard we try, we are going to bust at times. The models aren't perfect and neither are we. We all know this, but do we truly account for it in an honest post-event reflection? If you, personally, can do something better next time, then work at it. But, realize that even after considering all of the above suggestions on trends, messaging, science, and consistency...you will miss a forecast from time to time. Period. You are not alone...we all will miss forecasts. Richelle Goodrich said it well - “Many times what we perceive as an error or failure is actually a gift. And eventually we find that lessons learned from that discouraging experience prove to be of great worth.”

5) Postmortems! If you and/or your office do this already, great! If not, now is as good a time as any to start. It doesn't have to be a lengthy, detailed process; it could simply be an email pointing out which models did well or what stuck out to you during the event. Start a discussion. Figure out what went well and what didn't. Remember: be honest, both with yourself and as an office, and learn from it. With each event, successful or not, we have an opportunity as individuals and as a team to improve. Make the most of each opportunity.
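
To make points 1 and 2 a bit more concrete, here is a minimal, hypothetical sketch (in Python) of what "waiting for guidance to level off" and "clustered amounts with scattered placement" might look like as simple checks. Every number below is made up for illustration; a real workflow would pull these values from actual model output.

```python
def leveled_off(values, window=3, tolerance=1.0):
    """Point 1: have the last `window` runs agreed to within `tolerance`?"""
    if len(values) < window:
        return False  # too few runs to judge consistency
    recent = values[-window:]
    return max(recent) - min(recent) <= tolerance

# Point 1: successive forecasts (inches) for one point, trending drier,
# then stabilizing over the last three runs (made-up values).
snowfall_runs = [10.0, 8.5, 6.0, 4.0, 3.5, 3.8]
print("amounts leveled off:", leveled_off(snowfall_runs))  # True

# Point 2: amounts can cluster tightly while placement still jumps around.
# Positions are where each run put the heavy-snow axis, in miles north (+)
# or south (-) of a reference point.
amounts_by_run = [5.0, 5.5, 5.25, 4.75]  # well-clustered snowfall amounts
axis_positions = [40, -25, 60, -10]      # heavy-snow axis wandering
print("amount spread  :", max(amounts_by_run) - min(amounts_by_run), "in")
print("position spread:", max(axis_positions) - min(axis_positions), "miles")
# Clustered amounts alone would suggest high confidence, but an 85-mile
# swing in the axis says the 'where' is anything but settled.
```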

Forecasting the weather has its challenges, especially when high-impact events are at stake. What I'm learning is to implement change through lessons learned, to figure out how best to interpret model guidance in various scenarios, and to be honest with myself. But don't take my word for it...give it a try for yourself!

Note: if there are things you have learned from forecasting high-impact events, let me know and add to the discussion!

Thursday, August 2, 2018

The Target of Opportunity Trap

A buzz phrase in the NWS right now is "targets of opportunity". The idea is to find the areas of the forecast that need the most attention and where the forecaster can add value. Any part of the forecast not considered a target of opportunity can probably be left to the models to handle.

That last sentence can be a bit worrisome, though, because it seems to be the "beginning of the end" of the human element of forecasting as we know it. Whether it is or isn't, I don't know. What I do know is that the "end" has not arrived. My concern is that forecasters will let that last sentence be all they hear and start acting like the end has already come.

The support we provide as meteorologists to our clients, partners, and the public starts with a solid forecast...and a solid forecast starts with a sound, scientific approach. The models are certainly improving on that front, but they aren't perfect, and there are times when the forecaster CAN add value. The key, in my opinion, is learning when to let the models do their thing and when to deviate. I believe finding this balance is in the best interest of those we serve.

While I support the target of opportunity concept, my concern, as mentioned earlier, is that it will have a negative impact on some forecasters. "If models are doing so well, why even bother anymore?" some might say. The problem is that this frame of mind can lead to missed opportunities to add value. Missing those opportunities can lead to a less-than-ideal forecast, which in turn can lead to a less-than-ideal service. Being a service industry means keeping the needs of those we serve at the forefront of what we do. Living in fear of losing our jobs to models, or assuming the models are always best, can ultimately degrade that service. The opportunity to add value may be smaller than it was 5-10 years ago, but it isn't non-existent. Be intentional about finding those opportunities.

I believe keeping sharp on the science is one way to find those opportunities to add value. It can also help us as forecasters know how much to deviate from the models and how best to message these impactful, or potentially impactful, periods/events.

One such target of opportunity I have often seen is with convection. Sometimes the models are spot-on, especially with all the recent development of CAMs (convection-allowing models), but other times they are horribly wrong. When they are wrong, it is important to know why; knowing why can help guide the forecast into later periods. Being intentional about keeping up with the science can help answer that 'why' and, in turn, can lead to the best possible forecast and service.

At other times, a target of opportunity may simply be figuring out which model(s) handle certain impactful patterns/events better than others and leaning the forecast in that direction. The various blends out there work great in many situations, but at other times, certain members of those blends outperform the blend itself. Learn when to deviate from the blends; research and model verification can help with this (a toy example of that kind of check is sketched below).
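
As one illustration, here is a minimal, hypothetical Python sketch of the kind of verification that can justify deviating from a blend. The models, forecasts, and observations are all made up; a real check would use an actual verification archive and far more cases.

```python
def mean_abs_error(forecasts, observed):
    """Average absolute forecast error over a set of past events."""
    return sum(abs(f - o) for f, o in zip(forecasts, observed)) / len(observed)

# Made-up 24-hr snowfall forecasts (inches) for five past, similar events,
# plus what was actually observed.
model_a  = [6.0, 2.0, 9.0, 0.5, 4.0]
model_b  = [3.0, 5.0, 4.0, 3.0, 1.0]
blend    = [(a + b) / 2 for a, b in zip(model_a, model_b)]  # equal-weight blend
observed = [5.5, 2.5, 8.0, 1.0, 3.5]

for name, fcst in [("model A", model_a), ("model B", model_b), ("blend", blend)]:
    print(f"{name}: MAE = {mean_abs_error(fcst, observed):.2f} in")
# Here model A beats the equal-weight blend for this (made-up) pattern.
# If a member consistently beats the blend for a given regime, that is a
# candidate target of opportunity for leaning away from the blend.
```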

I won't go into all the different targets of opportunity, but I strongly encourage anyone out there who is struggling with this concept not to let it become a motivation killer. Be intentional about finding the balance between model value and human value. Keep sharp on the science. Keep up with, and research, model performance and verification. With this approach, I believe we have the opportunity to provide the best service possible to those counting on us.