Wednesday, July 31, 2019

The Best Service Starts with Weakness

One of my favorite tasks in the NWS is working radar. I enjoy the challenge of figuring out what storms are, or will be, doing and how to best communicate those threats to people (through warning vs. not warning decisions, what tags to use, and so on). Ironically, though, working radar also happens to be the task that I most often deal with some anxiety about. Typing that last sentence is difficult for me and I keep thinking through ways to word it in a way that sounds the least negative. That wouldn't be very honest of me, so I'll just leave it as is. Oh, the joy of weaknesses and struggles. Can I get an 'amen!'?! No takers? Didn't think so.

As a general rule, I struggle with being open and honest with co-workers regarding my weak areas. I subscribe to the 'honesty is the best policy' idea, but really struggle to put that into practice sometimes. The past year here at the NWS Wichita office (hard to believe it has been a year already!) has challenged me a lot in this area of honesty. What I've come to realize is that my struggle with being honest with others actually starts with the struggle to be honest with myself.

At my first NWS office (Great Falls, MT), I had very little anxiety when working radar. A lot of people in the office didn't want to work radar, which gave me lots of opportunities to gain experience and confidence. You'd think that confidence would follow me here to Kansas, but it didn't quite work out that way. In Montana, I was warning for small towns and cows. But now I am warning for a much larger population. The city of Wichita, alone, has more people than the entire NWS Great Falls CWA. The magnitude of severe weather is different here as well, especially regarding the higher frequency of tornadoes and the significant threat to life and property that tornadoes, alone, carry. There are simply more people in my current CWA to be "hit". Radar is radar, though. It's not like dBZ values have a different meaning here. CC still drops with debris, and Dual-Pol data still reveals large hail, tornadoes, and heavy rain processes. The only thing that changed was my mindset (stressful vs. not stressful) toward working radar.

I didn't want to admit this and for months I kept trying to hide it...from myself. Which, if you think about it, is kind of a silly goal, and yet it is a very real problem. The truth is, I got some great radar experience while in Montana, but still needed more. I didn't want to need more. I wanted to walk into ICT a confident radar operator, being the go-to guy in severe events. I wanted to be trusted at any point in any event. It simply did not work out that way.

A year in, I have gained additional experience and my confidence is coming back. But, it's still not to the level I want to be at in order to provide the best possible service I can. Part of what has helped me is finally coming to terms with my struggle. I admitted it to myself...I still need improvement on radar. It took me long enough to admit that, but now it is time to be honest with others - this blog post is a part of that and has been a year in the making! In a strange way, working on this post has actually helped me feel more confident. Not that I magically have things all figured out, but now when tasked with working radar, I have one less weight on my shoulder - the weight of acting like I have it all together. Whether anyone in my office ever sees this post, the simple act of typing it out, and making it public, is a huge step even in just being honest with myself.

On a side note, I somewhat comically think back to the first time I was tasked with radar here at ICT. Man, I was so nervous that I was struggling even with the less-visible SPS (a far cry from the confident radar guy I was back in Montana). At one point, the Lead Forecaster I was working with nicely remarked, "You know, it's ok to issue an SPS". I appreciate the grace in his patience. I have come a long way since that day, but it sure was a low point for me.

So, what about you? Are there any weaknesses or struggles that deep down you know are there, but don't want to admit it? Maybe it isn't even a weakness, per se...maybe you don't like forecasting or public briefings, but are afraid to admit it. Maybe you don't mind being open with others, but fear their response. I totally understand the concern. I'm not saying you go into work today and draft up an all-hands email about every struggle. Maybe for now you simply need to be honest with yourself.

The beauty here is that the admission of weakness is the beginning of wisdom.

Convincing ourselves that a weakness or struggle doesn't exist, or isn't that big of a deal, can cause us to miss important learning opportunities. Consider this as well...what you learn through your weakness now, could play a vital role in you helping someone else through a similar weakness down the road. Just the simple fact that you are not the only one with a certain struggle can be encouraging!

Now, I will say, being honest with others may open the door to less-than-receptive responses and I don't want to ignore that possibility. Even in great working environments, there are often one or two who may give people a hard time for showing, or sharing, struggles. If one imperfect individual cannot accept someone else's imperfection...well, that is on them. I know this doesn't ease the fear of a negative response, but I do still believe in the importance of being open with others at times. Start with yourself, then consider opening up to at least a few others.

We ALL have weaknesses and struggles. There are no perfect Meteorologists. If we're honest about our weaknesses and patient with others', we can all work together to provide the best service possible. If you want to set yourself, and others, up for success, then start with honesty.

Sunday, February 10, 2019

Value of the Mesoanalyst Position

Modern-day forecasting within the National Weather Service (NWS) continues to change as modeling, technology, and communications evolve. For some this is exciting...for others perhaps not as much. In reality, I would venture to say that for many, the evolution is a mix of both.

Last year, I read a #WxTwitter thread that discussed the Mesoanalyst role (Meso-A, I'll call it). The discussion drifted from the original topic to a discussion about hand analysis. The way the chat became so focused around hand analysis got me to thinking that perhaps there needs to be a change in how the position is viewed (I include myself in this).

Filling the Meso-A role doesn't always have to involve hourly hand analyses or manually calculating CAPE. The thought of either of those probably scares some people away. Perhaps at one time the position was centered around manually analyzing surface and upper air charts...for some it probably still is. Technology continues to advance and while I hope manual analysis is never completely abandoned, the days of whipping out the colored pencils may not be as widespread as they once were. So, let's step away from how tasks are accomplished and instead evaluate the role in consideration of current technology and model capabilities and whether it can still play an important role in the current NWS service model.

A key concept within the NWS right now is IDSS - Impact-Based Decision Support Services. Achieving goals within this framework requires some evolutions of past service models/concepts/methodologies. I believe one such evolution involves the Meso-A concept. To determine if a concept needs to evolve, we should start with why it was put there in the first place. I see the forecaster in the Meso-A position as being tasked with monitoring and effectively communicating the current state of the environment as well as expected near-term changes / trends.

Note that there is no explicit clarification on how this task is accomplished. Instead of focusing on a specific task, or means to accomplish a task, we should focus on the goal of the position to help determine if it fits into the current framework of NWS operations. Just purely based on how I defined it above, I would argue that it does still play an important role. But, let's dig a little deeper.

With advances in modeling and analysis, have we reached a point where the Meso-A tasks can be combined with other roles (communications, warnings, forecasting) as opposed to it being the primary task of one forecaster? This question may actually be more important than the definition of the position.

The Meso-A position isn't the only one to evolve. Think about how the communications role has evolved with the advent of social media. Comms isn't just answering the phones anymore. In fact, comms has evolved so much that it will sometimes be a 2-forecaster job. Does DSS get its own forecaster or is that lumped in with comms? In addition, radar operators now have Dual-Pol, SAILS, TWIP research, etc., to consider. Offices handle tasks differently, but the reality is that the workload has evolved with evolutions in messaging and technology.

With that in mind, is it possible that the Meso-A position is actually even more important now than ever before? Maybe in the past the forecaster answering the phones could also handle monitoring and communicating environmental changes. Now that person may be tasked with phones, social media, and DSS briefings. Consider, too, how the position can support ALL of the other positions.

A stronger-than-forecast low-level jet quickly developing with a subsequent increased tornado threat, for example, might be missed by other forecasters knee-deep in warnings, social media graphics, or in-situ oral briefings to emergency managers. The Meso-A forecaster noting this change, especially if it was not modeled well, could provide very important information to aid in warning decisions, staffing and/or forecaster responsibilities, and messaging to the public and partners.

Even with advances in modeling, there remains an important human element at play. That's the Target of Opportunity concept within the NWS. Whether the models are handling an event well or not, at minimum it can help to have someone monitor and communicate the current state of the atmosphere as well as important trends. Then, when you do have those events that are not being handled well by the models, you have someone who can manually adjust on the fly as an important part of the office's life-saving mission.

The NWS strives to protect lives and property and I firmly believe the Meso-A position, if seen and utilized effectively, can be a vital Target of Opportunity within this mission. I have always believed in the role of the Mesoanalyst, but in thinking about it further, I have become even more convinced of its relevance. But, don't take my word for it. For some additional thoughts on the position, check out the findings in these RAMMB and OPG documents.

One final comment...the heart behind this post was not to argue for or against how the role is performed. My hope is to encourage folks to see the value of the role without the distraction of how one goes about accomplishing associated tasks. Effectively monitor and communicate...that's the idea. How that is done will differ from forecaster to forecaster, but I believe it is a vital support function during many severe weather events.

Sunday, December 9, 2018

Winter Storms and Flurries

High-impact weather events can either be annoyingly difficult to forecast...or a joy, depending on how you view it. While there is no right or wrong view on that, it can be an absolute nightmare when it is forecast and doesn't occur, or vice-versa. Several years ago, I wrote a post about the "Winter Storm that Never Was". The focus with that post was more about sharing (or not sharing) snowfall total maps from one model output. For this post, I wanted to look a bit more at the forecasting aspect.

7 to 10 days ago, model guidance was hinting at the potential of a significant snowfall for portions of the Central/Southern Plains. Snow lovers rejoice! And, haters...well, it's winter.

One example of what some model guidance was suggesting (this was the GFS forecast from last Sunday)
Fast-forward to present day and what was supposed to be a big snow ended up being scattered flurries, at least for parts of the Plains. Across parts of the Southern Plains, winter impacts were still felt, though (just ask the fine folks in Lubbock).

NOHRSC Modeled Snow Depth for Dec 9, 2018
This wasn't a case of no snow occurring at all, but the actual swath of snow was quite a bit different compared to what many models were showing days in advance. The tough part with this forecast was that the models continued to suggest a higher-impact snow even up to 2-3 days or so in advance for areas that ended up seeing no flakes at all, or more of a wintry mix as opposed to all snow.

NAM Snowfall Forecast 2-3 Days Out
I worked leading up to, and during, this event. Several things stuck out to me and/or came up in conversation within our office and with neighboring offices.

1) Trends are your friend, but know when to lock in. While not completely consistent, there was a noticeable trend further south with successive model runs. The trick here is knowing when to bite on a solution in the middle of a trend. This can be especially difficult when a certain trend continues well into the Watch/Warning/Advisory decision window. This is kind of like figuring out when to fill up on gas while prices are falling. You want to get the best price (forecast), but don't want to run out of gas (miss the forecast). From a messaging standpoint, you want to give people as much lead time as possible while still balancing out the potential crying wolf syndrome. The suggestion here is to be cautious with specific impacts if the models are in the middle of a consistent trend. If you can, try to wait until the guidance "levels off". This may be especially important when models show a drier, less snowy, less severe, etc type of trend for your particular area.

2) Consistency doesn't always equal higher confidence. There were several model runs in which the guidance was well-clustered on QPF / snowfall amounts. Typically, this would equate to higher confidence for the forecaster. The catch is that run-to-run consistency on where the heaviest snow would fall wasn't always there. Consistency in one model cycle is great, but make sure to look at it in the context of previous runs.

3) I cannot stress this enough...don't let social media get to you. We can continue to educate folks and message events as best we can, but some things are simply misunderstood. Keep in mind, too, that the dreaded phrase, "They said...", while directed at us, likely includes non-Meteorologists as well. John Q posting a 400-hr snowfall map from some model is probably getting lumped into people's view of the error in the forecast. We didn't post those maps and yet we still get blamed. One suggestion is to take each event and, if you can, try to explain things to folks. I realize it won't always be received well, but don't give up trying. If this doesn't work, know when to just let it go.

4) Be honest with yourself. No matter how hard we try, we are going to bust at times. The models aren't perfect and neither are we. We all know this, but do we truly account for it in an honest post-event reflection? If you, personally, can do something better next time, then work at it. But, realize that even after considering all of the above suggestions on trends, messaging, and science, you will miss a forecast from time to time. Period. You are not alone...we all will miss forecasts. Richelle Goodrich said it well - “Many times what we perceive as an error or failure is actually a gift. And eventually we find that lessons learned from that discouraging experience prove to be of great worth.”

5) Postmortems! If you and/or your office do this already, great! If not, now is as good a time as any to start. It doesn't have to be a lengthy, detailed process. It could simply be an email pointing out which models did well or what stuck out to you during the event. Start a discussion. Figure out what went well and what didn't. Be honest with yourself and as an office, and learn from it. With each event, successful or not, we have an opportunity as individuals and as a team to improve. Make the most of each opportunity.

Forecasting the weather has its challenges, especially when high-impact events are at stake. What I'm learning is to implement change through lessons learned, figure out how to best interpret model guidance in various scenarios, and to be honest with myself. But, don't take my word for it...give it a try for yourself!

Note: if there are things you have learned from forecasting high-impact events, then let me know and add to the discussion!

Thursday, December 6, 2018

Confessions of a Prideaholic

Six months ago, I started at my new office here in Wichita. On my first day, my MIC (Meteorologist-in-Charge) was giving me a run-down of the office - a who's who of the staff, mentioning various forecasters who would be good resources for radar, outreach, etc. In that moment, I immediately found myself desiring to have my name added to that list for future run-downs with new staff members. What's the big deal, you might ask? After all, what's wrong with shooting for doing your best and getting credit for it? For me, the problem is my motivation. I don't know where it came from or when it started, but somewhere down the line, I developed the Meteorologist's version of a borderline superiority complex. When I first joined the National Weather Service (NWS) several years ago, I walked into that office acting like I had it all figured out. Turns out I didn't. Almost four years in, and those pesky thoughts of superiority keep trying to creep back in.

In my short time here at NWS Wichita (ICT), I am once again reminded that I don't know everything, that I'm not the best thing since sliced bread. But, here's the thing. Deep down, I've never actually believed that I know EVERYTHING, and yet if you could read my thoughts, you might think otherwise. The cause seems to be rooted in a poor self-assessment.

I have this tendency to analyze / assess people...their strengths and weaknesses, motivations, etc. There is a part of that that's enjoyable, especially when I am able to help others figure out what may be driving a person to do a certain thing - almost like a detective. But, when it comes to self-assessment, my effective analyzing seems to go out the window at times. On one hand, there are things that I actually do well, but struggle to believe it. On the other hand, there are things that I believe I do well that, in reality, I am not as good at.

Both sides create problems. Option "A" leads to an unrealistic lack of confidence which can cause others to believe I am not as good as I actually am at something. Think about the potential for missed opportunities there. Option "B" leads to an unrealistic surplus of confidence, potentially causing folks to trust me with something that is better suited for someone else. This can also cause me to miss out on opportunities, specifically opportunities to improve.

My self-assessment is the worst where my pride is the strongest. I can quickly say I am not the best fire weather forecaster out there because, frankly, I don't get wrapped up in what others think of my fire weather forecasting abilities, or lack thereof, and it isn't high on my passion list. But, ask me about convection, severe weather, or radar and that's a different ballgame. Those three are big on my passions list and maybe pride is the strongest where passion is the greatest. Passion is a great asset, but with great passion comes great responsibility...a challenge to properly assess myself and others. I say 'and others' because pride can cause me to not only give myself a poor assessment, but also others.

I believe it is always important to strive to do our best, whether it is high on our passion list or not. But, equally as important is the call to give ourselves and others a proper assessment. Be humbly confident in what you are good at, but honest enough to know where you could use some improvement. There are areas where you will, in fact, be better than someone else at something, but don't make that your goal. Instead, find ways to encourage others and help them succeed. Similarly, keep an open mind when people who are better than you at something come to help you succeed. Or, better yet, go ask someone more knowledgeable than you for guidance.

Being humble in this way and keeping our thoughts from going off the prideful deep end can aid in effective collaboration, improved service, and a stronger, more knowledgeable workforce.

Thursday, September 13, 2018

Can God Stop Florence?

Significant weather events always get me thinking...so much so that I had to find somewhere to put all these thoughts (the good, the bad, and the ugly) - hence the creation of this blog. Tonight's "Masterpiece Theater" is Florence. Hurricane Florence is not only forecast to clash with the Carolinas, but also the Bible Belt.

Satellite image of Hurricane Florence (Sept 12, 2018)
Strong language has been coming from the Meteorology community with this one - words like "life-threatening" and "catastrophic". We all know hurricanes can be damaging near and just inland of a coastline - nothing overly new there regarding Florence. The problem with this storm, though, is the potential for it to slow down after landfall, potentially dropping one to two FEET of rain along its path as it takes a scenic trip around the Southeast.

Because of this potential, Meteorologists are encouraging "prepare, prepare, prepare". Meanwhile, the Bible Belt is encouraging "pray, pray, pray". So, should we pray? Should we prepare? Maybe a 60/40 blend of prayer and preparation? Slap up a piece of plywood, then pray for 5 minutes kind of a deal sounds good. But, what about those who don't believe in God? I suppose they'll have more boards on their houses...

I jest a little there, but seriously, what do we make of this? I personally find myself in a unique place because I speak from the Meteorology community and the Bible Belt. I am a firm believer in the Creator of this place we call earth, but I am also a firm believer in the science of this thing we call weather.

The bridge between the two is socked in with fog, making for some difficult unknowns.

Is there actually a God on the other side? And, if so, will he stop Florence in its tracks? I have never walked through the fog nor crossed the bridge, but I believe God exists on the other side. I also believe the fog exists and, for whatever reason, is shrouding the view of what is or isn't on the other side.

I 100% believe God can stop Florence dead in its tracks. But, will he? I 100% do not know. As a Meteorologist who believes in God, I have to wrestle with this unknown all the time. I regularly encourage people to prepare...it's core to my job and my passion to help people understand the weather. But also core to who I am is the belief that with God, all things are possible - I know, I know, the religious red flag just went up for some. Hang with me...

I don't know if God will stop Florence or not. But, maybe that's not the question we should be asking at a time like this. The better question may be, 'should I prepare?'. The Meteorologist and Bible Belt in me says, emphatically, yes! But, as LeVar Burton used to say on "Reading Rainbow", 'Don't take my word for it'...

"A prudent person foresees danger and takes precautions. The simpleton goes blindly on and suffers the consequences." - Proverbs 22:3 (aka. the Bible)

Thursday, August 2, 2018

The Target of Opportunity Trap

A buzz word/phrase in the NWS right now is "targets of opportunity". The idea is to find those areas of the forecast that need the most attention and where value can be added by the forecaster. Any part of the forecast not considered a target of opportunity can probably be left to the models to handle.

That last sentence can be a bit worrisome, though, because it seems to be the "beginning of the end" of the human element of forecasting as we know it. Whether it is or isn't, I don't know. What I do know is that the "end" has not arrived. My concern is that forecasters will let that last sentence be all they hear and start acting like the end has already come.

As Meteorologists, the support we provide to our clients, partners, and/or the public starts with a solid forecast...and a solid forecast starts with a sound, scientific approach. The models are certainly improving on the sound, scientific approach aspect, but they aren't perfect and there are times when the forecaster CAN add value. The key, in my opinion, is learning when to let the models do their thing and when to deviate. I believe finding this balance is in the best interest of those we serve.

While I support the target of opportunity concept, my concern, as mentioned earlier, is that it will have a negative impact on some forecasters. "If models are doing so well, why even bother anymore?" some might say. The problem here is that frame of mind can lead to missed opportunities to add value. Missing those opportunities may lead to a less-than-ideal forecast which could lead to a less-than-ideal service. Being a service industry means keeping the needs of those we serve at the forefront of what we do. Living in fear of losing our jobs to models, or assuming models are always best, can ultimately lead to a degraded service. The opportunity to add value may be lower than it was 5-10 years ago, but it isn't non-existent. Be intentional about finding those opportunities.

I believe keeping sharp on the science is one way to aid in finding those opportunities to add value. This can also help us as forecasters to know how much to deviate from the models and how to best message these impactful, or potentially impactful, periods/events.

One such target of opportunity I have often seen is with convection. Sometimes the models are spot-on, especially with all the recent CAM (Convection-Allowing Model) development, but other times they are horribly wrong. When they are wrong, it is important to know why. Knowing why can help guide the forecast into later periods. Being intentional to keep up with the science can help answer the question 'why' and can provide guidance on the forecast. This, in turn, can lead to the best possible forecast and service.

At other times, a target of opportunity may simply be figuring out which model(s) handle certain impactful patterns/events better than others and leaning the forecast that direction. The various blends out there work great in many situations, but at other times, certain members of those blends out-perform the blend, itself. Learn when to deviate from the blends (research and model verification can help with this). 
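
To make the "lean toward better-verifying members" idea concrete, here is a minimal sketch of one simple way it could be done: weight each model inversely by its recent mean absolute error, then blend. All model names, error values, and forecast amounts below are hypothetical, and this is just an illustration, not an actual NWS tool or method.

```python
# Hypothetical sketch: leaning a blend toward models that have verified
# better recently. Weights are inversely proportional to each model's
# recent mean absolute error (MAE), so lower-error guidance counts more.

def inverse_mae_weights(mae_by_model):
    """Return normalized weights, inversely proportional to each model's MAE."""
    inv = {model: 1.0 / mae for model, mae in mae_by_model.items()}
    total = sum(inv.values())
    return {model: w / total for model, w in inv.items()}

def weighted_blend(forecasts, weights):
    """Blend point forecasts (e.g., snowfall in inches) using the weights."""
    return sum(forecasts[model] * weights[model] for model in forecasts)

# Made-up recent MAE (inches) from a hypothetical verification exercise
mae = {"GFS": 2.0, "NAM": 1.0, "ECMWF": 1.0}
weights = inverse_mae_weights(mae)  # GFS gets the smallest weight

# Made-up snowfall forecasts (inches) for one point
forecasts = {"GFS": 3.0, "NAM": 6.0, "ECMWF": 5.0}
print(round(weighted_blend(forecasts, weights), 2))
```

The point isn't this particular weighting scheme; it's that local verification can be folded into the blend in a simple, repeatable way instead of by gut feel alone.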

I won't go into all the different targets of opportunity, but I strongly encourage anyone out there who is struggling with this concept to not let it become a motivation killer. Be intentional about finding that balance between model value and human value. Keep sharp on the science. Keep up with / research model performance/verification. With this approach, I believe we have the opportunity to provide the best service possible to those counting on us.

Thursday, April 26, 2018

Probabilistic vs Deterministic Messaging

This past Winter, our office (TFX) participated in the NWS' prob snow experiment. For those who might not be familiar with what that is or what it involved, it was a way to experiment with utilizing snowfall probability information within operations and decision-support activities. Perhaps in another post I'll ponder the good and the bad about the experiment, itself. But, for now, I wanted to take it a different direction.

Related to that experiment, a question came up...is probabilistic or deterministic information better when it comes to messaging? At its core, this question is part of a larger and ongoing debate related to the effective communication of weather hazards. That debate is a fascinating and challenging one, but is probably too long for one post. For now, I'll just address the one piece of the puzzle that focuses on probabilistic vs deterministic messaging.

When it comes to snowfall amounts, what do we often see? Ranges. And, we seem to gravitate toward certain ranges at that. 1-3", 3-6", 6-12". If you are one of those rebel types, you might even use 2-5" or 3-7". Oh the humanity...

The interesting (note I said interesting and not necessarily bad) part about the end-user's use of ranges is the seemingly automatic focus on the high number. Knowing the worst-case scenario isn't a bad thing in and of itself, but how it's used can be. Just before a winter storm a couple months ago, a friend of mine texted me and said, 'Hey! I heard we are supposed to get 7" of snow'. The winter product from our office said something to the effect of 2-4" with isolated amounts up to 7", if I remember correctly. My friend read that as we are getting 7" of snow. I doubt he was alone in that assessment.

I often give ranges when messaging upcoming snowfall events and I am not here to argue against that. My end-game is to think through the different possibilities. Recently, I decided to give the ole probability method a try. Prior to a winter event, a caller asked how much snow we expected for her area. With experimentation on the brain, I boldly informed her that there was an 80% chance of exceeding 4" at her house. To which she replied, 'So, do you think we might get a foot?'.

The sample size on my little experiment is incredibly small. But, how much would you be willing to wager against her response representing a large part of the population? One thing that stuck out to me in her response was the 12" amount. After talking with her more, I got the sense that 12" is when she starts having problems in her world. It's the point when her daily plans change. I believe that is why her mind immediately jumped to a foot. For her, my arbitrary percentage-greater-than-x-amount didn't help. Now, had I given her the probability of exceeding 12", well that could have been a different story. Would she have been able to interpret it effectively? I don't know.

When it comes to the general public, the thresholds for when action is taken are all over the place. That lady's threshold was 12". A recent transplant from the South would probably have a different response. So where does that leave us as Meteorologists? In a very challenging position. We have a responsibility to message hazardous weather, but to a group of people who don't even share a common breaking point.

On the flip side, we have individuals or groups (DOT, emergency managers, etc) that often DO have specific thresholds that we can know. I watched an enlightening presentation recently that looked at the potential effectiveness of probability information for decision-makers like the DOT. I get the sense that probability messaging works great for them. Honestly, I believe it could work great for the general public as well. The challenge is our inability to know each and every person's breaking point.

One part of the prob snow experiment that I really liked was that it gave probability information for several breaking points (2", 4", 6", 8", 12", 16"). We may not be able to know all thresholds, but we can certainly try to cover as many as possible in our messaging, within reason. But, that's just snow. What about rain, hail size, tornadoes, tornado strength, etc? Do we say "this storm will produce up to golf ball size hail" or "there is an 80% chance of exceeding quarter size hail"? I'm not sure a warning product is the place to put a lot of probability wording, if nothing else but for the sake of time/understanding. Imagine The Weather Channel scrolling the probability of multiple thresholds, or hearing those probabilities being read over Weather Radio broadcasts/statements?
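
To illustrate the multiple-breaking-points idea, here is a minimal sketch of how exceedance probabilities for several thresholds could be pulled from ensemble guidance. The member values are made-up numbers, not actual prob snow experiment data, and real ensemble post-processing is more involved than a simple member count.

```python
# Hypothetical sketch: exceedance probabilities for several snowfall
# "breaking points" from ensemble guidance, estimated as the fraction
# of members forecasting at least that much snow.

def exceedance_probs(members, thresholds):
    """For each threshold (inches), return the fraction of ensemble
    members whose snowfall forecast meets or exceeds it."""
    n = len(members)
    return {t: sum(1 for m in members if m >= t) / n for t in thresholds}

# 10 made-up ensemble member forecasts (snowfall in inches) for one point
members = [1.8, 2.5, 3.1, 3.9, 4.2, 4.8, 5.5, 6.3, 7.0, 9.4]

# The breaking points used in the prob snow experiment
thresholds = [2, 4, 6, 8, 12, 16]

probs = exceedance_probs(members, thresholds)
for t in thresholds:
    print(f'P(snow >= {t}") = {probs[t]:.0%}')
```

A table like that covers the DOT's threshold, the lady with the 12" breaking point, and the transplant from the South all at once...each user can read off the row that matters to them.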

My answer to the question of probabilistic or deterministic? The jury is still out, but I imagine it involves some sort of a mix that relates to the known users, the product, and the event. I don't know if there will come a time when all of our messages are completely understood, used correctly, and heeded, but working through and experimenting with this piece of the puzzle is beneficial to the larger discussion regarding effective communication. In the spirit of probabilistic messaging, I will inform you that there is a 100% chance that I will blog more about effective communication down the road...