As I went through the meteorology program at NIU (Northern Illinois University), one of the most important lessons I learned was the "20-minute rule". This rule was meant to drill into our heads the importance of 'Nowcasting': not jumping into the models at the beginning of a forecast discussion without first properly assessing the current observed weather conditions for the area in question. At first, I thought this rule was complete BS. Being a cocky, know-it-all underclassman, I disregarded this philosophy as an unnecessary hurdle that professors were making me jump over just because they felt like implementing it. By the time I became a junior, I was humbled very quickly when I had to present a forecast discussion for Telluride, Colorado. During my discussion, the weather models had called for nearly a foot of snow near Telluride. I made two mistakes that day: 1. I broke the 20-minute rule within 7 minutes of the presentation (that shouldn't surprise any of you, given my disdain for the rule). 2. I never once assessed my forecast area and the topography that heavily influences it (which proved to be the fatal mistake). Because I relied too much on the models and not enough on 'Nowcasting', my snowfall forecast for Telluride was a complete dud! That forecast happened to be the best thing that could've happened to me as a student in the meteorology program. It taught me to be humble and willing to learn from those who walked this road before me. That sense of humility and curiosity is what I carry with me today and beyond!
The aftermath of Hurricane Otis was a gentle reminder of the lessons learned just a couple of years ago. Just 24 hours ago, Otis was merely a tropical storm. As of this morning, Otis had strengthened into a major Category 5 hurricane within a 12-hour span. Rapid intensification is not some new phenomenon in tropical weather forecasting. What was unique, however, was how wrong the models were in forecasting this system. With just a shallow corridor of warm water off the coast of Acapulco, Mexico, some of our most trusted tropical forecast models did not anticipate such quick intensification of Otis. Even the forecasted intensity spread only had Otis reaching high-end tropical storm strength by the time it made landfall in southwestern Mexico. We have seen models overperform and underperform on forecasted weather events, but seeing weather models fail by such a large margin is very rare. What you might be asking is this: how do we fix it? Unfortunately, there is no clear answer that would solve model inconsistencies. After all, weather models are meant to give a probabilistic view, using varying algorithms that offer different perspectives on a forecast (e.g., the GFS and the Euro, to name a few). Case in point: weather models are not meant to provide 100% accuracy (there would be no need for me if that were the case!). That said, a failure mode of this degree will certainly need to be addressed. Even with no clear solution, a short-term remedy would be for academia and researchers to use Hurricane Otis as a case study to explore this failure mode of numerical weather prediction. A better understanding of what went wrong will certainly serve future forecasts well! For now, we all have to understand that we as forecasters are susceptible to errors, and the only way to address that is by remembering this: we will always be students eager to learn more... Learning never stops!
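To make that "probabilistic approach" idea a bit more concrete, here's a toy sketch in Python. The numbers are completely made up (they are not from the GFS, the Euro, or any real Otis data); the point is just how a forecaster reads an ensemble of differing model runs as a spread of possibilities rather than a single answer:

```python
# Hypothetical peak-wind forecasts (mph) for the same storm from different
# model runs. These values are invented for illustration only.
member_forecasts = [60, 65, 70, 70, 75, 85, 90, 150]

# The ensemble mean is the "consensus" number, but it hides disagreement.
mean = sum(member_forecasts) / len(member_forecasts)

# The spread (max minus min) is a rough measure of forecast uncertainty.
spread = max(member_forecasts) - min(member_forecasts)

# A simple probabilistic read: the fraction of members reaching
# hurricane strength (sustained winds of at least 74 mph).
p_hurricane = sum(1 for w in member_forecasts if w >= 74) / len(member_forecasts)

print(f"ensemble mean: {mean:.1f} mph")   # 83.1 mph
print(f"spread: {spread} mph")            # 90 mph
print(f"P(hurricane): {p_hurricane:.0%}") # 50%
```

Notice the lone 150-mph outlier: the mean barely moves, but the spread balloons. A huge spread is the model's way of admitting it doesn't know, and with Otis, even that admission fell short of reality.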
*Imagery Courtesy of the GOES-18 Satellite*