Hurricane Sandy: Big Data Predicted Big Power Outages
What can be learned for future weather events? For starters, simulations must happen quickly and no single forecasting model will do.
Hurricane Sandy, followed by the November nor'easter, delivered a one-two punch that rivals the Katharine Hepburn hurricane of 1938 and the 1821 Norfolk-Long Island storm that temporarily created North and South Manhattan Islands. These back-to-back storms provide a silver lining: a chance to showcase Big Data Analytics, coupled with catastrophe models, in forecasting events, assessing their impacts, and mitigating the effects of future events.
Before getting into the technical side of Big Data Analytics, I must first note that the current level of pain experienced by the folks without power on Long Island is worse than it should have been. My research collaborator Chuck Watson of Kinetic Analysis Corporation did a pilot study back in 2006 for the Long Island Power Authority (LIPA), consisting of simulations using a hypothetical storm quite similar to Hurricane Sandy. His results showed the vulnerability of LIPA's grid and indicated a prolonged recovery period. LIPA's response was dismissive: it claimed that the outages were overestimated and its power-restoration capability underestimated. LIPA claimed that it would take no more than 10 days to restore power! Here we are six years later, and 150,000 LIPA customers are without power two weeks after landfall. Chuck notes that some customers may not have power back before Thanksgiving.
We would hope that decision makers could learn the lessons of previous disasters and take advantage of analytics that point the way to mitigation. Power outages after Hurricane Hugo (1989) dragged on for weeks, so it is not as though historical precedents were lacking. Validation is an essential aspect of hurricane modeling, and professionals in the hazard business do not make forecasts to feed the hype machine. We are happy to discuss the assumptions of the models and their implementation; we are very disappointed when recommendations are dismissed merely because the deciders do not like the results -- they ignore them at their own peril.
Getting back to the analytics, a prerequisite for hurricane modeling is the capability to deal with massive databases integrated with geographic information systems. The databases cover atmospheric conditions (current and recent wind speeds, pressures, temperatures), ocean temperatures, terrain elevation, and land coverage for the hazard simulation, plus the exposures (building types, heights, values, and so forth). The atmospheric databases require continual updating as new data arrive from hurricane hunter aircraft and satellite reconnaissance.
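To make that concrete, here is a minimal Python sketch of the kind of query such a system runs constantly: given a fresh storm-center fix from an advisory, find the exposures inside the wind swath. All names, coordinates, and dollar values below are hypothetical placeholders, not real LIPA or NHC data.

```python
import math
from dataclasses import dataclass

# Hypothetical exposure records (building stock, grid assets).
# All names and values are illustrative, not real LIPA or NHC data.
@dataclass
class Exposure:
    name: str
    lat: float
    lon: float
    value_usd: float

EXPOSURES = [
    Exposure("Substation A", 40.75, -73.20, 25e6),
    Exposure("Substation B", 40.90, -72.70, 18e6),
    Exposure("Feeder yard C", 40.65, -73.55, 7e6),
]

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def exposures_in_swath(center_lat, center_lon, radius_miles):
    """Return (exposure, distance) pairs inside the wind-swath radius."""
    return [(e, d) for e in EXPOSURES
            if (d := haversine_miles(center_lat, center_lon, e.lat, e.lon)) <= radius_miles]

# A new advisory arrives: storm center plus hurricane-force wind radius.
for exp, dist in exposures_in_swath(40.6, -73.3, 60.0):
    print(f"{exp.name}: {dist:.0f} mi from center, ${exp.value_usd:,.0f} exposed")
```

A production system would run this spatial join against millions of parcels in a GIS, but the shape of the computation is the same.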
Another requirement is the computing firepower to perform numerical simulations in a timely fashion. The National Hurricane Center and the media are accustomed to a six-hour schedule of position updates and revised forecasts, which dictates that the track forecast incorporating updated conditions be completed before the next forecast announcement. There is also something to be said for simple, quick forecasting models that could refresh at intervals shorter than six hours; the six-hour window is awkward for fast-moving storms (Sandy was clipping along at around 30 mph at times).
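For flavor, here is a minimal sketch of the simple-and-quick variety: pure persistence, extrapolating the most recent motion vector and refreshing hourly rather than waiting for the six-hour cycle. The two center fixes are invented, chosen only to mimic a roughly 30 mph, Sandy-like forward speed; a real quick-look model would feed on live advisory data.

```python
# Two recent center fixes (time in hours, lat, lon) -- illustrative values
# only, chosen to give a Sandy-like forward speed of roughly 30 mph.
t0, lat0, lon0 = 0.0, 37.5, -71.5
t1, lat1, lon1 = 3.0, 38.6, -72.3

# Persistence: assume the recent motion vector continues unchanged.
dlat_per_hr = (lat1 - lat0) / (t1 - t0)
dlon_per_hr = (lon1 - lon0) / (t1 - t0)

# Refresh every hour instead of waiting for the six-hour advisory cycle.
for h in range(1, 13):
    lat = lat1 + dlat_per_hr * h
    lon = lon1 + dlon_per_hr * h
    print(f"t+{h:2d}h  forecast center: {lat:.2f}N {abs(lon):.2f}W")
```

Persistence has no physics in it and degrades quickly, but as an hourly sanity check between full model runs it costs essentially nothing.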
How did track forecasting play out for Hurricane Sandy? Sandy was identified by the National Hurricane Center as a tropical depression on October 22. It subsequently walloped the Caribbean as a hurricane, causing at least 50 deaths and considerable pain to already devastated Haiti. The storm weakened over the mountains of eastern Cuba and then strengthened again while straddling the Gulf Stream. The U.S. media began to get excited when the ECMWF model (from the European Centre for Medium-Range Weather Forecasts) showed a track that could make landfall in the mid-Atlantic states. Such a track is unusual, since historically these hurricanes tend to re-curve to the northeast and become fish storms. The ECMWF-predicted landfall near New York City was five days out, well beyond the reliable forecast window (five-day track errors run at least 500 miles). If the ECMWF track turned out to be correct and nailed the landfall location, luck would be partly responsible. Moreover, when the collection of track forecast models is in disagreement, as it was at that point, it is ludicrous to bet on one specific model. One must of course stay vigilant and pay attention to subsequent track updates, but the evacuation decision for coastal and low-lying regions of the mid-Atlantic and New England states could wait. Some news media folks have a tendency to pay extra attention to the scariest scenarios.
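When the guidance models disagree, one crude way to quantify the disagreement is the spread of the ensemble, for example the mean pairwise great-circle distance among the models' five-day positions. The sketch below uses fabricated positions purely for illustration; the actual October 2012 guidance values differ.

```python
import math
from itertools import combinations

# Hypothetical five-day forecast positions (lat, lon) from several track
# models; values are invented for illustration, not actual 2012 guidance.
five_day_positions = {
    "ECMWF": (40.5, -74.0),   # landfall near New York City
    "GFS":   (38.0, -66.0),   # out to sea (a "fish storm")
    "GFDL":  (39.5, -74.5),
    "CLP5":  (37.0, -60.0),   # climatology/persistence re-curves northeast
}

def gc_miles(p, q):
    """Great-circle distance in statute miles between (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = p, q
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 3958.8 * math.asin(math.sqrt(a))

# Mean pairwise separation: one number summarizing ensemble disagreement.
pairs = list(combinations(five_day_positions.values(), 2))
spread = sum(gc_miles(p, q) for p, q in pairs) / len(pairs)
print(f"Mean pairwise five-day spread: {spread:.0f} miles")
```

A spread of several hundred miles at day five is a loud warning that no single track deserves to be treated as the forecast.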
There is a slew of forecasting models for tracks. Also of interest, but much harder to forecast, is storm intensity; the sudden strengthening, weakening, and eye-wall collapse and reformation of hurricanes are less well understood than tracks. The baseline, no-skill track forecast model is known as CLIPER (CLImatology and PERsistence), a statistical model based on the historical record that uses minimal inputs for prediction (date, position, intensity). Other track models that show up on forecast maps include GFS, GFDL, UKMET, BAMM, and LBAR. Having the various forecast tracks available from multiple media sources (the Weather Channel, NHC bulletins, weather websites) helps educate the general public and demonstrates the difficulty of forecasting.
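The climatology-plus-persistence idea is simple enough to sketch in a few lines: average the 24-hour displacements of historical storms that passed near the current position and date, then blend that with the storm's own recent motion. Everything below, including the tiny historical table and the 50/50 blend weight, is fabricated for illustration; the real CLIPER fits regression coefficients to decades of track data.

```python
# Toy CLIPER-style forecast: blend climatology (historical analogs) with
# persistence (the storm's own recent motion). All data are fabricated.

# Historical analogs: (lat, lon, day_of_year, dlat_24h, dlon_24h)
HISTORY = [
    (33.0, -75.0, 295, +4.0, +6.0),   # typical re-curve to the northeast
    (34.5, -74.0, 300, +4.5, +5.0),
    (32.5, -76.5, 290, +3.5, +7.0),
]

def climatology_24h(lat, lon, doy, radius_deg=3.0, window_days=15):
    """Average 24-h displacement of analogs near this position and date."""
    picks = [(dla, dlo) for (hla, hlo, hdoy, dla, dlo) in HISTORY
             if abs(hla - lat) <= radius_deg and abs(hlo - lon) <= radius_deg
             and abs(hdoy - doy) <= window_days]
    if not picks:
        return 0.0, 0.0
    n = len(picks)
    return sum(d for d, _ in picks) / n, sum(d for _, d in picks) / n

# Current fix and recent 24-h motion (persistence), also illustrative.
lat, lon, doy = 33.5, -75.5, 298
pers_dlat, pers_dlon = +2.0, -1.0   # Sandy-like: heading north-northwest

w = 0.5   # blend weight; the real CLIPER fits such weights by regression
clim_dlat, clim_dlon = climatology_24h(lat, lon, doy)
fc_lat = lat + w * clim_dlat + (1 - w) * pers_dlat
fc_lon = lon + w * clim_dlon + (1 - w) * pers_dlon
print(f"24-h forecast: {fc_lat:.1f}N {abs(fc_lon):.1f}W")
```

Note how the climatology term drags this hypothetical forecast back toward the northeast re-curve; that built-in bias is exactly why a CLIPER-type model fails on an atypical storm like Sandy.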
How did the track models perform for Hurricane Sandy? The statistical forecast model CLP5 (similar to CLIPER) was awful, reflecting the fact that most storms like Sandy re-curve to the northeast and spare the northeastern U.S. ECMWF did fairly well for both Sandy and the nor'easter, though GFDL was not bad either. The attached figure (kindly provided by C. Watson) shows the track map for October 25, with several viable track models forecasting tracks all over the place!
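Verification of this sort boils down to measuring the great-circle distance between each model's forecast position and the verified position at the same lead time. A sketch with stand-in numbers rather than the actual 2012 verification data:

```python
import math

def gc_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 3958.8 * math.asin(math.sqrt(a))

# Hypothetical 72-hour forecast positions and the verified position; the
# numbers are illustrative stand-ins, not actual October 2012 verification.
verified = (39.4, -74.4)   # roughly Sandy's New Jersey landfall
forecasts = {
    "ECMWF": (39.8, -74.0),
    "GFDL":  (40.3, -73.2),
    "CLP5":  (36.5, -63.0),   # the no-skill baseline heads out to sea
}

# Rank the models by 72-hour track error, best first.
for model, (lat, lon) in sorted(forecasts.items(),
                                key=lambda kv: gc_miles(*kv[1], *verified)):
    err = gc_miles(lat, lon, *verified)
    print(f"{model:6s} 72-h track error: {err:5.0f} miles")
```

Run over many storms and many lead times, a table like this is how forecasters decide which guidance earns a place in the consensus.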
There has been some silly media coverage suggesting that the good performance of ECMWF (at a few specific time points) reflects a downtrodden state of U.S. forecasting efforts. Searching for a Holy Grail single best model is misguided; complex phenomena require multi-pronged research attacks. A track model that does well for part of the lifetime of a storm can really tank for another part of it. Evaluating model predictions after an event is a necessity if we are to learn from it, and since we cannot tell in advance which track models will work best, a sustained research effort is required. We are also heavily involved in damage assessment and utility restoration, and I hope to comment on these application areas from an analytics perspective in future columns.
Dr. Mark E. Johnson is Professor of Statistics at the University of Central Florida in Orlando. He is a Fellow of the American Statistical Association, an elected member of the International Statistical Institute, and a Chartered Statistician with the Royal Statistical Society. Mark does extensive consulting in the area of catastrophic risks (especially hurricanes) and is regularly retained as an expert witness in legal cases.