WHAT WEATHER FORECASTING CAN TEACH US ABOUT BIG DATA (HINT: PEOPLE MATTER)
Weather forecasting offers a view into real clouds (the fluffy things in the sky). It also sheds light on decision-making in the age of Big Data. So there is an 80% chance of value below.
In his book “The Signal and the Noise”, statistician, psephologist and blogger Nate Silver delves into the world of predictability. (A psephologist is someone who scientifically analyzes elections.) Silver rose to prominence with his political blog www.fivethirtyeight.com and, more importantly, by correctly predicting the outcome of the race in 49 of 50 states in the November 2008 presidential election (for an encore, he correctly predicted the outcome of all 35 US Senate races). Mr. Silver’s blog was acquired by the New York Times in 2010, where it resides today.
Silver’s book about the art and science of prediction is peppered with thought-provoking examples from the real world. We’ll piggyback on his story about weather forecasting because it deals with unmistakably Big Data, and because it comes with the neat graphic below, reproduced with permission from the Royal Society.
The image shows 50 different variations of weather forecasts for France and Germany on December 24, 1999. All of these outputs were run from the same software running on the same machines with the same assumptions about how weather behaves. Tiny, incremental changes to assumptions – maybe wind conditions in Stuttgart were adjusted a fraction of a percent, or barometric pressure in Hanover was a trifle higher in one simulation – yielded dramatically different outcomes charted in the ‘stamp maps’ below. In most forecasts, Paris sees a pleasant winter day. In several, it sees a very dangerous storm.
Tragically, the result was Cyclone Lothar, a storm that caused enormous damage. More than 100 people lost their lives, and property damage was estimated at well over $10 billion. While most forecasts of the storm were correct, some forecasters were criticized for having missed it.
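The sensitivity behind those 50 divergent stamp maps can be sketched with a toy chaotic system. The snippet below is not a weather model; it uses the logistic map, a textbook chaotic function, purely as an illustration. A 50-member “ensemble” starts from initial conditions that differ by less than one part in a million, and after a few dozen steps the members scatter across the whole range of possible outcomes, just as the Lothar runs did.

```python
# Toy illustration of ensemble sensitivity: the logistic map (a classic
# chaotic system standing in for a real weather model) amplifies tiny
# differences in starting conditions into wildly different outcomes.

def step(x, r=3.9):
    """One iteration of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

def run(x0, n=50):
    """Iterate the map n times from initial condition x0."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

# 50 ensemble members, each perturbed by a minuscule increment (1e-7),
# analogous to nudging wind or pressure readings a fraction of a percent.
members = [run(0.5 + i * 1e-7) for i in range(50)]

print(f"spread after 50 steps: {max(members) - min(members):.3f}")
```

Run it and the spread is large even though the starting states were nearly identical, which is exactly why forecasters run ensembles instead of trusting a single simulation.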
Silver teases out insights into how forecasts actually get made. Surprisingly perhaps, it’s not just a software thing. Humans make adjustments and the final call. Silver asks “What is it, exactly, that humans can do better than computers that can crunch numbers at seventy-seven teraFLOPS?”
It turns out that forecasters know where the flaws in the computer models are and adjust for those shortcomings in the output. A human may adjust a forecast to say that the fog at Acadia National Park in Maine will stick around a little longer that day if the wind is blowing a certain way. The seasoned forecaster coaxes out the best prediction “in the way that a skilled pool player can adjust to the dead spots on the table at his local bar”, says Silver. The National Weather Service, which tracks this, tells Silver that humans improve the accuracy of temperature forecasts by 10%, and precipitation forecasts by 25%, over computers alone.
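One simple way a human adjustment can beat the raw model is bias correction. The sketch below is hypothetical: the temperatures and the 2-degree “warm bias” are made up for illustration, not drawn from any real forecast data. A forecaster who knows the model runs warm in certain conditions subtracts that known bias, and the adjusted forecast scores a lower error than the raw output.

```python
# Hypothetical sketch of a human bias correction. All numbers are
# invented for illustration; the point is the technique, not the data.

model_forecast = [54.0, 51.0, 57.0, 53.0]   # raw model output (deg F)
observed       = [52.0, 49.0, 55.0, 51.5]   # what actually happened

def mae(forecast, actual):
    """Mean absolute error between forecasts and observations."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

# The human adjustment: subtract a known 2-degree warm bias.
adjusted = [t - 2.0 for t in model_forecast]

print(f"raw model MAE:      {mae(model_forecast, observed):.2f}")
print(f"human-adjusted MAE: {mae(adjusted, observed):.2f}")
```

In this toy case the adjusted forecast's error drops well below the raw model's, a miniature version of the 10% and 25% improvements the National Weather Service reports.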
No skilled meteorologist would want to trade in the teraflop-churning servers and go it alone. But forecasters will continue to leverage their experience and knowledge to improve what the computer delivers.
The lessons here for companies entering the age of Big Data could not be clearer. Yes, systems and solutions must be put in place to store, process and make sense of the enormous mass of data coming at us from all directions, to tease out patterns and detect new opportunities to deliver value. But all of it serves a decision-making process that is still human.
Practically speaking, in the world of business-to-business relationships in the Networked Economy, this means that companies must complement Big Data investments with deep expertise and experience in sourcing, procurement, sales and marketing, so that someone is around to maneuver around the “dead spots on the pool table”.
In the end, Big Data may very well be about having humans help make computer decision-making better – not the other way around. Who knew?