Weather Computer Models
Weather computer models are programs run on supercomputers to forecast the weather. These models simulate the atmosphere by combining mathematical representations of physics (read: math) with an initialization step: the program is "started" from the currently observed conditions across Earth. Those conditions are then extrapolated forward through time and space using the physics equations. The end results are forecast temperatures, winds, cloud cover, precipitation, and so on.
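To make that extrapolation idea concrete, here is a toy sketch of stepping observed temperatures forward in time with a single physics equation (wind carrying temperature downstream). This is a deliberate simplification with made-up numbers; real NWP systems solve far more complete equations with much more sophisticated numerics.

```python
def step_temperature(temps, wind_ms, dx_m, dt_s):
    """Advance a row of temperatures one time step using simple upwind advection."""
    new = temps[:]
    for i in range(1, len(temps)):
        # dT/dt = -u * dT/dx  (temperature carried along by the wind)
        new[i] = temps[i] - wind_ms * dt_s / dx_m * (temps[i] - temps[i - 1])
    return new

# "Initialize" with observed temperatures (deg C) at grid points 100 km apart,
# then extrapolate one hour forward under a steady 10 m/s wind.
obs = [20.0, 18.0, 15.0, 12.0, 10.0]
forecast = step_temperature(obs, wind_ms=10.0, dx_m=100_000, dt_s=3600)
```

A real model repeats a step like this thousands of times, for many variables at once, over a three-dimensional grid covering the domain.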
Models can be designed to forecast the overall weather, or they can specialize. For example, you can have a hurricane model that forecasts a very small area around the hurricane at very high resolution to pick up on subtle changes. Another might be a global model at coarse resolution designed to capture the overall synoptic pattern, or a high-resolution model over a smaller domain, such as the lower 48 US states, tuned toward short-term mesoscale patterns.
There are many models, all with different configurations, and meteorologists look them over to help guide a public weather forecast. A computer model is just one part of what a meteorologist considers when making a forecast, along with surface and upper-air observations, synoptic pattern recognition, satellite and radar data, and their own local climate knowledge, to name a few.
To make the best use of model guidance, a meteorologist should have a good understanding of how each model is designed, its strengths and weaknesses, and its biases. Each model forecasts the weather slightly differently. One reason is that there are things we know about the atmosphere and things we only think we know. By running forecasts under these different assumptions, we hopefully capture most of the possible outcomes in the various output we analyze.
Biases & Shortcomings
There are also computing limitations. Numerical Weather Prediction (NWP) is usually done by governments or private companies, but not everyone can afford a supercomputer, and not all supercomputers are made equal. To compensate, corners are cut to make the model run faster. A model run can take several hours, and you can speed it up by decreasing the resolution, shortening the forecast period, shrinking the area covered by the forecast (the domain), using simpler math, and so on. These shortcuts make the program more efficient to run, but they can also decrease the accuracy of the output.
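The resolution shortcut in particular buys a lot of speed, because cost does not scale linearly with grid spacing. A rough back-of-the-envelope illustration (the exponents here are a simplification; real scaling depends on the model's numerics and vertical levels):

```python
# Halving the grid spacing doubles the points in each horizontal
# direction AND roughly halves the allowed time step, so compute cost
# grows about 8x for every halving of grid spacing.

def relative_cost(dx_km, base_dx_km=25.0):
    """Cost of a run at dx_km grid spacing relative to one at base_dx_km."""
    ratio = base_dx_km / dx_km
    return ratio ** 3  # two horizontal dimensions plus a finer time step

for dx in (25.0, 12.5, 3.0):
    print(f"{dx:5.1f} km grid -> ~{relative_cost(dx):.0f}x the compute")
```

This is why a 3 km model like a convection-allowing run is typically confined to a small domain and a short forecast period, while global models settle for coarser grids.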
These hardware restrictions and software shortcuts create biases in the model that meteorologists need to know and account for when looking at the output. For example, the Canadian GEM was notorious for excessively cold low temperatures, especially in the western United States. However, I recently noticed this has vastly improved, so perhaps a software update corrected the bias. Another bias might be that a model consistently hooks a storm to the left over the Upper Midwest in wintertime. These biases can be seasonal, regional, or even pattern-specific, and it's an art to really understand them all.
By comparing actual observations against forecasts after the fact, we can determine which models are more accurate in specific regions and situations. From this we can learn what the biases are, how problems in the model might be fixed, and which models we should lean on when making a forecast.
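The simplest version of that verification step is computing a mean bias: average the difference between forecast and observed values. A minimal sketch with invented numbers (real verification uses large archives of past runs and many more metrics than this):

```python
# Compare past forecasts to what actually happened and compute a mean
# bias. A persistent negative bias would flag a model that runs too
# cold, like the GEM example above.

def mean_bias(forecasts, observations):
    """Average of (forecast - observed); negative means a cold bias."""
    errors = [f - o for f, o in zip(forecasts, observations)]
    return sum(errors) / len(errors)

# Hypothetical overnight lows (deg C): model output vs. what was observed.
model_lows = [-5.0, -8.0, -3.0, -10.0]
obs_lows = [-2.0, -6.0, -1.0, -7.0]
bias = mean_bias(model_lows, obs_lows)  # negative -> model forecast too cold
```

Tracked over a season and split by region, a running bias like this is exactly how forecasters learn, say, that one model hooks storms left or runs lows too cold.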
Currently there is a short list of primary models that most meteorologists go to worldwide. In the weather-model arms race, the European ECMWF is #1, the British UKMET (also known as the Unified Model) is #2, and the United States' GFS is expected to move up to #3 in March 2019 with its version 15 (FV3) update. That doesn't mean you should always use the ECMWF. It might mean you should take a look at it, but it's always best to go over all the information you can and make an informed decision. On a localized and short-term level, you might be better off looking at a high-resolution model like the HRRR or HOP-WRF. These models run over smaller domains, so they can afford higher resolution and more frequent updates, which improves their value.
There are also specific things you might be looking for that one model handles better than another. For example, if you have already established from synoptic models that a severe weather threat exists, you might move to higher-resolution, short-term models like the HRRR for a more accurate depiction of when and where a supercell might initiate.
To learn more about specific models, why and when one model might be better than another, and other details about weather forecast models, continue reading with the links below.
Next » Understand Individual Models