weather forecasting

Prediction of the weather through application of the principles of physics and meteorology.

Weather forecasting predicts atmospheric phenomena and changes on the Earth's surface caused by atmospheric conditions (snow and ice cover, storm tides, floods, etc.). Scientific weather forecasting relies on measurements of temperature, humidity, atmospheric pressure, wind speed and direction, and precipitation, together with empirical and statistical techniques and computer-based mathematical models.

* * *

Introduction

      Weather forecasting is the prediction of the weather through application of the principles of physics, supplemented by a variety of statistical and empirical techniques. In addition to predictions of atmospheric phenomena themselves, weather forecasting includes predictions of changes on the Earth's surface caused by atmospheric conditions—e.g., snow and ice cover, storm tides, and floods.

General considerations

Measurements and ideas as the basis for weather prediction
      Few other scientific enterprises depend on observations so vital, or affect so many people, as weather forecasting. From the days when early humans ventured from caves and other natural shelters, perceptive individuals in all likelihood became leaders by being able to detect nature's signs of impending snow, rain, or wind, indeed of any change in weather. With such information they must have enjoyed greater success in the search for food and safety, the major objectives of that time.

      In a sense, weather forecasting is still carried out in basically the same way as it was by the earliest humans—namely, by making observations and predicting changes. The modern tools used to measure temperature, pressure, wind, and humidity in the 21st century would certainly amaze them, and the results obviously are better. Yet, even the most sophisticated numerically calculated forecast made on a supercomputer requires a set of measurements of the condition of the atmosphere—an initial picture of temperature, wind, and other basic elements, somewhat comparable to that formed by our forebears when they looked out of their cave dwellings. The primeval approach entailed insights based on the accumulated experience of the perceptive observer, while the modern technique consists of solving equations. Although seemingly quite different, the two practices share underlying similarities. In each case the forecaster asks “What is?” in the sense of “What kind of weather prevails today?” and then seeks to determine how it will change in order to extrapolate what it will be.

      Because observations are so critical to weather prediction, an account of meteorological measurements and weather forecasting is a story in which ideas and technology are closely intertwined, with creative thinkers drawing new insights from available observations and pointing to the need for new or better measurements, and technology providing the means for making new observations and for processing the data derived from measurements. The basis for weather prediction started with the theories of the ancient Greek philosophers and continued with Renaissance scientists, the scientific revolution of the 17th and 18th centuries, and the theoretical models of 20th- and 21st-century atmospheric scientists and meteorologists. Likewise, it tells of the development of the “synoptic” idea—that of characterizing the weather over a large region at exactly the same time in order to organize information about prevailing conditions. In synoptic meteorology, simultaneous observations for a specific time are plotted on a map for a broad area whereby a general view of the weather in that region is gained. (The term synoptic is derived from the Greek word meaning “general or comprehensive view.”) The so-called synoptic weather map came to be the principal tool of 19th-century meteorologists and continues to be used today in weather stations and on television weather reports around the world.

      Since the mid-20th century, digital computers (computer) have made it possible to calculate changes in atmospheric conditions mathematically and objectively—i.e., in such a way that anyone can obtain the same result from the same initial conditions. The widespread adoption of numerical (numerical analysis) weather prediction models brought a whole new group of players—computer specialists and experts in numerical processing and statistics—to the scene to work with atmospheric scientists and meteorologists. Moreover, the enhanced capability to process and analyze weather data stimulated the long-standing interest of meteorologists in securing more observations of greater accuracy. Technological advances since the 1960s have led to a growing reliance on remote sensing, particularly the gathering of data with specially instrumented Earth-orbiting satellites. By the late 1980s, forecasts of weather were largely based on the determinations of numerical models integrated by high-speed supercomputers, though some shorter-range predictions, particularly those related to local thunderstorm activity, were still made by specialists directly interpreting radar and satellite measurements.

Practical applications of weather forecasting
      Systematic weather records were kept after instruments for measuring atmospheric conditions became available during the 17th century. Undoubtedly these early records were employed mainly by those engaged in agriculture (agricultural technology). Planting and harvesting obviously can be planned better and carried out more efficiently if long-term weather patterns can be estimated. In the United States, national weather services were first provided by the Army Signal Corps beginning in 1870. These operations were taken over by the Department of Agriculture in 1891. By the early 1900s free mail service and telephone were providing forecasts daily to millions of American farmers. The U.S. Weather Bureau established a Fruit-Frost (forecasting) Service during World War I, and by the 1920s radio broadcasts to agricultural interests were being made in most states.

      Weather forecasting became an important tool for aviation during the 1920s and '30s. Its application in this area gained in importance after Francis W. Reichelderfer was appointed chief of the U.S. Weather Bureau in 1939. Reichelderfer had previously modernized the navy's meteorological service and made it a model of support for naval aviation. During World War II the discovery of very strong wind currents at high altitudes (the jet streams, which can affect aircraft speed) and the general susceptibility of military operations in Europe to weather led to a special interest in weather forecasting.

      One of the most famous wartime forecasting problems was for Operation Overlord (Normandy Invasion), the invasion of the European mainland at Normandy by Allied forces. An unusually intense June storm brought high seas and gales to the French coast, but a moderation of the weather that was successfully predicted by Group Captain J.M. Stagg of the British forces (after consultation with both British and American forecasters) enabled General Dwight D. Eisenhower, supreme commander of the Allied Expeditionary Forces, to make his critical decision to invade on June 6, 1944.

      The second half of the 20th century saw unprecedented growth of commercial weather-forecasting firms in the United States and elsewhere. Marketing organizations and stores commonly hire weather-forecasting consultants to help with the timing of sales and promotions of products ranging from snow tires and roofing materials to summer clothes and resort vacations. Many oceangoing shipping (transportation) vessels as well as military ships use optimum ship routing forecasts to plan their routes in order to minimize lost time, potential damage, and fuel consumption in heavy seas. Similarly, airlines carefully consider atmospheric conditions when planning long-distance flights so as to avoid the strongest head winds and to ride with the strongest tail winds.

      International trading (international trade) of foodstuffs such as wheat, corn (maize), beans, sugar, cocoa, and coffee can be severely affected by weather news. For example, in 1975 a severe freeze in Brazil caused the price of coffee to increase substantially within just a few weeks, and in 1977 a freeze in Florida nearly doubled the price of frozen concentrated orange juice in a matter of days. Weather-forecasting organizations are thus frequently called upon by banks, commodity traders, and food companies to give them advance knowledge of the possibility of such sudden changes.

      The cost of all sorts of commodities and services, whether they are tents for outdoor events or plastic covers for the daily newspapers, can be reduced or eliminated if reliable information about possible precipitation can be obtained in advance.

      Forecasts must be quite precise for applications that are tailored to specific industries. Gas and electric utilities, for example, may require forecasts of temperature within one or two degrees a day ahead of time, or ski-resort operators may need predictions of nighttime relative humidity on the slopes within 5 to 10 percent in order to schedule snow making.

History of weather forecasting

Early measurements and ideas
      The Greek philosophers had much to say about meteorology, and many who subsequently engaged in weather forecasting no doubt made use of their ideas. Unfortunately, they probably made many bad forecasts, because Aristotle, who was the most influential, did not believe that wind is air in motion. He did believe, however, that west winds are cold because they blow from the sunset.

      The scientific study of meteorology did not develop until measuring instruments became available. Its beginning is commonly associated with the invention of the mercury barometer by Evangelista Torricelli (Torricelli, Evangelista), an Italian physicist-mathematician, in the mid-17th century and the nearly concurrent development of a reliable thermometer. (Galileo had constructed an elementary form of gas thermometer in 1607, but it was defective; the efforts of many others finally resulted in a reasonably accurate liquid-in-glass device.)

      A succession of notable achievements by chemists and physicists of the 17th and 18th centuries contributed significantly to meteorological research. The formulation of the laws of gas pressure, temperature, and density by Robert Boyle and Jacques-Alexandre-César Charles, the development of calculus by Isaac Newton and Gottfried Wilhelm Leibniz, the development of the law of partial pressures of mixed gases by John Dalton, and the formulation of the doctrine of latent heat (i.e., heat release by condensation or freezing) by Joseph Black are just a few of the major scientific breakthroughs of the period that made it possible to measure and better understand theretofore unknown aspects of the atmosphere and its behaviour. During the 19th century, all of these brilliant ideas began to produce results in terms of useful weather (weather map) forecasts.

The emergence of synoptic forecasting methods
Analysis of synoptic weather reports
      An observant person who has learned nature's signs can interpret the appearance of the sky, the wind, and other local effects and “foretell the weather.” A scientist can use instruments at one location to do so even more effectively. The modern approach to weather forecasting, however, can only be realized when many such observations are exchanged quickly by experts at various weather stations and entered on a synoptic weather map to depict the patterns of pressure, wind, temperature, clouds, and precipitation at a specific time. Such a rapid exchange of weather data became feasible with the development of the electric telegraph in 1837 by Samuel F.B. Morse of the United States. By 1849 Joseph Henry of the Smithsonian Institution in Washington, D.C., was plotting daily weather maps based on telegraphic reports, and in 1869 Cleveland Abbe at the Cincinnati Observatory began to provide regular weather forecasts using data received telegraphically.

      Synoptic weather maps resolved one of the great controversies of meteorology—namely, the rotary storm dispute. By the early decades of the 19th century, it was known that storms were associated with low barometric readings, but the relation of the winds to low-pressure systems, called cyclones, remained unrecognized. William Redfield, a self-taught meteorologist from Middletown, Conn., noticed the pattern of fallen trees after a New England hurricane and suggested in 1831 that the wind flow was a rotary counterclockwise circulation around the centre of lowest pressure. The American meteorologist James P. Espy (Espy, James Pollard) subsequently proposed in his Philosophy of Storms (1841) that air would flow toward the regions of lowest pressure and then would be forced upward, causing clouds and precipitation. Both Redfield and Espy proved to be right. The air does spin around the cyclone, as Redfield believed, while the layers close to the ground flow inward and upward as well. The net result is a rotational wind circulation that is slightly modified at the Earth's surface to produce inflow toward the storm centre, just as Espy had proposed. Further, the inflow is associated with clouds and precipitation in regions of low pressure, though that is not the only cause of clouds there.

      In Europe the writings of Heinrich Dove, the German scientist who directed the Prussian Meteorological Institute, greatly influenced views concerning wind behaviour in storms. Unlike the Americans, Dove did not focus on the pattern of the winds around the storm but rather on how the wind should change at one place as a storm passed. It was many years before his followers understood the complexity of the possible changes.

Establishment of weather-station networks and services
      Routine production of synoptic weather maps became possible after networks of stations were organized to take measurements and report them to some type of central observatory. As early as 1814, U.S. Army Medical Corps personnel were ordered to record weather (weather bureau) data at their posts; this activity was subsequently expanded and made more systematic. Actual weather-station networks were established in the United States by New York University, the Franklin Institute, and the Smithsonian Institution during the early decades of the 19th century.

      In Britain, James Glaisher organized a similar network, as did Christophorus H.D. Buys Ballot in The Netherlands. Other such networks of weather stations were developed near Vienna, Paris, and St. Petersburg.

      It was not long before national meteorological services were established on the Continent and in the United Kingdom. The first national weather service in the United States commenced operations in 1871, with responsibility assigned to the U.S. Army Signal Corps. The original purpose of the service was to provide storm warnings for the Atlantic and Gulf coasts and for the Great Lakes. Within the next few decades, national meteorological services were established in such countries as Japan, India, and Brazil. The importance of international cooperation in weather prognostication was recognized by the directors of such national services. By 1880 they had formed the International Meteorological Organization (IMO).

      The proliferation of weather-station networks linked by telegraphy made synoptic forecasting a reality by the close of the 19th century. Yet, the daily weather forecasts generated left much to be desired. Many errors occurred because predictions were largely based on the experience that each individual forecaster had accumulated over several years of practice, on vaguely formulated rules of thumb (e.g., of how pressure systems move from one region to another), and on associations that were poorly understood, if understood at all.

Progress during the early 20th century
      An important aspect of weather prediction is to calculate the atmospheric pressure pattern—the positions of the highs and lows and their changes. Modern research has shown that sea-level pressure patterns respond to the motions of the upper-atmospheric winds, with their narrow, fast-moving jet streams and with waves that propagate through the atmosphere more slowly than the air itself, so that air flows through the wave patterns.

      Frequent surprises and errors in estimating surface atmospheric pressure patterns undoubtedly caused 19th-century forecasters to seek information about the upper atmosphere for possible explanations. The British meteorologist Glaisher made a series of ascents by balloon during the 1860s, reaching an unprecedented height of nine kilometres. At about this time investigators on the Continent began using unmanned balloons to carry recording barographs, thermographs, and hygrographs to high altitudes. During the late 1890s meteorologists in both the United States and Europe used kites equipped with instruments to probe the atmosphere up to altitudes of about three kilometres. Notwithstanding these efforts, knowledge about the upper atmosphere remained very limited at the turn of the century. The situation was aggravated by the confusion created by observations from weather stations located on mountains or hilltops. Such observations often did not show what was expected, partly because so little was known about the upper atmosphere and partly because the mountains themselves affect measurements, producing results that are not representative of what would be found in the free atmosphere at the same altitude.

      Fortunately, a large enough number of scientists had already put forth ideas that would make it possible for weather forecasters to think three-dimensionally, even if sufficient meteorological measurements were lacking. Henrik Mohn, the first of a long line of highly creative Norwegian meteorologists, Wladimir Köppen, the noted German climatologist, and Max Margules, an influential Austrian meteorologist, all contributed to the view that mechanisms of the upper air generate the energy of storms.

      In 1911 William H. Dines (Dines, William Henry), a British meteorologist, published data that showed how the upper atmosphere compensates for the fact that the low-level winds carry air toward low-pressure centres. Dines recognized that the inflow near the ground is more or less balanced by a circulation upward and outward aloft. Indeed, for a cyclone to intensify, which would require a lowering of central pressure, the outflow must exceed the inflow; the surface winds can converge quite strongly toward the cyclone, but sufficient outflow aloft can produce falling pressure at the centre.
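
      The reasoning here can be put in terms of a simple mass budget. Surface pressure is essentially the weight of the overlying air column per unit area, so the central pressure of a cyclone falls only when upper-level outflow removes mass faster than low-level inflow supplies it. The following sketch, in Python and with invented, purely illustrative numbers, shows the arithmetic; it is not a model of any real storm.

```python
# Minimal sketch (illustrative numbers, not observed data): surface pressure is
# the weight of the air column per unit area, p = M * g / A, so the pressure at
# a cyclone's centre falls only when upper-level outflow removes more mass than
# low-level inflow supplies.

G = 9.81  # gravitational acceleration, m/s^2

def pressure_tendency(inflow_kg_per_s: float, outflow_kg_per_s: float,
                      area_m2: float) -> float:
    """Rate of change of surface pressure (Pa/s) over a column of given area."""
    net_mass_rate = inflow_kg_per_s - outflow_kg_per_s  # kg/s added to the column
    return G * net_mass_rate / area_m2

# Hypothetical cyclone 500 km across: outflow aloft exceeds inflow by 5e8 kg/s.
area = 3.14159 * (250_000.0) ** 2          # m^2
dp_dt = pressure_tendency(3.0e9, 3.5e9, area)
print(f"{dp_dt * 3600:.1f} Pa per hour")   # negative => central pressure falling
```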

      Meteorologists of the time were now aware that vertical circulations and upper-air phenomena were important, but they still had not determined how such knowledge could improve weather forecasting. Then, in 1919, the Norwegian meteorologist Jacob Bjerknes (Bjerknes, Jacob) introduced what has been referred to as the Norwegian cyclone model. This theory pulled together many earlier ideas and related the patterns of wind and weather to a low-pressure system that exhibited fronts (front)—which are rather sharp sloping boundaries between cold and warm air masses. Bjerknes pointed out the rainfall/snowfall patterns that are characteristically associated with the fronts in cyclones: the rain or snow occurs over large areas on the cold side of an advancing warm front poleward of a low-pressure centre. Here, the winds are from the lower latitudes, and the warm air, being light, glides up over a large region of cold air. Widespread, sloping clouds spread ahead of the cyclone; barometers fall as the storm approaches, and precipitation from the rising warm air falls through the cold air below. Where the cold air advances to the rear of the storm, squalls and showers mark the abrupt lifting of the warm air being displaced. Thus, the concept of fronts focused attention on the action at air mass boundaries. The Norwegian cyclone model could be called the frontal model, for the idea of warm air masses being lifted over cold air along their edges (fronts) became a major forecasting tool. The model not only emphasized the idea but it also showed how and where to apply it.

 In later work, Bjerknes and several other members of the so-called Bergen school of meteorology expanded the model to show that cyclones grow from weak disturbances on fronts, pass through a regular life cycle, and ultimately die by the inflow filling them. Both the Norwegian cyclone model and the associated life-cycle concept are still used today by weather forecasters.

      While Bjerknes and his Bergen colleagues refined the cyclone model, other Scandinavian meteorologists provided much of the theoretical basis for modern weather prediction. Foremost among them were Vilhelm Bjerknes, Jacob's father, and Carl-Gustaf Rossby. Their ideas helped make it possible to understand and carefully calculate the changes in atmospheric circulation and the motion of the upper-air waves that control the behaviour of cyclones.

Modern trends and developments
Upper-air observations by means of balloon-borne sounding equipment
      Once again technology provided the means with which to test the new scientific ideas and stimulate yet newer ones. During the late 1920s and '30s, several groups of investigators (those headed by Vilho Väisälä of Finland and Pavel Aleksandrovich Molchanov of the Soviet Union, for example) began using small radio transmitters with balloon-borne instruments, eliminating the need to recover the instruments and speeding up access to the upper-air data. These radiosondes (radiosonde), as they came to be called, gave rise to the upper-air observation networks that still exist today. Approximately 75 stations in the United States and more than 500 worldwide release, twice daily, balloons that reach heights of 30,000 metres or more. Observations of temperature and relative humidity at various pressures are radioed back to the station from which the balloons are released as they ascend at a predetermined rate. The balloons also are tracked by radar and Global Positioning System (GPS) satellites to ascertain the behaviour of winds from their drift.
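
      Because the balloon drifts with the air, the horizontal wind at its altitude can be estimated from successive position fixes. The sketch below, in Python and with hypothetical coordinates, illustrates the idea using a simple spherical-Earth approximation between two fixes; operational processing is considerably more careful.

```python
# Illustrative sketch: estimate the horizontal wind at a radiosonde's altitude
# from two successive position fixes, assuming the balloon drifts with the air.
import math

EARTH_RADIUS = 6.371e6  # metres

def wind_from_drift(lat1, lon1, lat2, lon2, dt_seconds):
    """Return (east, north) wind components in m/s from two fixes in degrees."""
    lat_mid = math.radians((lat1 + lat2) / 2.0)
    # Convert angular displacement to metres on the sphere.
    dnorth = math.radians(lat2 - lat1) * EARTH_RADIUS
    deast = math.radians(lon2 - lon1) * EARTH_RADIUS * math.cos(lat_mid)
    return deast / dt_seconds, dnorth / dt_seconds

# Hypothetical fixes 60 s apart near 45 degrees N.
u, v = wind_from_drift(45.000, -105.000, 45.005, -104.980, 60.0)
speed = math.hypot(u, v)
print(f"u = {u:.1f} m/s, v = {v:.1f} m/s, speed = {speed:.1f} m/s")
```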

      Forecasters are able to produce synoptic weather maps of the upper atmosphere twice each day on the basis of radiosonde observations. While new methods of upper-air measurement have been developed, the primary synoptic clock times for producing upper-air maps are still the radiosonde-observation times—namely, 0000 (midnight) and 1200 (noon) Greenwich Mean Time (GMT). Furthermore, modern computer-based forecasts use 0000 and 1200 GMT as the starting times from which they calculate the changes that are at the heart of modern forecasts. It is, in effect, the synoptic approach carried out in a different way, intimately linked to the radiosonde networks developed during the 1930s and '40s.

Application of radar
 As in many fields of endeavour, weather prediction experienced several breakthroughs during and immediately after World War II. The British began using microwave radar in the late 1930s to monitor enemy aircraft, but it was soon learned that radar gave excellent returns from raindrops at certain wavelengths (five to 10 centimetres). As a result it became possible to track and study the evolution of individual showers or thunderstorms, as well as to “see” the precipitation structure of larger storms. Radar imagery, for example, can reveal the rain bands (not the clouds) of a hurricane.

      Since its initial application in meteorological work, radar has grown as a forecaster's tool. Virtually all tornadoes and severe thunderstorms over the United States and in some other parts of the world are monitored by radar. Radar observations of the growth, motion, and characteristics of such storms provide clues as to their severity. Modern radar systems use the Doppler principle of frequency shift associated with movement toward or away from the radar transmitter/receiver to determine wind speeds as well as storm motions.
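
      The Doppler principle mentioned above can be stated compactly: for radar the two-way frequency shift is delta_f = 2 v f0 / c, where v is the component of target motion along the beam, f0 the transmitted frequency, and c the speed of light. A minimal illustration, assuming a hypothetical S-band radar and an invented frequency shift:

```python
# Minimal sketch of the Doppler relation described in the text: for radar, the
# two-way frequency shift is delta_f = 2 * v_r * f0 / c, so a measured shift
# gives the radial (toward/away) velocity of the precipitation targets.

C = 2.998e8  # speed of light, m/s

def radial_velocity(freq_shift_hz: float, transmit_freq_hz: float) -> float:
    """Radial velocity (m/s); positive when targets move toward the radar."""
    return C * freq_shift_hz / (2.0 * transmit_freq_hz)

# Hypothetical S-band radar (about 3 GHz, 10 cm wavelength) measuring a 400 Hz shift.
v_r = radial_velocity(400.0, 3.0e9)
print(f"radial velocity is about {v_r:.1f} m/s")  # roughly 20 m/s toward the radar
```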

      Using radar and other observations, the Japanese-American meteorologist Tetsuya T. Fujita (Fujita, T. Theodore) discovered many details of severe thunderstorm behaviour and of the structure of the violent local storms common to the Midwest region of the United States. His Doppler-radar analyses of winds revealed “microburst” gusts. These gusts cause the large wind shears (wind shear), or abrupt differences in wind speed and direction, associated with strong rains that have been responsible for some plane crashes.

      Other types of radar have been used increasingly for detecting winds continuously, as opposed to twice a day. These wind-profiling radar systems actually pick up signals “reflected” by clear air and so can function even when no clouds or rain are present.

Meteorological measurements from satellites (weather satellite) and aircraft
      A major breakthrough in meteorological measurement came with the launching of the first meteorological satellite, the TIROS (Television and Infrared Observation Satellite), by the United States on April 1, 1960. The impact of global quantitative views of temperature, cloud, and moisture distributions, as well as of surface properties (e.g., ice cover and soil moisture), has already been substantial. Furthermore, new ideas and new methods may very well make the 21st century the “age of the satellite” in weather prediction.

      Medium-range forecasts that provide information five to seven days in advance were impossible before satellites began making global observations—particularly over the ocean waters of the Southern Hemisphere—routinely available in real time. Global forecasting models developed at the U.S. National Center for Atmospheric Research (NCAR), the European Centre for Medium Range Weather Forecasts (ECMWF), and the U.S. National Meteorological Center (NMC) became the standard during the 1980s, making medium-range forecasting a reality. Global weather forecasting models are routinely run by national weather services around the world, including those of Japan, the United Kingdom, and Canada.

      Meteorological satellites travel in various orbits and carry a wide variety of sensors. They are of two principal types: the low-flying polar orbiter and the geostationary orbiter.

      Satellites of the first type circle the Earth at altitudes of 500–1,000 kilometres and in roughly north–south orbits. They appear overhead at any one locality twice a day and provide very high-resolution data because they fly close to the Earth. Such satellites are vitally necessary for much of Europe and other high-latitude locations because they orbit near the poles. These satellites do, however, suffer from one major limitation: they can provide a sampling of atmospheric conditions only twice daily.

      The geostationary (geostationary orbit) satellite is made to orbit the Earth along its equatorial plane at an altitude of about 36,000 kilometres. At that height the eastward motion of the satellite coincides exactly with the Earth's rotation, so that the satellite remains in one position above the Equator. Satellites of this type are able to provide an almost continuous view of a wide area. Because of this capability, geostationary satellites have yielded new information about the rapid changes that occur in thunderstorms, hurricanes, and certain types of fronts, making them invaluable to weather forecasting as well as meteorological research.
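
      The figure of about 36,000 kilometres follows from Kepler's third law: a circular orbit whose period equals one sidereal day has radius r = (GM T^2 / 4 pi^2)^(1/3). A quick check, assuming standard values for the Earth's gravitational parameter, sidereal day, and mean radius:

```python
# Quick check of the altitude quoted in the text, assuming standard constants: a
# circular orbit whose period equals one sidereal day (Kepler's third law) sits at
# r = (GM * T^2 / (4 * pi^2))**(1/3), roughly 36,000 km above the surface.
import math

GM = 3.986e14           # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0  # seconds
EARTH_RADIUS = 6.371e6  # metres

orbit_radius = (GM * SIDEREAL_DAY ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = (orbit_radius - EARTH_RADIUS) / 1000.0
print(f"geostationary altitude is about {altitude_km:,.0f} km")  # about 35,800 km
```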

      One weakness common to virtually all satellite-borne sensors and to some ground-based radars that use UHF/VHF waves is an inability to measure thin layers of the atmosphere. One such layer is the tropopause, the boundary between the relatively dry stratosphere and the more meteorologically active layer below. This is often the region of the jet streams. Important information about these kinds of high-speed air currents is obtained with sensors mounted on high-flying commercial aircraft and is routinely included in global weather analyses.

Numerical (numerical analysis) weather prediction (NWP) models
      Thinkers frequently advance ideas long before the technology exists to implement them. Few better examples exist than that of numerical weather forecasting. Instead of mental estimates or rules of thumb about the movement of storms, numerical forecasts are objective calculations of changes to the weather map based on sets of physics-based equations called models (mathematical model). Shortly after World War I a British scientist named Lewis F. Richardson (Richardson, Lewis Fry) completed such a forecast that he had been working on for years by tedious and difficult hand calculations. Although the forecast proved to be incorrect, Richardson's general approach was accepted decades later when the electronic computer became available. In fact, it has become the basis for nearly all present-day weather forecasts. Human forecasters may interpret or even modify the results of the computer models, but there are few forecasts that do not begin with numerical-model calculations of pressure, temperature, wind, and humidity for some future time.

      The method is closely related to the synoptic approach (see above). Data are collected rapidly by a Global Telecommunications System for 0000 or 1200 GMT to specify the initial conditions. The model equations are then solved for various segments of the weather map—often a global map—to calculate how much conditions are expected to change in a given time, say, 10 minutes. With such changes added to the initial conditions, a new map is generated (in the computer's memory) valid for 0010 or 1210 GMT. This map is treated as a new set of initial conditions, probably not quite as accurate as the measurements for 0000 and 1200 GMT but still very accurate. A new step is undertaken to generate a forecast for 0020 or 1220. This process is repeated step after step. In principle, the process could continue indefinitely. In practice, small errors creep into the calculations, and they accumulate. Eventually, the errors become so large by this cumulative process that there is no point in continuing.
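
      The marching procedure described above can be sketched in a few lines of Python. Everything physical is hidden in a placeholder tendency function (a real model evaluates the governing equations on a three-dimensional grid); the point is only the structure: each small step's output becomes the next step's input.

```python
# Schematic sketch of the stepping procedure described above. The model physics
# is reduced to a placeholder; a real NWP model evaluates tendencies of pressure,
# temperature, wind, and humidity on a three-dimensional grid.
from typing import Dict

State = Dict[str, float]  # grossly simplified "weather map"

def compute_tendencies(state: State) -> State:
    """Placeholder for the model equations: rate of change of each field."""
    return {name: 0.0 for name in state}   # a real model computes physics here

def step_forward(state: State, dt_seconds: float) -> State:
    """Advance the state by one small time step (forward Euler for simplicity)."""
    tend = compute_tendencies(state)
    return {name: value + tend[name] * dt_seconds for name, value in state.items()}

# Start from the 0000 GMT analysis and march in 10-minute steps out to +24 hours.
state = {"pressure_hpa": 1013.2, "temperature_c": 15.0, "wind_mps": 5.0}
dt = 600.0                                   # 10 minutes
for _ in range(int(24 * 3600 / dt)):         # 144 steps
    state = step_forward(state, dt)          # each output becomes the next input
print(state)                                 # the 24-hour "forecast" (unchanged here)
```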

      Global numerical forecasts are produced regularly (once or twice daily) at the ECMWF, the NMC, U.S. military facilities in Omaha, Neb., and Monterey, Calif., and centres in Tokyo, Moscow, London, Melbourne, and elsewhere. In addition, specialized numerical forecasts designed to predict more details of the weather are made for many smaller regions of the world by various national weather services, military organizations, and even a few private companies. Finally, research versions of numerical weather prediction models are constantly under review, development, and testing at NCAR and at the Goddard Space Flight Center in the United States and at universities in several nations.

      The capacity and complexity of numerical weather prediction models have increased dramatically since the mid-1940s when the earliest modeling work was done by the mathematician John von Neumann and the meteorologist Jule Charney at the Institute for Advanced Study in Princeton, N.J. Because of their pioneering work and the discovery of important simplifying relationships by other scientists (notably Arnt Eliassen of Norway and Reginald Sutcliffe of Britain), a joint U.S. Weather Bureau, Navy, and Air Force numerical forecasting unit was formed in 1954 in Washington, D.C. Referred to as JNWP, this unit was charged with producing operational numerical forecasts on a daily basis.

      The era of numerical weather prediction thus really began in the 1950s. As computing power grew, so did the complexity, speed, and capacity for detail of the models. And as new observations became available from such sources as Earth-orbiting satellites, radar systems, and drifting weather balloons, so too did methods sophisticated enough to ingest the data into the models as improved initial synoptic maps.

      Numerical forecasts have improved steadily over the years. The vast Global Weather Experiment, first conceived by Charney, was carried out by many nations in 1979 under the leadership of the World Meteorological Organization to demonstrate what high-quality global observations could do to improve forecasting by numerical prediction models. The results of that effort continue to effect further improvement.

      A relatively recent development has been the construction of mesoscale numerical prediction models. The prefix meso- means “middle” and here refers to middle-sized features in the atmosphere, between large cyclonic storms and individual clouds. Fronts, clusters of thunderstorms, sea breezes, hurricane bands, and jet streams are mesoscale structures, and their evolution and behaviour are crucial forecasting problems that only recently have been dealt with in numerical prediction. An example of such a model is the meso-eta model, which was developed by Serbian atmospheric scientist Fedor Mesinger. The meso-eta model is a finer-scale version of a regional numerical weather prediction model used by the National Weather Service in the United States. The national weather services of several countries produce numerical forecasts of considerable detail by means of such limited-area mesoscale models.

Principles and methodology of weather forecasting

Short-range forecasting
Objective predictions
      When people wait under a shelter for a downpour to end, they are making a very-short-range weather forecast. They are assuming, based on past experience, that such hard rain usually does not last very long. In short-term predictions the challenge for the forecaster is to improve on what the layperson can do. For years the type of situation represented in the above example proved particularly vexing for forecasters, but since the mid-1980s they have been developing a method called nowcasting to meet precisely this sort of challenge. In this method, radar and satellite observations of local atmospheric conditions are processed and displayed rapidly by computers to project weather several hours in advance. The U.S. National Oceanic and Atmospheric Administration operates a facility known as PROFS (Program for Regional Observing and Forecasting Services) in Boulder, Colo., specially equipped for nowcasting.

      Meteorologists can make somewhat longer-term forecasts (those for six, 12, 24, or even 48 hours) with considerable skill because they are able to measure and predict atmospheric conditions for large areas by computer. Using models that apply their accumulated expert knowledge quickly, accurately, and in a statistically valid form, meteorologists are now capable of making forecasts objectively. As a consequence, the same results are produced time after time from the same data inputs, with all analysis accomplished mathematically. Unlike the prognostications of the past made with subjective methods, objective forecasts are consistent and can be studied, reevaluated, and improved.

      Another technique for objective short-range forecasting is called MOS (for Model Output Statistics). Conceived by Harry R. Glahn and D.A. Lowry of the U.S. National Weather Service, this method involves the use of data relating to past weather phenomena and developments to extrapolate the values of certain weather elements, usually for a specific location and time period. It overcomes the weaknesses of numerical models by developing statistical relations between model forecasts and observed weather. These relations are then used to translate the model forecasts directly to specific weather forecasts. For example, a numerical model might not predict the occurrence of surface winds at all, and whatever winds it did predict might always be too strong. MOS relations can automatically correct for errors in wind speed and produce quite accurate forecasts of wind occurrence at a specific point, such as Heathrow Airport near London. As long as numerical weather prediction models are imperfect, there may be many uses for the MOS technique.
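
      A minimal MOS-style sketch, with invented numbers, might fit a simple linear correction between model-forecast wind and the wind actually observed at one station, then apply that relation to a new model forecast. Operational MOS equations use many predictors and years of training data.

```python
# Minimal MOS-style sketch on made-up numbers: fit a linear correction between a
# model's forecast wind speed and the wind actually observed at one station, then
# apply it to a new model forecast. Real MOS uses many predictors and long records.
import numpy as np

# Training data (hypothetical): this model consistently forecasts winds too strong.
model_wind = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])   # m/s, model forecasts
observed   = np.array([2.9, 4.2, 5.8, 7.1, 8.9, 10.2])     # m/s, station reports

# Least-squares fit: observed is approximately a * model + b
a, b = np.polyfit(model_wind, observed, 1)

def mos_wind(model_forecast: float) -> float:
    """Translate a raw model wind forecast into a calibrated station forecast."""
    return a * model_forecast + b

print(f"corrected forecast for a 9 m/s model wind: {mos_wind(9.0):.1f} m/s")
```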

Predictive skills and procedures
      Short-range weather forecasts generally tend to lose accuracy as forecasters attempt to look farther ahead in time. Predictive skill is greatest for periods of about 12 hours and is still quite substantial for 48-hour predictions. An increasingly important group of short-range forecasts are economically motivated. Their reliability is determined in the marketplace by the economic gains they produce (or the losses they avert).

      Weather warnings are a special kind of short-range forecast; the protection of human life is the forecaster's greatest challenge and source of pride. The first national weather forecasting service in the United States (the predecessor of the Weather Bureau) was in fact formed, in 1870, in response to the need for storm warnings on the Great Lakes. Increase Lapham of Milwaukee urged Congress to take action to reduce the loss of hundreds of lives incurred each year by Great Lakes shipping during the 1860s. The effectiveness of the warnings and other forecasts assured the future of the American public weather service.

      Weather warnings are issued by government and military organizations throughout the world for all kinds of threatening weather events: tropical storms variously called hurricanes, typhoons, or tropical cyclones, depending on location; great oceanic gales outside the tropics spanning hundreds of kilometres and at times packing winds comparable to those of tropical storms; and, on land, flash floods, high winds, fog, blizzards, ice, and snowstorms.

      A particular effort is made to warn of hail, lightning, and wind gusts associated with severe thunderstorms, sometimes called severe local storms (SELS) or simply severe weather. Forecasts and warnings also are made for tornadoes (tornado), those intense, rotating windstorms that represent the most violent end of the weather scale. Destruction of property and the risk of injury and death are extremely high in the path of a tornado, especially in the case of the largest systems (sometimes called maxi-tornadoes).

      Because tornadoes are so uniquely life-threatening and because they are so common in various regions of the United States, the National Weather Service operates a National Severe Storms Forecast Center (NSSFC) in Kansas City, Mo., where SELS forecasters survey the atmosphere for the conditions that can spawn tornadoes or severe thunderstorms. This group of SELS forecasters, assembled in 1952, monitors temperature and water vapour in an effort to identify the warm, moist regions where thunderstorms may form and studies maps of pressure and winds to find regions where the storms may organize into mesoscale structures. The group also monitors jet streams and dry air aloft that can combine to distort ordinary thunderstorms (thunderstorm) into rare rotating ones with tilted chimneys of upward rushing air that, because of the tilt, are unimpeded by heavy falling rain. These high-speed updrafts can quickly transport vast quantities of moisture to the cold upper regions of the storms, thereby promoting the formation of large hailstones. The hail and rain drag down air from aloft to complete a circuit of violent, cooperating updrafts and downdrafts.

      By correctly anticipating such conditions, SELS forecasters are able to provide time for the mobilization of special observing networks and personnel. If the storms actually develop, specific warnings are issued based on direct observations. This two-step process consists of the tornado or severe thunderstorm watch, which is the forecast prepared by the SELS forecaster, and the warning, which is usually released by a local observing facility. The watch may be issued when the skies are clear, and it usually covers a number of counties. It alerts the affected area to the threat but does not attempt to pinpoint which communities will be affected.

      By contrast, the warning is very specific to a locality and calls for immediate action. Radar of various types can be used to detect the large hailstones, the heavy load of raindrops, the relatively clear region of rapid updraft, and even the rotation in a tornado. These indicators, or an actual sighting, often trigger the tornado warning. In effect, a warning is a specific statement that danger is imminent, whereas a watch is a forecast that warnings may be necessary later in a given region.

Long-range forecasting
Techniques
      Extended-range, or long-range, weather forecasting has had a different history and a different approach from short- or medium-range forecasting. In most cases, it has not applied the synoptic method of going forward in time from a specific initial map. Instead, long-range forecasters have tended to use the climatological approach, often concerning themselves with the broad weather picture over a period of time rather than attempting to forecast day-to-day details.

      There is good reason to believe that the limit of day-to-day forecasts based on the “initial map” approach is about two weeks. Most long-range forecasts thus attempt to predict the departures from normal conditions for a given month or season. Such departures are called anomalies. A forecast might state that “spring temperatures in Minneapolis have a 65 percent probability of being above normal.” It would likely be based on a forecast anomaly map, which shows temperature anomaly patterns. The maps do not attempt to predict the weather for a particular day, but rather forecast trends (e.g., warmer than normal) for an extended amount of time, such as a season (e.g., spring).
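
      In code, an anomaly is simply the departure of a forecast or observed mean from the climatological normal, and a probabilistic statement like the one quoted above can be read off a set of outlooks as the fraction falling above normal. The numbers below are made up purely for illustration.

```python
# Small illustration with made-up numbers: a seasonal "anomaly" is the departure
# of a forecast (or observed) mean from the climatological normal, and a
# probabilistic outlook can be read off an ensemble as the fraction of members
# above that normal.
spring_normal_c = 8.5                                    # climatological normal, deg C
ensemble_means_c = [9.4, 8.1, 9.9, 8.8, 10.2, 9.0, 8.3]  # hypothetical outlooks

anomalies = [m - spring_normal_c for m in ensemble_means_c]
prob_above_normal = sum(a > 0 for a in anomalies) / len(anomalies)

print(f"mean anomaly: {sum(anomalies) / len(anomalies):+.1f} deg C")
print(f"probability of an above-normal spring: {prob_above_normal:.0%}")
```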

      The U.S. Weather Bureau began making experimental long-range forecasts just before the beginning of World War II, and its successor, the National Weather Service, continues to express such predictions in probabilistic terms, making it clear that they are subject to uncertainty. Verification shows that forecasts of temperature anomalies are more reliable than those of precipitation, that monthly forecasts are better than seasonal ones, and that winter months are predicted somewhat more accurately than other seasons.

      Prior to the 1980s the technique commonly used in long-range forecasting relied heavily on the analog method, in which groups of weather situations (maps) from previous years were compared to those of the current year to determine similarities with the atmosphere's present patterns (or “habits”). An association was then made between what had happened subsequently in those “similar” years and what was going to happen in the current year. Most of the techniques were quite subjective, and there were often disagreements of interpretation and consequently uneven quality and marginal reliability.
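
      The analog idea lends itself to a simple sketch: score each archived pressure pattern by its root-mean-square difference from the current pattern and take the closest years as analogs. The grid and values below are invented and far smaller than real weather maps; the subjectivity the text mentions entered in choosing what to compare and how to weight it.

```python
# Illustrative sketch of the analog idea: rank archived pressure patterns by their
# root-mean-square difference from the current pattern and use the closest years
# as "analogs". Grids and values here are invented; real maps are far larger.
import numpy as np

def rms_difference(map_a: np.ndarray, map_b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((map_a - map_b) ** 2)))

# Current sea-level pressure anomaly pattern on a toy 3 x 3 grid (hPa).
current = np.array([[ 4.0,  2.0, -1.0],
                    [ 3.0,  0.0, -2.0],
                    [ 1.0, -1.0, -3.0]])

archive = {
    1951: np.array([[ 3.5,  1.8, -0.5], [ 2.7,  0.2, -1.9], [ 0.8, -0.9, -2.6]]),
    1964: np.array([[-2.0, -1.0,  0.5], [-1.5,  0.0,  1.2], [ 0.3,  1.1,  2.4]]),
    1972: np.array([[ 5.2,  2.5, -1.4], [ 3.6, -0.3, -2.5], [ 1.4, -1.5, -3.8]]),
}

ranked = sorted(archive, key=lambda year: rms_difference(current, archive[year]))
print("best analog years, closest first:", ranked)
```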

      Persistence (warm summers follow warm springs) or anti-persistence (cold springs follow warm winters) also were used, even though, strictly speaking, most forecasters consider persistence forecasts “no-skill” forecasts. Yet, they too have had limited success.

Prospects for new procedures
      In the last quarter of the 20th century the approach to and prospects for long-range weather forecasting changed significantly. Stimulated by the work of Jerome Namias, who headed the U.S. Weather Bureau's Long-Range Forecast Division for 30 years, scientists began to look at ocean-surface temperature anomalies as a potential cause for the temperature anomalies of the atmosphere in succeeding seasons and at distant locations. At the same time, other American meteorologists, most notably John M. Wallace, showed how certain repetitive patterns of atmospheric flow were related to each other in different parts of the world. With satellite-based observations available, investigators began to study the El Niño phenomenon. Atmospheric scientists also revived the work of Gilbert Walker, an early 20th-century British climatologist who had studied the Southern Oscillation, an up-and-down fluctuation of atmospheric pressure in the Southern Hemisphere. Walker had investigated related air circulations (later called the Walker Circulation) that resulted from abnormally high pressures in Australia and low pressures in Argentina or vice versa.

      All of this led to new knowledge about how the occurrence of abnormally warm or cold ocean waters and of abnormally high or low atmospheric pressures could be interrelated in vast global connections. Knowledge about these links—El Niño/Southern Oscillation (ENSO)—and about the behaviour of parts of these vast systems enables forecasters to make better long-range predictions, at least in part, because the ENSO features change slowly and somewhat regularly. This approach of studying interconnections between the atmosphere and the ocean may represent the beginning of a revolutionary stage in long-range forecasting.

      Since the mid-1980s, interest has grown in applying numerical weather prediction models to long-range forecasting. In this case, the concern is not with the details of weather predicted 20 or 30 days in advance but rather with objectively predicted anomalies. The reliability of long-range forecasts, like that of short- and medium-range projections, has improved substantially in recent years. Yet, many significant problems remain unsolved, posing interesting challenges for all those engaged in the field.

John J. Cahir

