William H. White
ECONOMISTS have long been trying to provide the policymaker with guidance on what kinds of economic policy to adopt; recently they have been attempting to provide more precise guidance on how strong the policy measures should be. For success in economic planning in the developing countries and in minimizing unemployment and maintaining reasonable stability in the rate of economic growth in developed countries—and for other purposes as well—this is potentially an important advance. The means of providing this advice are based on the derivation of mathematical formulas, analogous to those expressing the laws of physics. These formulas, or “models,” represent the operation of the most important economic cause-and-effect relationships—the laws (or rules) of economic behavior—and show the chains of reactions which are initiated by changes in particular causal factors. Such econometric models are relatively new and are constantly being refined.
Success in the development of such mathematical models depends on the ability to measure the strength of the current influence of the various causal factors. Because it is patterns of human behavior rather than mechanical laws of physics that must be measured, the economic laws will never be known with the accuracy of the laws of physics. Nevertheless, the prospect now exists for improving the guidance offered to economic policymakers beyond the guidance provided by their own experience or the judgments of premodel era economists.
What is an Econometric Model?
Although it may help those who have a knowledge of physics to think of an econometric model as resembling one of the laws of physics (e.g., Boyle’s law), econometrics differs from physics in two ways. (1) It is impossible (with some exceptions) to conduct controlled laboratory experiments in which different amounts of the causal factors of special interest can be introduced, while the strength of all other factors is held unchanged, so that the effect of each factor can be directly measured. (2) The laws of classical physics are known with certainty: for given amounts of cause, the same effect will always be produced. This rigidity of physical laws reflects the mechanical nature of the physical relationships. In contrast, economic “laws” involve the rather flexible reactions of human behavior to a cause. Here the relationship between cause and effect can be neither rigid nor precise: the laws of behavior can therefore be verified only in an approximate sense; they are probabilities rather than certainties. They will tend to be accurate on the average over time (unless the passage of time has caused a change in the strength of the human reactions), but they will not be completely accurate at any single time.1 Where the inaccuracy found proves to be sufficiently small, the information derived from the econometric model will be useful; often, however, it will have to be considered merely as a best guess that carries with it a large margin of error, and frequently the margin of error will be so large that the econometrician will not issue any finding at all for the use of the policymaker or even as a guide to further econometric research.
Discovering the Model Without Benefit of Laboratory Experiment
The econometrician usually meets the first disadvantage in the discovery of economic laws—the inability to conduct controlled laboratory experiments—through the use of series of statistical data on the likely candidates for causal factors, collected over time.
In a primitive way, this sort of extraction of economic cause-and-effect relationships was used even before the development of econometrics. If, for example, it was observed that after a 20 per cent reduction of the price of bread, consumers began spending 10 per cent less for bread, the economist would offer as his best-guess estimate that any change in the price of bread would affect the amount of money spent for bread in the same direction (less money spent when the price was cut, more when the price was raised) and by an amount which was roughly half the amount of the change in the price. The confidence with which such a conclusion could be reached would be improved, of course, if later on the price returned to its original level and it was observed that the amount of money spent also rose to roughly its original level.
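For readers who wish to see this rule of thumb spelled out, here is a short sketch in Python (not part of the original article; the figures are those of the bread example above). It also makes explicit a point left implicit in the text: since spending equals price times quantity, a 20 per cent price cut accompanied by a 10 per cent fall in spending implies that the quantity bought actually rose.

```python
# Illustrative sketch of the pre-econometric bread example.
# Observation: a 20% price cut is followed by a 10% fall in total spending on bread.
price_change = -0.20
spending_change = -0.10

# Spending = price x quantity, so the implied change in the quantity bought is:
quantity_change = (1 + spending_change) / (1 + price_change) - 1
print(f"implied quantity change: {quantity_change:+.1%}")  # +12.5% more bread bought

# The economist's best-guess rule: spending moves in the same direction as
# price, by roughly half the size of the price change.
def predicted_spending_change(p_change, ratio=0.5):
    return ratio * p_change

print(f"predicted for a 10% price rise: {predicted_spending_change(0.10):+.1%}")  # +5.0%
```

The confirming observation mentioned in the text (price returning to its original level, spending returning with it) is simply the same rule applied in reverse.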
Usually, however, even the pre-econometrician had to try to deal with more complex relationships than this. For example, the price of bread might fall at a time when personal incomes were also falling, so that the economist had to try to distinguish the depressing effect of the price decline on the amount spent from the depressing effect presumably exerted by the decline in the consumer’s spendable income. If at some time either the price of bread or the income level differed from its initial value while the other factor did not, then the pre-econometrician still had a chance to distinguish the influence of the two causal factors, consumer income level and price, on the amount of money spent for bread. Ordinarily, however, he could not count on having measured the influences of these two factors accurately, because the influence of other factors which he could not take into account (the price of alternative food products such as potatoes, changes in the average age of the population or in its concern about being overweight) would have exerted differing influences on the total spent at the three different times of observation. In more general terms, allowance had to be made for the fact that even the true strengths of causal influences are—as described earlier—only probable relationships, valid on the average but not at any particular point of time. This meant that any inference drawn from comparing the sizes of causes with the sizes of their apparent effects on behavior at only a few points of time was very likely to yield estimates of the strength of effects which misrepresented the true (i.e., true on average) strength of effect.
With several sets of three observation periods—each set containing three observations of the two factors (bread price, consumer income) and of the behavior which they had influenced (spending for bread)—the economist might have been able to find such an average value for the influence of each of the two factors on the third; but these results are likely to be very difficult to obtain, and run the risk of being distorted by subjective judgment or other influences. Of course, if the several estimates for each of the two factors turned out to be close to each other, the method could be accepted, but such good results cannot be routinely expected. Also, as the number of factors included increases, the procedure is less feasible and reliable. Moreover, it may then become necessary to allow for “feedbacks”—the elaborate interconnections among economic events which make the size of some causal factors in part the result of the size of other causal factors or even of the size of the very effect under study. And the problems increase further when it is necessary to compare the results of alternative “models” which use different sets of causal factors to see which gives the best results.
To be able to derive results that are at all worthy of serious consideration, the economist has had to turn to the econometrician, a specialist in applying the mathematical techniques developed by the statistician. With these techniques he can quickly extract estimates of the influence of the various relevant causal factors which are a sort of average of the influences that would have been derived from separate sets of observations. Moreover, this procedure yields objective measurements of how closely the estimated effects of the movements of the several causal factors add up to the actual, observed movements over time of the behavior under study (the spending for bread). If, for example, the model derives estimated values for the amount of expenditure on bread which are reasonably close to the expenditures actually observed during each of the separate months, quarters, or years which constitute the separate observations, the model can be judged a success.
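The statistician's technique referred to here is least-squares regression. The sketch below (in Python, with invented, noise-free data rather than any real bread statistics) shows the mechanics: from a run of observations of spending, price, and income, the normal equations are solved to recover the "average" influence of each causal factor.

```python
# A minimal least-squares sketch: recover the influence of price and income
# on spending from a set of joint observations. Data are hypothetical and
# constructed so that the true influences are known (50 + 0.5*price + 0.2*income).

def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

prices  = [100, 95, 90, 105, 110, 98]     # hypothetical price index, six periods
incomes = [200, 210, 190, 220, 205, 215]  # hypothetical income index
spend   = [50 + 0.5 * p + 0.2 * m for p, m in zip(prices, incomes)]

X = [[1.0, p, m] for p, m in zip(prices, incomes)]  # constant, price, income
print([round(c, 3) for c in ols(X, spend)])  # recovers [50.0, 0.5, 0.2]
```

With real data the fit would not be exact, and the gap between fitted and observed spending in each period supplies exactly the objective measure of closeness described in the paragraph above.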
Uses of an Econometric Model
If an econometric model can help in forecasting economic events, its value to policymakers is obvious. For example, in some countries the government wants to improve farmers’ incomes by buying enough of each grain harvest to push the grain price up to a level which yields the desired money income. How much money the government should plan to spend on purchasing grain then becomes an important question. The change in the price of grain can be converted into the associated change in the price of bread, and knowledge of the effect of the change in bread price on the amount the public spends for bread (described earlier) is easily used to show the government how much money it must spend in buying up the grain which the public will stop buying when the price rises. If we consider for simplicity only the bread price, a government attempt to raise the bakers’ receipts from the sale of bread by 20 per cent would require a 20 per cent rise in the selling price on the pre-existing quantities of bread sales. But we know that the public will raise its spending by only 10 per cent when the price rises 20 per cent. Hence, to achieve the desired 20 per cent rise in the proceeds to bakers, the government will itself have to spend the missing 10 per cent on the purchase of bread. With this kind of information, the government can judge better whether aiming at a given improvement in the prices that farmers receive will be too costly for its budget; alternatively, it can prepare its budgetary requests with a greater degree of accuracy.
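The budgetary arithmetic just described can be set out in a few lines (a Python sketch with an index of 100 for the bakers' pre-scheme receipts; the 20 and 10 per cent figures are the article's own):

```python
# The article's price-support arithmetic (bread price only, for simplicity).
baseline_receipts = 100.0  # bakers' receipts before the scheme (index)
target_rise = 0.20         # desired rise in receipts: price up 20% on the old quantity
public_response = 0.5      # public raises spending by only half of any price rise

target_receipts = baseline_receipts * (1 + target_rise)                    # 120
public_spending = baseline_receipts * (1 + public_response * target_rise)  # 110

# The gap is what the government itself must spend buying up the bread
# (equivalently, the grain) that the public stops buying at the higher price.
government_outlay = target_receipts - public_spending
print(round(government_outlay, 2))  # 10.0 -> the "missing 10 per cent"
```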
In countries where the necessary statistical data exist, the policymaking uses of the econometric models will of course extend far beyond the question of the price of a given product. The amount of a necessary exchange devaluation may be better estimated with the use of that part of a model’s results which shows the influence of a change in the price foreigners pay for the country’s exports upon the quantity of those exports they buy, and the comparable effect of changes in the cost of imports on the country’s own imports. When expenditure on goods and services within the country is foreseen to be larger than the capacity to produce goods and services, the appropriate strength of measures to restrict expenditures—such as raising interest rates (to restrict expenditures for business investment) or raising taxes on consumer income (to restrict consumer spending)—becomes of major concern to the government policymaker. Models which show how and how much interest rates and taxes affect investment and consumption expenditures therefore are of great importance to the authorities.
In questions of economic development, also, the effect of increases in taxes on consumers is of great interest. For example, the extent to which taxes cause them to reduce their savings rather than their consumption expenditure is important because the savings are necessary for financing the investment which will lead to economic development. Guidance on what export industries should be established can be found in forecasts of the growth of world demand for various products and in the amount that will be bought of particular products when different prices are charged. Similarly, the amount of addition to total productive capacity which investment expenditures will produce obviously is important, and knowledge about it too can be obtained from observation of past experience with growth in production as affected by the size and the industrial distribution of total investment spending. Finally, the faster the rate of development, the faster is the growth in incomes and hence in the portion of incomes which is saved and can finance further investment for development. Because of this relationship between income growth and growth in the financeable level of investment spending, information on the effect of income growth on savings would also be extremely valuable for development planning.
The Problem of Lags
The reader will already suspect that information as valuable as this is not easily obtained—and he is right. One of the major difficulties is that a cause may not exert its effects until some time has passed, as when a reduction in taxes which leaves consumers with more spendable income leads them at first to spend only a small part of the extra resources; the consumer may require some time to adjust his standard of living to meet the new possibilities in full. The pre-econometrician had very little hope of making proper allowance for the presence of these so-called lags, but the econometrician has been able to test efficiently models which provide a variety of alternative lags and select the one which yields the best results (the estimated values for the effect which on average came closest to the observed values of the effect). In this way, very complex lag patterns have been derived, consumption in the current quarter of the year being found to be influenced relatively little by spendable income in the current quarter, with most of the influence being assigned to the spendable income of, e.g., the preceding 12-month period and the 12 months before that.
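The procedure of trying alternative lag patterns and keeping the one that fits best can be illustrated with a toy example (a Python sketch; the income series, the "true" lag weights, and the candidate patterns are all invented for the purpose):

```python
# Toy lag selection: consumption is constructed to depend mostly on income
# four quarters earlier; candidate lag patterns are then scored by how
# closely each reproduces the observed consumption series.

income = [100 + 2 * t for t in range(16)]  # 16 quarters of steadily rising income

def consumption(t):
    # "True" behavior: 10% weight on current income, 90% on income a year ago.
    return 0.1 * income[t] + 0.9 * income[t - 4]

observed = [consumption(t) for t in range(4, 16)]

# Candidate (current-quarter weight, four-quarter-lag weight) patterns.
candidates = {
    "all current":   (1.0, 0.0),
    "mostly lagged": (0.1, 0.9),
    "evenly split":  (0.5, 0.5),
}

def score(w_now, w_lag):
    """Sum of squared gaps between fitted and observed consumption."""
    fitted = [w_now * income[t] + w_lag * income[t - 4] for t in range(4, 16)]
    return sum((f - o) ** 2 for f, o in zip(fitted, observed))

best = min(candidates, key=lambda name: score(*candidates[name]))
print(best)  # "mostly lagged" -- the pattern matching the true behavior wins
```

In practice the econometrician estimates the lag weights themselves rather than choosing among a fixed menu, but the selection principle, keeping the pattern whose estimates come closest on average to the observed values, is the one shown here.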
The fact that economic causes produce their effects with such lags gives the econometric model real advantage as a device for forecasting economic developments. To know how much consumer spending is likely to rise from the current quarter to the next one, it need not be very important to know what the spendable income will be in that next quarter. From what has just been said, the information on spendable income in the current and preceding quarters is (almost) completely sufficient for a good forecast of next-quarter consumption. And since consumption accounts for the greater part of spending on the country’s production of goods and services, this information carries us a long way toward having a forecast of the levels of gross national product (GNP), employment, etc., in the near future. With similar lagging causal factors being important in the determination of other major segments of the GNP total—e.g., fixed investment and exports—a forecast of most of the total of near-future GNP can be made without need for accurate estimates of the strengths of the various factors that will be found to exist during that future period.
Government spending (or the size of the government budget surplus or deficit)—another important factor in the GNP—cannot be so readily forecast by use of lags. However, since the future values of the government factor are (at least ideally) under the control of the policymakers, no forecast of this factor is necessary. For example, the combination of the forecast levels of consumption, investment, and exports plus, say, the same figure for the government budget as the current one might yield an unsatisfactory level of GNP. But the combination of the forecast for the private sector and the assumed value for government expenditure provides a guide to policy. If the GNP forecast derived under the assumption that the current level of government spending will continue in the future is too high, the policymakers can adjust the budgeted future expenditure until the forecast results yield a suitable level.
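The adjustment step is simple once stated as arithmetic: the private components are forecast from lagged data, and government spending is then set to make the total come out at the desired level. A sketch (all figures invented):

```python
# Choosing government spending to hit a GNP target, given private-sector
# forecasts derived from lagged causal factors. Figures are hypothetical.
consumption_forecast = 620.0
investment_forecast  = 180.0
exports_forecast     = 110.0
gnp_target           = 1000.0

private_total = consumption_forecast + investment_forecast + exports_forecast
government_spending = gnp_target - private_total  # the policy instrument
print(government_spending)  # 90.0
```

A fuller model would also feed the chosen government figure back into the private-sector forecasts (the "feedbacks" mentioned earlier), but the logic of using the instrument to close the gap is the same.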
Validity of Group Behavior Forecasts
Another problem related to econometric models which may occur to many readers is whether the erratic, “lawless” economic behavior of the individuals making up a society can permit the conduct of the whole group to conform sufficiently to such a model. The apparently erratic behavior of each individual may be considered a reflection of the influence of many causal factors—economic and noneconomic—which cannot be considered strong ones for all members of the community taken together and which therefore cannot be easily measured. If these various secondary factors could be considered as more or less self-neutralizing in their net effect on the behavior of an economic group, the plausibility of stable aggregate behavior despite erratic individual behavior would be established. And at least a tendency for such self-cancellation among the secondary factors can in fact be established, provided that one condition is satisfied: these factors must be more or less random. Randomness exists when no one strength (including zero strength) for a particular factor is more likely to be experienced among the individuals in the group than any other; and all strengths of any particular factor are equally likely to be experienced by an individual, regardless of the strength of the other factors affecting him at the same time. While it is not possible to prove that the neglected secondary factors will always satisfy the randomness requirement sufficiently well, it is nevertheless reasonable to assume that in many instances most of the secondary factors producing deviations from a supposed norm will be distributed among the population in a sufficiently close-to-random way. It will be noted that imposing the above randomness condition will yield stable behavior in spite of the fact that it exaggerates the difficulty of establishing stable group behavior in one important respect, namely, its requirement that any given factor yield equal frequencies of strong and weak distorting effects. 
In reality, just as huge distorting influences are not expected at all from secondary factors, so also relatively large influences are expected to occur with relatively low frequency.
To illustrate the operation of the various secondary factors, assume two influences on the rate of bread consumption which may cause the rate to deviate from the norm and which seem reasonably likely to be randomly distributed: differing amounts of increase of current physical exertion, starting from no increase, which will raise bread consumption by 0 per cent, 1 per cent, 2 per cent, or 3 per cent; different amounts of recent tendency for increases in the individual’s weight which will again have no effect (zero weight increase) or reduce bread consumption by 1 per cent, 2 per cent, or 3 per cent. Given our crucial assumption that these changes occur randomly—each one having as much chance to be experienced as any other and any combination of changes in the two factors together being as likely to be experienced as any other combination—there are just 16 possible combinations of distorting factors, all of which are assumed to be equally likely to affect individuals making up the bread-consuming population. To simplify, we can, therefore, assume that there are just 16 individuals in the population, one for each possible combination of strengths of the random factors—see Table 1.
The tables show that, on these assumptions, most individuals will be clustered in the neighborhood of zero deviation from the normal rate of bread consumption, with relatively few having extreme deviations. Moreover, the deviations on one side of the norm will be just counterbalanced by those on the other side, so that the group as a whole shows a zero deviation from normal behavior.
Why is it that the randomly distorted cases tend to cluster around the undistorted, normal value? The logical basis for so neat a result from a set of random factors can be seen from Table 2. There is only one possible instance of each of the two maximum divergences from the norm—those in which the 3 per cent effect from one factor is accompanied by a zero effect from the other, offsetting factor—but there are four possible cases of zero deviation from the norm—the four cases in which the four possible strengths of one factor are accompanied by equal opposite strengths of the other factor. Thus, opposing random factors must tend to coincide in a relatively large proportion of cases, thereby holding a relatively large proportion of individuals close to the norm.
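The counting behind Tables 1 and 2 (the tables themselves are not reproduced here) can be verified by enumerating the 16 equally likely combinations of the two factors, as in this Python sketch:

```python
# Enumerate the 16 combinations of the two random factors and count how
# often each net deviation from the bread-consumption norm occurs.
from collections import Counter

exertion = [0, 1, 2, 3]    # extra exertion raises consumption by 0..3 per cent
weight   = [0, -1, -2, -3]  # weight concern lowers it by 0..3 per cent

net = Counter(e + w for e in exertion for w in weight)

# Frequencies of net deviations +3, +2, ..., -3: the clustering around zero.
print([net[k] for k in range(3, -4, -1)])  # [1, 2, 3, 4, 3, 2, 1]

# Deviations on one side exactly counterbalance those on the other,
# so the 16-person group as a whole sits on the norm.
print(sum(e + w for e in exertion for w in weight))  # 0
```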
This demonstration may seem limited because it assumes that the two illustrative random factors are either neutral or counterbalancing but never reinforce each other’s distorting effects. Reinforcing distortions could be expected to produce more disruption of the average result. The interested reader can construct a three-dimensional version of Table 1, that is to say a set of tables like Table 1 but each allowing for a different size of the influence of another factor affecting food consumption, such as the ages of the population. With the age factor starting at neutral and then reducing bread consumption by three successive 1 percentage-point steps, the three new tables would stand behind Table 1 but would be shifted successively to the right by one step. When this is done and the totals for each size of net deviation from norm taken (as in the bottom row of Table 2), it is found, as expected, that the maximum deviations from norm are larger than before (4½ per cent instead of just 3 per cent); the bread consumption norm itself is also depressed to a somewhat lower figure; but the same clustering of cases around the (new) norm is found, with again only a single case at the extreme “corners,” and the same counterbalancing of deviations from the norm remains. (Instead of the distribution of cases among the various sizes of net effect found in Table 2—1, 2, 3, 4, 3, 2, 1—we find the symmetrical “bell-shaped curve” distribution of probability theory: 1, 3, 6, 10, 12, 12, 10, 6, 3, 1.)
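The three-factor extension just described can be checked the same way. Enumerating the 64 combinations (a Python sketch of the construction suggested in the text) reproduces the bell-shaped counts, the lower norm, and the 4½ per cent maximum deviation:

```python
# Three random factors: exertion adds 0..3 points to bread consumption,
# while weight concern and age each subtract 0..3 points.
from collections import Counter

steps = [0, 1, 2, 3]
net = Counter(e - w - a for e in steps for w in steps for a in steps)

# Frequencies of net deviations +3 down to -6: the bell-shaped pattern.
counts = [net[k] for k in range(3, -7, -1)]
print(counts)  # [1, 3, 6, 10, 12, 12, 10, 6, 3, 1]

# The norm itself is pushed down (two depressing factors against one raising one).
norm = sum(k * n for k, n in net.items()) / sum(net.values())
print(norm)  # -1.5 -> the new, somewhat lower norm

# The extreme "corner" cases now lie 4.5 points from the new norm.
print(max(abs(k - norm) for k in net))  # 4.5
```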
These examples represent ideal cases. The laws of random chance imply that every possible combination of the ignored random factors will tend to occur, influencing the average (and making that average conform to the norm). But they also recognize a certain probability that on some occasions certain of the 16 combinations of factors will be absent while others are overrepresented, so that the observed average result for the whole population will at times differ from the underlying norm. It is only when several observations of the population’s behavior are used that the results taken together can be counted on to approximate the symmetrical distributions described above; only then will the average result for all the individuals in the population approximate the norm.
The number of observations does not, on the other hand, have to be extremely large, for, analogously with the clustering of individuals around the norm described above, large deviations of a group’s behavior at a given time from the underlying group norm tend to be relatively infrequent, with small deviations occurring the greater part of the time. However, a further need for including additional observations before a trustworthy estimate of the norm can be reached is created by the fact that economic life is subjected from time to time to unpredictable shocks (such as climatic or political disturbance) which cause a substantial proportion of the population to depart in the same direction from its normal behavior pattern. Comparable distortions must be expected from time to time because of errors in the statistics themselves. If enough separate observations of society’s behavior are made, either the observations which include the influence of such shocks or errors will not have a chance to dominate the results, or else the distortions created by several such shocks will tend to be in opposite directions and therefore counterbalancing.
In recent years, the increasing availability of the varieties of economic data necessary for estimating econometric models and the increasing availability of electronic computers to perform the otherwise almost impossible task of making the estimates (and making them for a large variety of lag patterns, alternative selections of causal factors, etc., in order to see which ones yield good results) have led to great optimism about the possibilities for producing sufficiently reliable models. Nevertheless, it remains true that the estimates must be taken at best as merely probabilities—the most likely value given the information available, but a value which is correct only in a probabilistic sense: if all goes well and the patterns of human behavior remain unchanged, the effects estimated through the models will be correct on average, although at any given point of time they will not be correct. By any scale of size, small errors in the indicated effects will be quite frequent and large errors quite infrequent. But for the economist’s purposes even the small errors may be too large to permit the model’s findings to be relied on, or the medium-size errors may be so serious that even their moderate frequency of occurrence will be excessive. In these conditions the model will be of little more use to the policymaker than the “educated guesses” he had to rely on before the recent flowering of econometric models. Even here, however, the model may retain some use: at the worst, models can be used to show the ultimate effects of policy actions under a plausible range of assumed values for the strengths of factors; and sometimes the range of results found will exclude certain outcomes which otherwise would have seemed plausible.
In spite of these limitations, the results now being derived from econometric models are often an improvement over the other information available to the policymaker, and the refinement of techniques and of standards of reliability will ensure that the policymaker is increasingly presented with usable results. For the present, however, the policymaker should be placed in a position to evaluate correctly the econometrician’s measures of the reliability of the results presented; even if they are clearly an improvement over those obtainable by educated guessing, these results will commonly still have a sufficient margin of error that the policymaker may prefer a compromise, “safety first” policy to the one for which the model evidence would call if it could be taken at face value.
A further article will provide guidance to help the user of evidence from econometric models to evaluate and adjust the results.
quoting or reprinting Finance and Development articles …
No permission is required to quote or reproduce material appearing in Finance and Development. A credit line or other acknowledgment is requested.
The Editor would be glad to receive two copies of publications containing reprints or quotations.
1 On a determinist view, this inaccuracy of the laws of human behavior could be construed simply as a lack of sufficient information on all the causal factors operating on the kind of human action at issue. The laws derived are imprecise ones simply because it has not been possible to measure—or even to identify—the influence of a very large number of very weak factors which participate in the determination of human action. Without necessarily accepting the determinist philosophy, econometricians usually do believe that they are likely to be able to reduce the degree of inaccuracy of laws of behavior they derive through developing statistics on additional factors which up to now have been unavailable or of too low quality to be trustworthy.