Research Article Volume 2 Issue 1
1Department of Biostatistics, Uludag University, Turkey
2Division of Epidemiology and Biostatistics, University of Illinois, USA
Correspondence: Ilker Ercan, Department of Biostatistics, Uludag University, Medical Faculty, Bursa, Turkey, Tel 90 224 2953881
Received: November 25, 2014 | Published: February 26, 2015
Citation: McCormack K. Diagnostics for hedonic models using an example for cars (hedonic regression). Biom Biostat Int J. 2015;2(1):23-37. DOI: 10.15406/bbij.2015.02.00022
This paper provides a detailed account of the steps involved in the development and diagnostics of an OLS (hedonic) regression model, using the Irish Central Statistics Office's Consumer Price Index for New Cars as an example. Collinearity (its effects on parameter estimates, inference and prediction, and what to do about it) and model diagnostics (residuals, standardized residuals, residual plots, outliers, studentized residuals (t-residuals), influential observations, leverage, Cook's distance and transformations) are discussed in detail with examples.
CPI, consumer price index; HICP, harmonised index of consumer prices; ECB, European Central Bank; OLS, ordinary least squares
Hedonic regression
Hedonic regression is a method used to determine the value of a good or service by breaking it down into its component parts. The value of each component is then determined separately through regression analysis. See Lancaster,1 Griliches2,3 and Diewert4-6 for detailed discussions of the development and application of hedonic prices. In addition, Moulton7 provides information on the importance and expanded use of hedonic methods.
The term "hedonic methods" refers to the use in economic measurement of a "hedonic function" h( ),

$$ p_i = h(c_i) $$

where $p_i$ is the price of a variety (or model) i of a good and $c_i$ is a vector of characteristics associated with the variety. One of the more widely recognised examples of hedonic regression is the Consumer Price Index, which examines changes in the value of a basket of goods over time; the hedonic function is used to adjust for differences in characteristics between varieties of the good when calculating its price index. The hedonic function is normally estimated by regression analysis.
Consumer price indices
All the goods and services that consumers purchase have a price, and that price may vary over time; these changes are a measure of inflation. Consumer Price Indices (CPI) are designed to measure such changes. A useful way to understand the nature of these indices is to imagine a very large shopping basket comprising a set, or basket, of fixed composition, quantity and, as far as possible, quality of goods and services bought by a typical private household.
The European CPI is titled the Harmonised Index of Consumer Prices (HICP - Council Regulation (EC) No 2494/95).8 The HICP is an important input to the European Central Bank's (ECB) monetary policy and is compiled monthly using a Laspeyres index formula, see Allen.9 The HICP expresses the current cost of a fixed market basket of consumer goods and services as a percentage of the cost of the same identical basket at a base period (normally mid-December of the year previous to the reference year).
The HICP is defined as

$$ \mathrm{HICP}_t = \sum_j w_j \left( \frac{p_{jt}}{p_{j0}} \right) \times 100 $$

where $w_j$ is the weight assigned to item j, determined by the base period consumer expenditure shares, and $p_{jt}$ refers to the price of item j in period t (with $p_{j0}$ the base period price).
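To make the formula concrete, the following sketch computes a Laspeyres-type index in Python for a tiny, entirely hypothetical basket; the item names, weights and prices are illustrative placeholders only and are not HICP data.

```python
# Laspeyres-type index: base-period expenditure weights applied to price relatives.
# All items, weights and prices below are hypothetical placeholders.
weights        = {"bread": 0.10, "petrol": 0.30, "new car": 0.60}   # base period shares, sum to 1
base_prices    = {"bread": 1.20, "petrol": 1.05, "new car": 13390.0}
current_prices = {"bread": 1.26, "petrol": 1.02, "new car": 13600.0}

hicp_t = 100 * sum(weights[j] * current_prices[j] / base_prices[j] for j in weights)
print(round(hicp_t, 1))  # index for period t, base period = 100
```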
Difficulties arise whenever a comparison between current and base period products is not possible, or when the current period basket reflects new market developments (e.g. quality improvements). This leads to the well-known problem of how to measure price developments when the quality of the underlying goods and services is changing over time.
It is important that appropriate methods are employed to take account of quality change in the HICP, as this index needs to be credible and transparent. In this context, this paper provides a detailed account of the steps involved in the development of an OLS (hedonic) regression model and associated diagnostics, using the Irish Central Statistics Office's Consumer Price Index for New Cars as an example.
This paper is designed to be a reference document for those challenged with using regression analysis to determine the value of each characteristic of a good or service. It is divided into six sections, each dealing with a particular aspect of model development. Section one deals with summarising relationships, section two discusses fitting curves (regression analysis), while section three presents the hedonic regression model. In section four, a test for collinearity within the model is undertaken and a strategy for dealing with collinearity is presented. In section five, the model diagnostics (residuals, standardized residuals, residual plots, outliers, studentized residuals (t-residuals), influential observations, leverage, Cook's distance, and transformations) are undertaken and discussed in detail.
Finally, section six provides an illustrative example of how hedonic-based quality adjustment can be applied in a situation where the price of an individual car model was available in January of a particular year but was not available in February of the same year. It is shown that without the application of the quality adjustment method the New Car Price Index would provide an incorrect measurement of the price changes of new cars over the period, which in turn would misinform the ECB's monetary policy.
Statistical analysis is used to document relationships among variables. Relationships that yield dependable predictions can be exploited commercially or used to eliminate waste from processes. A marketing study done to learn the impact of price changes on coffee purchases is a commercial use. A study to document the relationship between moisture content of raw material and yield of usable final product in a manufacturing plant can result in finding acceptable limits on moisture content and working with suppliers to provide raw material reliably within these limits. Such efforts can improve the efficiency of manufacturing processes.
We strive to formulate statistical problems in terms of comparisons. For example, the marketing study in the preceding paragraph was conducted by measuring coffee purchases when prices were set at several different levels over a period of time. Similarly, the raw material study was conducted by comparing the yields from batches of raw materials that exhibited different moisture contents.
Scatter plots display statistical relationships between two metric variables (e.g. price and cc). In this section the details of scatter plotting are presented using the data in (Table 1); the data were collected for use in compiling the New Car Index in the Irish CPI. Scatter plots are used to try to discover a tendency for plotted variables to be related in a simple way. Thus the more the scatter plot reminds us of a mathematical curve, the more closely related we infer the variables are. In the scatter plot a direct relationship between the two variables is inferred.
The scatter plot shows a relatively linear relationship between the two metric variables (price and cc). However, to investigate the relationship between these two variables further, we can apply a logarithmic transformation to the cc variable. This transformation discounts larger values of cc and leaves smaller and intermediate ones intact, and has the effect of increasing the linearity of the relationship (Figure 1).
The descriptive statistic most widely used to summarise a relationship between metric variables is a measure of the degree of linearity in the relationship. It is called the product-moment correlation coefficient, denoted by the symbol r, and it is defined by

$$ r = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{s_x} \right) \left( \frac{y_i - \bar{y}}{s_y} \right) $$

where $\bar{x}$ and $s_x$ are the mean and standard deviation of the x variable and $\bar{y}$ and $s_y$ are the mean and standard deviation of the y variable.
These properties emphasise the role of r as a measure of linearity. Essentially, the more the scatter plot looks like a positively sloping straight line, the closer r is to +1, and the more the scatter plot looks like a negatively sloping straight line, the closer r is to -1.
Using the equation above, r is estimated for the relationship shown in (Figure 2) to be 0.92, indicating a strong linear relationship between the price of a new car and cylinder capacity. For the relationship shown in (Figure 1), r is estimated to be 0.93, indicating that the logarithmic transformation does indeed increase the linearity of the relationship between the two metric variables. The LINEST function in EXCEL was used to estimate r in both of the above relationships.
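The same calculation is easy to reproduce outside a spreadsheet. The sketch below, using a small illustrative subset of the price and cc columns of Table 1, computes r for the raw and log-transformed cylinder capacity; because it is only a subset, the values it prints will differ from the full-sample figures quoted above.

```python
import numpy as np

# Illustrative subset of Table 1: price and cylinder capacity (cc).
price = np.array([13390, 15990, 10780, 9810, 15770, 12095, 16255, 9885])
cc    = np.array([1300,  1600,  1300,  1100, 1600,  1300,  1600,  1200])

r_raw = np.corrcoef(cc, price)[0, 1]          # r for price vs cc
r_log = np.corrcoef(np.log(cc), price)[0, 1]  # r for price vs log(cc)
print(round(r_raw, 3), round(r_log, 3))
```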
In the sections above we showed how to summarise the relationships between metric variables using correlations. Although correlations are valuable tools, they are not powerful enough to handle many complex problems in practice (Table 2). Correlations have two major limitations:
Car model | Price | CC | No. of doors | Horse power | Weight | Length | Power steering (pst) | ABS | Air bags
Toyota Corolla 1.3L Xli Saloon | 13,390 | 1300 | 4 | 78 | 1200 | 400 | 1 | 0 | 0
Toyota Carina 1.6L SLi Saloon | 15,990 | 1600 | 4 | 100 | 1400 | 450 | 1 | 1 | 1
Toyota Starlet 1.3L | 10,780 | 1300 | 3 | 78 | 1000 | 370 | 0 | 0 | 0
Ford Fiesta Classic 1.3L | 9,810 | 1100 | 3 | 60 | 1000 | 370 | 0 | 0 | 1
Ford Mondeo LX 1.6I | 15,770 | 1600 | 4 | 90 | 1400 | 450 | 1 | 1 | 1
Ford Escort CL 1.3I | 12,095 | 1300 | 5 | 75 | 1200 | 400 | 1 | 0 | 0
Mondeo CLX 1.6i | 16,255 | 1600 | 5 | 90 | 1400 | 450 | 1 | 1 | 1
Opel Astra GL X1.4NZ | 12,935 | 1400 | 5 | 60 | 1200 | 400 | 1 | 0 | 0
Opel Corsa City X1.2SZ | 9,885 | 1200 | 3 | 45 | 1000 | 370 | 1 | 0 | 1
Opel Vectra GL X1.6XEL | 16,130 | 1600 | 4 | 100 | 1400 | 450 | 1 | 1 | 1
Nissan Micra 1.0L | 9,780 | 1000 | 3 | 54 | 1000 | 370 | 0 | 0 | 0
Nissan Almera 1.4GX 5sp | 13,445 | 1400 | 5 | 87 | 1200 | 400 | 1 | 0 | 0
Nissan Primera SLX | 16,400 | 1600 | 4 | 100 | 1400 | 450 | 1 | 1 | 1
Fiat Punto 55 SX | 8,790 | 1100 | 3 | 60 | 1000 | 370 | 0 | 0 | 0
VW Golf CL 1.4 | 12,995 | 1400 | 5 | 60 | 1200 | 400 | 1 | 0 | 0
VW Vento CL 1.9D | 15,100 | 1900 | 4 | 64 | 1400 | 450 | 1 | 1 | 1
Mazda 323 LX 1.3 | 12,700 | 1300 | 3 | 75 | 1200 | 400 | 1 | 0 | 0
Mazda 626 GLX 2.0I S/R | 17,970 | 2000 | 5 | 115 | 1400 | 450 | 1 | 1 | 1
Mitsubushi Lancer 1.3 GLX | 13,150 | 1300 | 4 | 74 | 1200 | 400 | 1 | 0 | 1
Mitsubushi Gallant 1.8 GLSi | 16,600 | 1800 | 5 | 115 | 1400 | 450 | 1 | 1 | 1
Peugeot 106 XN 1.1 5sp | 9,795 | 1100 | 5 | 45 | 1000 | 370 | 0 | 0 | 0
Peugeot 306 XN 1.4 DAB | 12,295 | 1400 | 4 | 75 | 1200 | 400 | 1 | 0 | 0
406 SL 1.8 DAB S/R | 16,495 | 1800 | 4 | 112 | 1400 | 450 | 1 | 1 | 1
Rover 214 Si | 12,895 | 1400 | 3 | 103 | 1200 | 400 | 1 | 0 | 1
Renault Clio 1.2 RN | 10,990 | 1200 | 5 | 60 | 1000 | 370 | 1 | 0 | 1
Renault Laguna | 15,990 | 1800 | 5 | 95 | 1400 | 450 | 1 | 1 | 1
Volvo 440 1.6 Intro Version | 14,575 | 1600 | 5 | 100 | 1400 | 450 | 1 | 0 | 1
Honda Civic 1.4I SRS | 14,485 | 1400 | 4 | 90 | 1200 | 400 | 1 | 0 | 0

Table 1 Characteristics of cars used in the Irish new car price index
Table 2 Fitted values and residuals for new car data
A model describes how a process works. For scientific purposes, the most useful models are statements of the form "if certain conditions apply, then certain consequences follow". The simplest of such statements assert that the listed conditions result in a single consequence without fail. For example, we learn in physics that if an object falls toward earth, then it accelerates at about 981 centimetres per second per second.
A less simple statement is one that asserts a tendency: "Loss in competition tends to arouse anger." While admitting the existence of exceptions, this statement is intended to be universal; that is, anger is the expected response to loss in competition.
To be useful in documenting the behaviour of processes, models must allow for a range of consequences or outcomes. They must also be able to describe a range of conditions, fixed levels of predictor variables (x), for it is impossible to hold conditions constant in practice. When a model describes the range of consequences corresponding to a fixed set of conditions, it describes local behaviour. A summary of the local behaviours for a range of conditions is called global behaviour. Models are most useful if they describe global behaviour over a range of conditions encountered in practice. When they do, they allow us to make predictions about the consequences corresponding to conditions that have not actually been observed. In such cases, the models help us reason about processes despite being unable to observe them in complete detail.
To illustrate this topic, refer back to the sample of cars in (Table 1) and (Figure 2) (the scatter plot of price vs. cc) above. Our eyes detect a marked linear trend in the plot. Before reading further, use a straight-edge to draw a line through the points that appears to you to be the best description of the trend. Roughly estimate the co-ordinates of two points (not necessarily points corresponding to data points) that lie on the line. From these two estimated points, estimate the slope and y intercept of the line as follows:
Let $(x_1, y_1)$ and $(x_2, y_2)$ denote two points, with $x_1 \neq x_2$, on a line whose equation is y = a + bx. Then the slope of the line is

$$ b = \frac{y_2 - y_1}{x_2 - x_1} $$

and the y intercept is

$$ a = y_1 - b x_1 $$

Next, describe the manner in which the data points deviate from your estimated line. Finally, suppose you are told that a car has a cylinder capacity of 1600cc, and you are asked to use your model to predict the price of the car. Give your best guess at the range of plausible market values. If you do all these things, you will have performed the essential operations of a linear regression analysis of the data.
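As a quick check on the by-eye procedure, the two-point formulas can be evaluated directly; the two points below are purely illustrative guesses read off a hand-drawn line, not estimates from the data.

```python
# Slope and intercept of the line through two illustrative points (x1, y1) and (x2, y2).
x1, y1 = 1100, 10000
x2, y2 = 1600, 16000

b = (y2 - y1) / (x2 - x1)   # slope
a = y1 - b * x1             # y intercept
print(a, b)                 # hand-drawn line: y = a + b*x
```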
If you followed the suggestions in the previous paragraph, you were probably pleased to find that regression analysis is really quite simple. On the other hand, you may not be pleased with the prospect of analysing many large data sets "by eye" or trying to determine a complex model that relates price to cylinder capacity, horse power, weight and length simultaneously. To do any but the most rudimentary analysis, the help of a computer is needed.
Statistical software does regression calculations quickly, reliably, and efficiently. In practice one never has to do more than enter data, manipulate data, issue commands that ask for calculations and graphs, and interpret output. Consequently, computational formulas are not presented here.
The most widely available routines for regression computations use least squares methods. In this section the ideas behind ordinary least squares (OLS) are explained. Ordinary least squares fits a curve to data pairs by minimising the sum of the squared vertical distances between the y values and the curve. Ordinary least squares is the fundamental building block of most other fitting methods.
Fitting a line by ordinary least squares
When a computer program (in this case the LINEST function in EXCEL) is asked to fit a straight-line model to the data in (Figure 2) using the method of ordinary least squares, the following equation is obtained

$$ \hat{y} = 307 + 9.11x $$

The symbol y stands for a value of Price (the response variable), and the symbol ^ over the y indicates that the model gives only an estimated value. The symbol x (the predictor variable) stands for a value of cylinder capacity. This result can be put into the representation

Observation = Fit + Residual

where y is the observation, $\hat{y}$ is the fitted value and $\hat{e} = y - \hat{y}$ is the residual. Consider car No. 1 in (Table 1), which has a cylinder capacity x = 1300. The corresponding observed price is y = 13,390. The fitted value given by the ordinary least squares line is

$\hat{y}$ = 307 + 9.11(1300) = 307 + 11,843 = 12,150

The vertical distance between the actual price and the fitted price is $\hat{e}$ = 13,390 – 12,150 = 1,240, which is the residual. The positive sign indicates the actual price is above the fitted line. If the sign were negative it would mean the actual price is below the fitted line.
Figure 3 below shows a scatter plot of the data with ordinary least squares line fitted through the points. This plot confirms that the computer can be trained to do the job of fitting a line. The OLS line was fitted using the linear trend line option in WORD for a chart. Another output from the statistical software is a measure of variation: s = 1028. This measure of variation is the standard deviation of the vertical differences between the data points and the fitted line, that is, the standard deviation of the residuals (Figure 4).
An interesting characteristic of the method of least squares is that, for any data set, the residuals from fitting a straight line by the method of OLS sum to zero (assuming the model includes a y-intercept term). Also, because the mean of the OLS residuals is zero, their standard deviation is the square root of the sum of their squares divided by the degrees of freedom. When fitting a straight line by OLS, the number of degrees of freedom is two less than the number of cases, denoted by n-2, because

The residuals sum to zero

The sum of the products of the fitted values and residuals, case by case, is zero.
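A minimal sketch of the same fit in Python, assuming the price and cc columns of Table 1 are held in two arrays (an illustrative subset is used here). It fits the OLS line, then checks numerically that the residuals sum to approximately zero and that the fitted values and residuals are orthogonal, and computes s on n-2 degrees of freedom.

```python
import numpy as np

# Illustrative subset of Table 1.
price = np.array([13390.0, 15990, 10780, 9810, 15770, 12095, 16255, 9885])
cc    = np.array([1300.0,  1600,  1300,  1100, 1600,  1300,  1600,  1200])

b, a = np.polyfit(cc, price, 1)   # np.polyfit returns the slope first, then the intercept
fitted = a + b * cc
resid  = price - fitted

print(resid.sum())                                   # ~0: residuals sum to zero
print((fitted * resid).sum())                        # ~0: fitted values and residuals, case by case
s = np.sqrt((resid ** 2).sum() / (len(price) - 2))   # residual standard deviation on n-2 df
print(round(a, 1), round(b, 2), round(s, 1))
```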
Analysis of residuals
Two fundamental tests are applied to residuals from a regression analysis: a test for normality and a scatter plot of residuals versus fitted values. The first test can be performed by checking the percentage of residuals within one, two and three standard deviations of their mean, which is zero. The second test gives visual cues of model inadequacy.
What do we look for in a plot of residuals versus fitted values? We look for a plot that suggests random scatter. As noted above, the residuals satisfy the constraint

$$ \sum_{i=1}^{n} \hat{y}_i \hat{e}_i = 0 $$

where the summation is done over all cases in the data set. The constraint, in turn, implies that the product-moment correlation coefficient between the residuals and the fitted values is zero. If the scatter plot is somehow not consistent with this fact, because it exhibits a trend or other peculiar behaviour, then we have evidence that the model has not adequately captured the relationship between x and y. This is the primary purpose of residual analysis: to seek evidence of inadequacy.
The scatter plot in (Figure 3) suggests random scatter and the regression equation above is, therefore, consistent with the above constraint.
The inclusion of additional metric variables
So far the variable used to account for the variation in the price of a new car is a measure of a physical characteristic which is more or less permanent, though cylinder capacity can change with improvements or deteriorations. This variable does not link up directly with economic factors in the market place, however. Regardless of the cylinder capacity of the car, the price of a new car is also related to the horse power, weight, length and number of doors.
Defining the characteristics of a new car as follows

$x_1$ = cylinder capacity (cc)

$x_2$ = number of doors (d)

$x_3$ = horse power (ps)

we propose to fit a model of the form

$$ y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + e \qquad (3.1) $$
When applying regression models (hedonic regression) to a car index it is usual to fit a semi-logarithmic form, as it has been found to fit the data best. That is

$$ \ln y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + e \qquad (3.2) $$

This model relates the logarithm of the price of a new car to absolute values of the characteristics. Natural logarithms are used because, in such a model, a b coefficient, if multiplied by one hundred, provides an estimate of the percentage increase in price due to a one unit change in the particular characteristic or "quality", holding the level of the other characteristics constant.
Using the LINEST function in EXCEL (or PROC REG in SAS) the following estimates for the b coefficients are obtained when the above model is applied to the data in (Table 1).
(3.3)
The interpretation of the above equation is as follows. Keeping the level of other characteristics constant
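The semi-logarithmic fit itself can be reproduced with any OLS routine. The sketch below builds a small design matrix from a few illustrative rows of Table 1 and regresses ln(price) on cc, doors and horse power; it is a demonstration of the mechanics only, so the coefficients it prints are not the published estimates in Equation (3.3).

```python
import numpy as np

# Illustrative subset of Table 1: price, cylinder capacity, doors, horse power.
price = np.array([13390.0, 15990, 10780, 9810, 15770, 12095, 16255, 9885])
cc    = np.array([1300.0,  1600,  1300,  1100, 1600,  1300,  1600,  1200])
doors = np.array([4.0, 4, 3, 3, 4, 5, 5, 3])
ps    = np.array([78.0, 100, 78, 60, 90, 75, 90, 45])

# Semi-logarithmic model (3.2): ln(price) regressed on the characteristics, with an intercept.
X = np.column_stack([np.ones(len(price)), cc, doors, ps])
y = np.log(price)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates of b0, b1, b2, b3
print(beta)
# Each slope coefficient, multiplied by 100, is roughly the percentage change in price
# for a one-unit change in that characteristic, holding the others constant.
```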
The inclusion of categorical variables
The next step is to incorporate power steering, the ABS system and air bags into the model. These are categorical variables: their numeric values 1 and 0 stand for the inclusion or exclusion of these features in a car.
The semi-logarithmic form of the model is now

$$ \ln y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4 + b_5 x_5 + b_6 x_6 + b_7 x_7 + b_8 x_8 + e \qquad (3.4) $$

where

$x_4$ = weight (w)

$x_5$ = length (l)

$x_6$ = power steering (pst)

$x_7$ = ABS system (abs)

$x_8$ = air bags (ab)
Using the LINEST function in EXCEL (or PROC REG in SAS) the following estimates for the b coefficients are obtained when the above model is applied to the relevant data in (Table 1).
Equation (3.5)
The regression coefficients obtained from Equation (3.5) are interpreted as follows. Keeping the level of other characteristics constant
In Section 4 below it is shown that there is strong Collinearity between weight and length in the above regression model and, therefore, length will be omitted from the model.
The regression model now becomes

$$ \ln y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4 + b_6 x_6 + b_7 x_7 + b_8 x_8 + e \qquad (3.6) $$

where

$x_4$ = weight (w)

$x_6$ = power steering (pst)

$x_7$ = ABS system (abs)

$x_8$ = air bags (ab)
Using the LINEST function in EXCEL (or PROC REG in SAS) the following estimates for the b coefficients are obtained when the above model is applied to the relevant data in (Table 1). (3.7)
The interpretation of the above equation is as follows. Keeping the level of other characteristics constant
Section 4 below shows that collinearity is not an issue in the regression model described in Equation (3.7).
The output of the regression results for Equation (3.7) is displayed below (Model 1 and Model 2), including the t statistics (t ratios) for the regression coefficients. An R-square of 96% indicates that almost all of the variation in the price of new cars is explained by the selected predictors.
Part of any statistical analysis is to stand back and criticise the regression model and its assumptions. This phase is called model diagnosis. If, under close scrutiny, the assumptions seem to be approximately satisfied, then the model can be used to predict and to understand the relationship between the response and the predictors.
In Section 5 below the regression model, as described in Equation (3.7), is shown to be adequate for predicting and understanding the relationship between response and predictors for the new car data described in (Table 1).
Classic definition of hedonic regression
As we can see above, the hedonic hypothesis assumes that a commodity (e.g. a new car) can be viewed as a bundle of characteristics or attributes (e.g. cc, horse power, weight, etc.) for which implicit prices can be derived from prices of different versions of the same commodity containing different levels of specific characteristics.
The ability to disaggregate a commodity and price its components facilitates the construction of price indices and the measurement of price change across versions of the same commodity. A number of issues arise when trying to accomplish this.
Much criticism of the hedonic approach has focused on the last two questions, pointing out the restrictive nature of the assumptions required to establish the "existence" and meaning of such indices. However, what the hedonic approach attempts to do is provide a tool for estimating "missing" prices: prices of particular bundles not observed in the base or later periods. It does not pretend to dispose of the questions of whether various observed differences are demand or supply determined, how the observed variety of models in the market is generated, or whether the resulting indices have an unambiguous interpretation for their purpose.
Suppose that in the car data (Table 1) the car weight in pounds, in addition to the car weight in kilograms, is used as a predictor variable. Let $x_4$ denote the weight in kilograms and let $x_5$ denote the weight in pounds. Since one kilogram is the same as 2.2046 pounds, $x_5 = 2.2046 x_4$, so that

$$ \beta_4 x_4 + \beta_5 x_5 = (\beta_4 + 2.2046\,\beta_5)\, x_4 = \gamma x_4 $$

with $\gamma = \beta_4 + 2.2046\,\beta_5$. Here $\beta_5$ represents the "true" regression coefficient associated with the predictor weight when measured in pounds. Regardless of the value of $\gamma$, there are infinitely many different values for $\beta_4$ and $\beta_5$ that produce the same value for $\gamma$. If both $x_4$ and $x_5$ are included in the model, then $\beta_4$ and $\beta_5$ cannot be uniquely defined and cannot be estimated from the data.
The same difficulty occurs if there is a linear relationship among any of the predictor variables. If, for some set of predictor variables $x_1, x_2, \ldots, x_k$ and some set of constants $c_0, c_1, \ldots, c_k$, not all zero,

$$ c_1 x_{i1} + c_2 x_{i2} + \cdots + c_k x_{ik} = c_0 \qquad (4.1) $$

for all cases i in the data set, then the predictors are said to be collinear. Exact collinearity rarely occurs with actual data, but approximate collinearity occurs when predictors are nearly linearly related. As discussed later, approximate collinearity also causes substantial difficulties in regression analysis. Variables are said to be collinear even if Equation (4.1) holds only approximately. Setting aside for the moment the assessment of the effects of collinearity, how is it detected?
Collinearity between pairs of predictor variables is assessed by calculating the correlation coefficients between all pairs of predictor variables and displaying them in a table (Table 4).
Such a table of correlations covers only pairs of predictors and cannot reveal more complicated (near) linear relationships among several predictors, as expressed in Equation (4.1). To do so, the multiple coefficient of determination, $R_j^2$, obtained from regressing the jth predictor variable on all the other predictor variables, is calculated. That is, $x_j$ is temporarily treated as the response in this regression. The closer this $R_j^2$ is to 1 (or 100%), the more serious the collinearity problem is with respect to the jth predictor.
Effects on parameter estimates
The effect of collinearity on the estimates of regression coefficients may be best seen from the expression giving the standard errors of those coefficients. Standard errors give a measure of expected variability for coefficients: the smaller the standard error, the better the coefficient tends to be estimated. It may be shown that the standard error of the jth coefficient, $\hat{\beta}_j$, is given by

$$ se(\hat{\beta}_j) = \frac{s}{\sqrt{(1 - R_j^2)\sum_{i=1}^{n}(x_{ij} - \bar{x}_j)^2}} \qquad (4.2) $$

where, as before, $R_j^2$ is the value obtained from regressing the jth predictor variable on all the other predictors. Equation (4.2) shows that, with respect to collinearity, the standard error is smallest when $R_j^2$ is zero, that is, when the jth predictor is not linearly related to the other predictors. Conversely, if $R_j^2$ is near 1, then the standard error of $\hat{\beta}_j$ is large and the estimate is much more likely to be far from the true value of $\beta_j$.
The quantity

$$ VIF_j = \frac{1}{1 - R_j^2} \qquad (4.3) $$

is called the variance inflation factor (VIF). The larger the value of $VIF_j$ for a predictor $x_j$, the more severe the collinearity problem. As a guideline, many authors recommend that a VIF greater than 10 suggests a collinearity difficulty worthy of further study. This is equivalent to flagging predictors with $R_j^2$ greater than 90%.
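The VIF of each predictor can be computed directly from its definition by regressing that predictor on the remaining ones; the sketch below assumes the predictor columns are held in a NumPy array (without an intercept column). The synthetic data at the end simply illustrates that two nearly collinear columns, such as weight in kilograms and in pounds, produce very large VIFs.

```python
import numpy as np

def vif(X):
    """Variance inflation factors 1 / (1 - R_j^2) for each column of X (predictors only)."""
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]                                  # treat predictor j as the response
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), others])    # regress it on the remaining predictors
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        factors.append(1.0 / (1.0 - r2))
    return factors

# Illustrative: a predictor and an almost exact multiple of it (kg vs pounds) inflate the VIFs.
rng = np.random.default_rng(0)
w_kg = rng.normal(1200, 150, size=50)
X = np.column_stack([w_kg, 2.2046 * w_kg + rng.normal(0, 1, size=50), rng.normal(size=50)])
print(vif(X))
```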
Table 3 below presents the results of the collinearity diagnostics for the regression model outlined in Equation (3.5) (using PROC REG in SAS). From Table 3 and Table 4 it is obvious that there is a strong linear relationship between the predictor variables w and l in the regression model in Equation (3.5) and that they are collinear. To overcome this collinearity problem the predictor variable l (length) will be omitted from the regression model.
Variance Inflation Factors

Predictor | VIF
cc | 7.36028546
d | 1.54999525
ps | 3.07094954
w | 221.98482270
l | 246.25396926
pst | 4.73427617
abs | 7.87431815
ab | 3.28577252

Table 3 Variance Inflation Factors (VIF)
Correlation table for predictor variables

 | cc | d | ps | w | l | pst | abs
d | 0.42754 | | | | | |
ps | 0.75827 | 0.23098 | | | | |
w | 0.90567 | 0.42623 | 0.78991 | | | |
l | 0.91509 | 0.40526 | 0.78197 | 0.98931 | | |
pst | 0.59539 | 0.43901 | 0.48691 | 0.67540 | 0.60361 | |
abs | 0.82684 | 0.2129 | 0.63492 | 0.80978 | 0.86680 | 0.34752 |
ab | 0.55255 | 0.7412 | 0.46523 | 0.52271 | 0.58908 | 0.34995 | 0.64500

Table 4 Correlation table for predictor variables
Obs | Dep Var P | Predict Value | Residual | Standard Residual | t-Residual | Hat Diag | Cook's D
1 | 9.5000 | 9.4673 | 0.03270 | 0.772 | 0.7644 | 0.1408 | 0.012
2 | 9.6800 | 9.6865 | -0.00647 | -0.156 | -0.1518 | 0.1704 | 0.001
3 | 9.2900 | 9.2477 | 0.04230 | 1.207 | 1.2213 | 0.4099 | 0.126
4 | 9.1900 | 9.1546 | 0.03540 | 0.981 | 0.9799 | 0.3767 | 0.073
5 | 9.6700 | 9.6622 | 0.00780 | 0.188 | 0.1833 | 0.1739 | 0.001
6 | 9.4000 | 9.4753 | -0.07530 | -1.814 | -1.9346 | 0.1730 | 0.086
7 | 9.7000 | 9.6775 | 0.02250 | 0.554 | 0.5441 | 0.2094 | 0.010
8 | 9.4700 | 9.4466 | 0.02340 | 0.571 | 0.5615 | 0.1974 | 0.010
9 | 9.2000 | 9.2328 | -0.03280 | -1.022 | -1.0232 | 0.5059 | 0.134
10 | 9.6900 | 9.6865 | 0.00353 | 0.085 | 0.0827 | 0.1704 | 0.000
11 | 9.1900 | 9.1663 | 0.02370 | 0.608 | 0.5978 | 0.2712 | 0.017
12 | 9.5100 | 9.5122 | -0.00216 | -0.053 | -0.0512 | 0.1870 | 0.000
13 | 9.7100 | 9.6865 | 0.02350 | 0.566 | 0.5558 | 0.1704 | 0.008
14 | 9.0800 | 9.1886 | -0.10860 | -2.706 | -3.3131 | 0.2279 | 0.270
15 | 9.4700 | 9.4466 | 0.02340 | 0.571 | 0.5615 | 0.1974 | 0.010
16 | 9.6200 | 9.6222 | -0.00220 | -0.083 | -0.0809 | 0.6615 | 0.002
17 | 9.4500 | 9.4447 | 0.00528 | 0.136 | 0.1321 | 0.2707 | 0.001
18 | 9.8000 | 9.7690 | 0.03100 | 0.877 | 0.8714 | 0.4005 | 0.064
19 | 9.4800 | 9.4237 | 0.05630 | 1.389 | 1.4241 | 0.2106 | 0.064
20 | 9.7200 | 9.7536 | -0.03360 | -0.830 | -0.8228 | 0.2132 | 0.023
21 | 9.1900 | 9.1828 | 0.00721 | 0.207 | 0.2021 | 0.4189 | 0.004
22 | 9.4200 | 9.4677 | -0.04770 | -1.125 | -1.1329 | 0.1367 | 0.025
23 | 9.7100 | 9.7310 | -0.02100 | -0.503 | -0.4938 | 0.1646 | 0.006
24 | 9.4600 | 9.4864 | -0.02640 | -0.725 | -0.7160 | 0.3618 | 0.037
25 | 9.3000 | 9.2998 | 0.000171 | 0.006 | 0.0055 | 0.5679 | 0.000
26 | 9.6800 | 9.7051 | -0.02510 | -0.592 | -0.5821 | 0.1409 | 0.007
27 | 9.5900 | 9.6226 | -0.03260 | -1.302 | -1.3267 | 0.6988 | 0.492
28 | 9.5800 | 9.5041 | 0.07590 | 1.826 | 1.9500 | 0.1723 | 0.087

Table 5 Diagnostic statistics for regression model
Variance Inflation Factors

Predictor | VIF
cc | 7.28722659
d | 1.45581431
ps | 3.01093302
w | 10.43520369
pst | 2.68458544
abs | 5.83911068
ab | 1.92580528

Table 6 Variance Inflation Factors (VIF)
Effects on inference
If collinearity affects parameter estimates and their standard errors, then it follows that t-ratios will also be affected.
Effects on prediction
The effect of collinearity on prediction depends on the particular values specified for the predictors. If the relationship among the predictors used in fitting the model is preserved in the predictor values used for prediction, then the predictions will be little affected by collinearity. On the other hand, if the specified predictor values are contrary to the observed relationships among the predictors in the model, then the predictions will be poor.
What to do about collinearity
The best defence against the problems associated with collinear predictors is to keep the models as simple as possible. Variables that add little to the usefulness of a regression model should be deleted from the model. When collinearity is detected among variables, none of which can reasonably be deleted from a regression model, avoid extrapolation and beware of inference on individual regression coefficients.
Table 6 below presents the results of the collinearity diagnostics for the regression model outlined in Equation 3.7 (using PROC REG in SAS).
Note that the predictor variable w (weight) does not have a VIF value sufficiently greater than 10 to warrant exclusion from the model. Table 4 and Table 6 above indicate that the regression model as described in Equation (3.7) does not have a problem with collinearity among the variables.
All the regression theory and methods presented above rely to a certain extent on the standard regression assumptions. In particular it was assumed that the data were generated by a process that could be modelled according to

$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + e_i \qquad \text{for } i = 1, 2, \ldots, n \qquad (5.1) $$

where the error terms $e_i$ are independent of one another and are each normally distributed with mean 0 and common standard deviation $\sigma$. But in any practical situation, assumptions are always in doubt and can only hold approximately at best. The second part of any statistical analysis is to stand back and criticise the model and its assumptions. This phase is frequently called model diagnosis. If under close scrutiny the assumptions seem to be approximately satisfied, then the model can be used to predict and to understand the relationship between response and predictors. Otherwise, ways to improve the model are sought, once more checking the assumptions of the new model. This process is continued until either a satisfactory model is found or it is determined that none of the models is completely satisfactory. Ideally, the adequacy of the model is assessed by checking it with a new set of data, but that is a rare luxury; most often diagnostics based on the original data set must suffice. The study of diagnostics begins with the important topic of residuals (Model 1).
Analysis of Variance

Source | DF | Sum of Squares | Mean Square | F Value | Prob>F
Model | 7 | 1.04297 | 0.14900 | 71.462 | 0.0001
Error | 20 | 0.04170 | 0.00208 | |
C Total | 27 | 1.08467 | | |

Root MSE | 0.04566 | R-square | 0.9616
Dep Mean | 9.49107 | Adj R-sq | 0.9481
C.V. | 0.48110 | |

Model 1 Analysis of variance
Parameter Estimates

Variable | DF | Parameter Estimate | Standard Error | T for H0: Parameter=0 | Prob > |T|
Intercep | 1 | 8.425031 | 0.14860470 | 56.694 | 0.0001
CC | 1 | 0.000077042 | 0.00009113 | 0.845 | 0.4079
D | 1 | 0.015308 | 0.01319688 | 1.160 | 0.2597
PS | 1 | 0.002427 | 0.00073364 | 3.309 | 0.0035
W | 1 | 0.000487 | 0.00017666 | 2.758 | 0.0121
PST | 1 | 0.106869 | 0.03691626 | 2.895 | 0.0090
ABS | 1 | 0.079148 | 0.04351762 | 1.819 | 0.0840
AB | 1 | -0.033942 | 0.02419823 | -1.403 | 0.1761

Variable | DF | Variance Inflation
Intercep | 1 | 0.00000000
CC | 1 | 7.28722659
D | 1 | 1.45581431
PS | 1 | 3.01093302
W | 1 | 10.43520369
PST | 1 | 2.68458544
ABS | 1 | 5.83911068
AB | 1 | 1.92580528

Model 2 Parameter estimates
Residuals – standardised residuals
Most of the regression assumptions apply to the error terms. However, the error terms cannot be obtained, and the assessment of the errors must be based on the residuals, obtained as the actual value minus the fitted value that the model predicts with all unknown parameters estimated (Model 2) for the data. Recall that in symbols the ith residual is

$$ \hat{e}_i = y_i - \hat{y}_i \qquad \text{for } i = 1, 2, \ldots, n \qquad (5.2) $$

To analyse residuals (or any other diagnostic statistic), their behaviour when the model assumptions do hold and, if possible, when at least some of the assumptions do not hold must be understood. If the regression assumptions all hold, it may be shown that the residuals have normal distributions with 0 means. It may also be shown that the distribution of the ith residual has standard deviation $\sigma\sqrt{1 - h_{ii}}$, where $h_{ii}$ is the ith diagonal element of the "hat matrix" determined by the values of the set of predictor variables (see Appendix I), although the particular formula given there is not needed here. In the simple case of a single-predictor model it may be shown that
$$ h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n}(x_j - \bar{x})^2} \qquad (5.3) $$

Note in particular that the standard deviation of the distribution of the ith residual is not $\sigma$, the standard deviation of the distribution of the ith error term $e_i$. It may be shown that, in general,

$$ \frac{1}{n} \le h_{ii} \le 1 \qquad (5.4) $$

so that $h_{ii}$ is at its minimum value, 1/n, when the predictors are all equal to their mean values. On the other hand, $h_{ii}$ approaches its maximum value, 1, when the predictors are very far from their mean values. Thus residuals obtained from data points that are far from the centre of the data set will tend to be smaller than the corresponding error terms. Curves fit by least squares will usually fit better at extreme values of the predictors than in the central part of the data.
Table 5 displays the $h_{ii}$ values (along with many other diagnostic statistics that will be discussed) for the regression of log(price) on the seven predictors described above (cc, d, ps, w, pst, abs and ab). To compensate for the differences in dispersion among the distributions of the different residuals, it is usually better to consider the standardized residuals, defined by

$$ \text{ith standardized residual} = \frac{\hat{e}_i}{s\sqrt{1 - h_{ii}}} \qquad \text{for } i = 1, 2, \ldots, n \qquad (5.6) $$

Notice that the unknown $\sigma$ has been estimated by s. If n is large and if the regression assumptions are all approximately satisfied, then the standardized residuals should behave about like standard normal variables. Table 5 also lists the residuals and standardized residuals for all 28 observations.
Even if all the regression assumptions are met, the residuals (and the standardized residuals) are not independent. For example, the residuals for a model that includes an intercept term always add to zero. This alone implies they are negatively correlated. It may be shown that, in fact, the theoretical correlation coefficient between the ith and jth residuals (or standardized residuals) is

$$ -\frac{h_{ij}}{\sqrt{(1 - h_{ii})(1 - h_{jj})}} \qquad (5.7) $$

where $h_{ij}$ is the ijth element of the hat matrix. Again, the general formula for these elements is not needed here. For the simple single-predictor case it may be shown that

$$ h_{ij} = \frac{1}{n} + \frac{(x_i - \bar{x})(x_j - \bar{x})}{\sum_{k=1}^{n}(x_k - \bar{x})^2} \qquad (5.8) $$

From Equations (5.3), (5.7) and (5.8) (and in general) we see that the correlations will be small except for small data sets and/or residuals associated with data points very far from the central part of the predictor values. From a practical point of view this small correlation can usually be ignored, and the assumptions on the error terms can be assessed by comparing the properties of the standardized residuals to those of independent, standard normal variables.
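A sketch of how the hat diagonal and standardized residuals reported in Table 5 can be computed, assuming X is the design matrix with an intercept column and y the vector of log prices; the tiny single-predictor data set at the end is illustrative only.

```python
import numpy as np

def standardized_residuals(X, y):
    """Return the hat diagonal h_ii and the standardized residuals e_i / (s * sqrt(1 - h_ii))."""
    n, p = X.shape                              # p = k + 1 when X contains an intercept column
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    h = np.diag(H)
    resid = y - H @ y                           # residuals, since fitted = H y
    s = np.sqrt(resid @ resid / (n - p))        # estimate of sigma on n - (k+1) df
    return h, resid / (s * np.sqrt(1 - h))

# Illustrative single-predictor example.
x = np.array([1000.0, 1200, 1300, 1400, 1600, 1800])
y = np.log(np.array([9780.0, 10990, 13390, 13445, 15990, 16600]))
X = np.column_stack([np.ones(len(x)), x])
h, std_res = standardized_residuals(X, y)
print(np.round(h, 3), np.round(std_res, 2))
```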
Residual plots
Plots of the standardized residuals against other variables are very useful in detecting departures from the standard regression assumptions. Many of the most common problems may be seen by plotting (standardized) residuals against the corresponding fitted values. In this plot, residuals associated with approximately equal-sized fitted values are visually grouped together. In this way it is relatively easy to see if mostly negative (or mostly positive) residuals are associated with the largest and smallest fitted values. Such a plot would indicate curvature that the chosen regression curve did not capture. Figure 5 displays the plot of Standardized Residuals versus fitted values for the new car data.
Another important use for the plot of residuals versus fitted values is to detect lack of common standard deviation among different error terms. Contrary to the assumption of common standard deviation, it is not uncommon for variability to increase as the values for response variables increase. This situation does not occur for the data contained in (Figure 5).
In regression analysis the model is assumed to be appropriate for all the observations. However, it is not unusual for one or two cases to be inconsistent with the general pattern of the data in one way or another. When a single predictor is used, such cases may be easily spotted in the scatter plot of the data. When several predictors are employed, such cases will be much more difficult to detect. The nonconforming data points are usually called outliers. Sometimes it is possible to retrace the steps leading to the suspect data point and isolate the reason for the outlier. For example, it could be the result of a recording error; if this is the case the data can be corrected. At other times the outlier may be due to a response obtained when variables not measured were quite different than when the rest of the data were obtained. Regardless of the reason for the outlier, its effect on regression analysis can be substantial.
Outliers that have unusual response values are the focus here. Unusual responses should be detectable by looking for unusual residuals, preferably by checking for unusually large standardized residuals. If the normality of the error terms is not in question, then a standardized residual larger than 3 in magnitude certainly is unusual and the corresponding case should be investigated for a special cause.
A difficulty with looking at standardized residuals is that an outlier, if present, will also affect the estimate of $\sigma$ that enters into the denominator of the standardized residual. Typically, an outlier will inflate s and thus deflate the standardized residual and mask the outlier. One way to circumvent this problem is to estimate the value of $\sigma$ used in calculating the ith standardized residual from all the data except the ith case. Let $s_{(i)}$ denote such an estimate, where the subscript (i) indicates that the ith case has been deleted. This leads to the studentized residual, defined by

$$ \text{ith studentized residual} = \frac{\hat{e}_i}{s_{(i)}\sqrt{1 - h_{ii}}} \qquad \text{for } i = 1, 2, \ldots, n \qquad (5.9) $$
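The leave-one-out estimate $s_{(i)}$ can be obtained by simply refitting without case i; the sketch below does this the slow, explicit way for clarity rather than using the usual algebraic shortcut. It assumes the same design matrix X (with intercept column) and response y as in the previous sketch.

```python
import numpy as np

def studentized_residuals(X, y):
    """t-residuals e_i / (s_(i) * sqrt(1 - h_ii)), with s_(i) estimated from a fit
    that deletes case i (brute-force refit for clarity)."""
    n, p = X.shape
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    t_res = np.empty(n)
    for i in range(n):
        Xi, yi = np.delete(X, i, axis=0), np.delete(y, i)
        bi, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
        ri = yi - Xi @ bi
        s_i = np.sqrt(ri @ ri / (n - 1 - p))            # s_(i): case i removed, n-1 cases
        t_res[i] = resid[i] / (s_i * np.sqrt(1 - h[i]))
    return t_res
```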
The next question to be asked is "how do these diagnostic methods work for the new car data?" Table 5 lists diagnostic statistics for the regression model as applied to the new car data. Notice that there is only one case (observation No. 14) where the standardized and studentized residuals differ appreciably and where the standardized or studentized residual is above 3 in magnitude. These results indicate that, in general, there are no outlier problems associated with the regression model described in Equation (3.7) above.
Influential observations
The principle of ordinary least squares gives equal weight to each case. On the other hand, each case does not have the same effect on the fitted regression curve. For example, observations with extreme predictor values can have substantial influence on the regression analysis. A number of diagnostic statistics have been invented to quantify the amount of influence (or at least potential influence) that individual cases have in a regression analysis. The first measure of influence is provided by the diagonal elements of the hat matrix.
Leverage
When considering the influence of individual cases on a regression analysis, the ith diagonal element of the hat matrix, $h_{ii}$, is often called the leverage of the ith case: a measure of the ith data point's influence in the regression with respect to the predictor variables. In what sense does $h_{ii}$ measure influence? It may be shown that $h_{ii} = \partial\hat{y}_i / \partial y_i$, that is, $h_{ii}$ is the rate of change of the ith fitted value with respect to the ith response value. If $h_{ii}$ is small, then a small change in the ith response results in a small change in the corresponding fitted value. However, if $h_{ii}$ is large, then a small change in the ith response produces a large change in the corresponding fitted value $\hat{y}_i$.
Further interpretation of $h_{ii}$ as leverage is based on the discussion in Section 5.1. There it was shown that the standard deviation of the sampling distribution of the ith residual is not $\sigma$ but $\sigma\sqrt{1 - h_{ii}}$. Furthermore, $h_{ii}$ is equal to its smallest value, 1/n, when all the predictors are equal to their mean values. These are the values of the predictors that have the least influence on the regression curve and imply, in general, the largest residuals. On the other hand, if the predictors are far from their means, then $h_{ii}$ approaches its largest value of 1 and the standard deviation of such residuals is quite small. In turn this implies a tendency for small residuals, and the regression curve is pulled toward these influential observations.
How large might a leverage value be before a case is considered to have large influence? It may be shown algebraically that the average leverage over all cases is (k+1)/n, that is,

$$ \frac{1}{n}\sum_{i=1}^{n} h_{ii} = \frac{k+1}{n} \qquad (5.10) $$

where k is the number of predictors in the model. On the basis of this result, many authors suggest flagging cases as influential if their leverage exceeds two or three times (k+1)/n.
For the new car data displayed in Table 5, and using the regression model as described in Equation (3.7) with k = 7 predictors and n = 28 cases, (k+1)/n = 8/28 = 0.286, so the thresholds are 2 x (k+1)/n = 0.571 and 3 x (k+1)/n = 0.857. In Table 5 only two observations (No.'s 16 and 27) are above the 2 x (k+1)/n threshold and none of the observations is above the 3 x (k+1)/n threshold. This result indicates that there are no observations with extreme predictor values having an undue influence on the regression results.
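Flagging high-leverage cases is then a one-liner once the hat diagonal is available; the helper below assumes h comes from a sketch like the one in the residuals section, with k the number of predictors.

```python
import numpy as np

def flag_high_leverage(h, k):
    """Indices of cases whose leverage exceeds 2(k+1)/n and 3(k+1)/n respectively."""
    n = len(h)
    cut2, cut3 = 2 * (k + 1) / n, 3 * (k + 1) / n
    return np.where(h > cut2)[0], np.where(h > cut3)[0]

# For the new car model: k = 7 predictors and n = 28 cases give cut-offs 0.571 and 0.857.
```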
Cook’s distance
As good as large leverage values are in detecting cases influential on the regression analysis, this criterion is not without faults. Leverage values are completely determined by the values of the predictor variables and do not involve the response values at all. A data point that possesses large leverage but also lies close to the trend of the other data will not have undue influence on the regression results.
Several statistics have been proposed to better measure the influence of individual cases. One of the most popular is called Cook’s Distance, which is a measure of a data point’s influence on regression results that considers both the predictor variables and the response variables. The basic idea is to compare the predictions of the model when the ith case is and is not included in the calculations.
In particular, Cook's Distance for the ith case is defined to be

$$ D_i = \frac{\sum_{j=1}^{n}\left(\hat{y}_j - \hat{y}_{j(i)}\right)^2}{(k+1)\,s^2} \qquad (5.11) $$

where $\hat{y}_{j(i)}$ is the predicted or fitted value for case j using the regression curve obtained when case i is omitted. Large values of $D_i$ indicate that case i has large influence on the regression results, as $\hat{y}_j$ and $\hat{y}_{j(i)}$ then differ substantially for many cases. The deletion of a case with a large value of $D_i$ will alter conclusions substantially. If $D_i$ is not large, regression results will not change dramatically even if the leverage for the ith case is large. In general, if the largest value of $D_i$ is substantially less than 1, then no cases are especially influential. On the other hand, cases with $D_i$ greater than 1 should certainly be investigated further to more carefully assess their influence on the regression analysis results.
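Cook's Distance can likewise be computed by brute force, refitting the model once per deleted case; the sketch assumes the same X and y as before. Statistical packages use an equivalent closed-form shortcut, but the explicit version mirrors the definition in Equation (5.11).

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's D_i = sum_j (yhat_j - yhat_j(i))^2 / ((k+1) * s^2), by explicit refitting."""
    n, p = X.shape                                     # p = k + 1 with an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    s2 = ((y - fitted) @ (y - fitted)) / (n - p)       # estimate of sigma^2
    D = np.empty(n)
    for i in range(n):
        Xi, yi = np.delete(X, i, axis=0), np.delete(y, i)
        bi, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
        diff = fitted - X @ bi                         # yhat_j - yhat_j(i) over all j
        D[i] = (diff @ diff) / (p * s2)
    return D
```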
In Table 5 observation No. 27 has the largest value of Cook's Distance, 0.492, and the largest leverage value, 0.6988. However, neither value is high enough to influence the regression analysis results.
What next, once influential observations have been detected? If the influential observation is due to incorrect recording of the data point, an attempt to correct that observation should be made and the regression analysis rerun. If the data point is known to be faulty but cannot be corrected, then that observation should be excluded from the data set. If it is determined that the influential data point is indeed accurate, it is likely that the proposed regression model is not appropriate for the problem at hand. Perhaps an important predictor variable has been neglected or the form of the regression curve is not adequate.
Transformations
So far a variety of methods for detecting the failure of some of the underlying assumptions of regression analysis have been discussed. Transformations of the data, either of the response and/or the predictor variables, provide a powerful method for turning marginally useful regression models into quite valuable models in which the assumptions are much more credible and hence the predictions much more reliable. Some of the most common and most useful transformations include logarithms, square roots, and reciprocals. Careful consideration of various transformations for data can clarify and simplify the structure of relationships among variables.
Sometimes transformations occur “naturally” in the ordinary reporting of data. As an example, consider a bicycle computer that displays, among other things, the current speed of the bicycle in miles per hour. What is really measured is the time it takes for each revolution of the wheel. Since the exact circumference of the tire is stored in the computer, the reported speed is calculated as a constant divided by the measured time per revolution of the wheel. The speed reported is basically a reciprocal transformation of the measured variable.
As a second example, consider petrol consumption in a car. Usually these values are reported in miles per gallon. However, they are obtained by measuring the fuel consumption on a test drive of fixed distance. Miles per gallon are then calculated by computing the reciprocal of the gallons per mile figure.
A very common transformation is the logarithm transformation. It may be shown that a logarithm transformation will tend to correct the problem of non-constant standard deviation in cases where the standard deviation of the error term is proportional to the mean of y: if the mean of y doubles, then so does the standard deviation of e, and so forth.
Once a regression model has successfully passed the set of diagnostic tests presented in Section 5, one can then apply the model with confidence in the construction of a price index, the new car price index in this paper. The prices that are constructed from the hedonic regression model are referred to as the predicted prices.
An illustrative example follows of how hedonic-based quality adjustment can be applied in a situation where the price of an individual car model (Toyota Corolla 1.3L saloon) was available in January of a particular year but was not available in February of the same year. A replacement car model is available and is close in quality, but has three changes in specification: an increase in cylinder capacity from 1,300cc to 1,400cc, and the inclusion of ABS and air bags.
The first stage of applying the model to the price index is the calculation of predicted old and new prices. The second stage is to adjust the base price to reflect the new features (Model 3).
Change to January due to changes in quality
New base price
The third stage is to compare the current price with the new base price. The resulting value of 97.6 indicates that there has been a reduction in the price index for new cars over the one-month period, January to February.
The unadjusted index is 105.6 (Appendix I), indicating a price increase, which, if it is used, would provide an incorrect measurement of the price changes of new cars over the period.
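The arithmetic of the three stages can be sketched as follows. The predicted prices and the observed February price used here are hypothetical placeholders, since the worked figures behind Model 3 are not reproduced in the text; only the structure of the calculation is intended to match.

```python
# Hedonic quality adjustment when a replacement model carries extra features.
# All prices below are hypothetical placeholders, not the published figures.
jan_base_price      = 13390.0   # January price of the discontinued model
pred_price_old_spec = 12900.0   # hedonic model's predicted price, old specification
pred_price_new_spec = 13700.0   # predicted price, new specification (1400cc, ABS, air bags)
feb_price           = 13950.0   # observed February price of the replacement model

# Stage 2: adjust the January base price upward by the value of the new features.
quality_change = pred_price_new_spec / pred_price_old_spec
new_base_price = jan_base_price * quality_change

# Stage 3: compare the current (February) price with the quality-adjusted base.
index = 100 * feb_price / new_base_price
print(round(new_base_price, 2), round(index, 1))
```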
Acknowledgements: None.
Conflicts of interest: Authors declare that there are no conflicts of interest.
©2015 McCormack. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and build upon your work non-commercially.