The dummy variable analysis of variance technique is an alternative approach to the non-parametric Friedman two-way analysis of variance test by ranks, used to analyze sample data appropriate for parametric statistics in two-factor random-effects or mixed-effects analysis of variance models with only one replication or observation per treatment combination.1,2
To develop a non-parametric alternative method for the analysis of matched samples appropriate for use with two-factor random- and mixed-effects analysis of variance models with only one observation per cell or treatment combination, suppose that a researcher has collected a random sample of 'a' observations randomly drawn from a population A of subjects or blocks of subjects, exposed to or observed at some 'c' time periods, points in space, experimental conditions, tests, or treatments that are either fixed or randomly drawn from a population B of experimental conditions, points in time, tests, or experiments, the data comprising numerical measurements.
The proposed method
Let $y_{ij}$ be the observation drawn from population A on the $i$th subject or block of subjects exposed to or observed at the $j$th level of factor B, that is the $j$th treatment or time period, for $i=1,2,\dots,a$; $j=1,2,\dots,c$.
Now, to set up a dummy variable multiple regression model for use with a two-factor analysis of variance problem, we as usual represent each factor, or so-called parent independent variable, with one fewer dummy variable of 1s and 0s than the number of its categories or levels.2 Thus factor A, namely subject or block of subjects, with $a$ levels is represented by $a-1$ dummy variables of 1s and 0s, while factor B with $c$ levels is represented by $c-1$ dummy variables of 1s and 0s.
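This drop-one coding is, for instance, what pandas produces with `drop_first=True`; the following minimal sketch, using hypothetical data (the layout and column names are ours, not the paper's), illustrates the $a-1$ and $c-1$ dummy columns:

```python
import pandas as pd

# Hypothetical layout: a = 3 subjects (factor A), c = 4 treatments (factor B),
# one observation per subject-by-treatment cell.
df = pd.DataFrame({
    "subject":   [s for s in range(1, 4) for _ in range(4)],
    "treatment": [t for _ in range(3) for t in range(1, 5)],
})

# Each factor is represented by one fewer dummy variable than its number of
# levels: drop_first=True omits the first level, leaving a-1 = 2 subject
# dummies and c-1 = 3 treatment dummies of 1s and 0s.
dummies = pd.get_dummies(df, columns=["subject", "treatment"],
                         drop_first=True, dtype=int)
print(dummies.head())
```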
Hence we may let

$$x_{j}=\begin{cases}1,&\text{if the observation is on the }j\text{th subject or block of subjects (level }j\text{ of factor A)}\\0,&\text{otherwise}\end{cases}\qquad j=1,2,\dots,a-1 \tag{1}$$

$$z_{k}=\begin{cases}1,&\text{if the observation is at the }k\text{th level of factor B (treatment }k)\\0,&\text{otherwise}\end{cases}\qquad k=1,2,\dots,c-1 \tag{2}$$
Then the resulting dummy variable multiple regression model fitting or regressing the dependent or criterion variable $y$ on the dummy variables representing factors A (subject or block of subjects) and B (treatment) is

$$y_{i}=\beta_{0}+\beta_{1}x_{i1}+\cdots+\beta_{a-1}x_{i,a-1}+\beta_{a}z_{i1}+\cdots+\beta_{a+c-2}z_{i,c-1}+e_{i} \tag{3}$$
for the $n=a\cdot c$ sample observations, where $y_{i}$ is the $i$th response or observation on the criterion or dependent variable; the $x_{ij}$ and $z_{ik}$ are dummy variables of 1s and 0s representing the levels of factors A and B; the $\beta$s are partial regression coefficients; and the $e_{i}$ are error terms, with $E(e_{i})=0$ for $i=1,2,\dots,n$
. Note that since there is only one observation per row-by-column, that is factor A (subject or block of subjects) by factor B (treatment), combination, in order to have an estimate of the error sum of squares for the regression model, and hence be able to test the desired hypotheses, it is necessary to assume that there are no factor A by factor B interactions, or that such interactions have been removed by an appropriate data transformation. Also note that an advantage of the present method over the extended median test for dependent or matched samples, and also over the Friedman two-way analysis of variance test by ranks, is that the problem of tied observations within subjects or blocks of subjects does not arise; hence, unlike in the other two non-parametric methods under reference, there is no need to find ways to adjust for or break ties between scores within blocks of subjects.3 The expected or mean value of the criterion variable is, from equation 3,
$$E(y_{i})=\beta_{0}+\beta_{1}x_{i1}+\cdots+\beta_{a-1}x_{i,a-1}+\beta_{a}z_{i1}+\cdots+\beta_{a+c-2}z_{i,c-1} \tag{4}$$
To find the expected or mean effect of any of the factors or parent independent variables, we set all the dummy variables representing that factor equal to 1 and all the other dummy variables in equation 4 equal to 0. Thus, for example, the expected or mean effect or value of factor A (subject or block of subjects) on the dependent variable is obtained by setting $x_{ij}=1$, for $j=1,2,\dots,a-1$, and all the $z_{ik}=0$ in equation 4.
Similarly, the expected or mean value of factor B (treatment) is obtained by setting $z_{ik}=1$, for $k=1,2,\dots,c-1$, and all the $x_{ij}=0$ in equation 4, thereby obtaining

$$E(y_{i})=\beta_{0}+\beta_{a}z_{i1}+\beta_{a+1}z_{i2}+\cdots+\beta_{a+c-2}z_{i,c-1} \tag{5}$$
Now, the dummy variable multiple regression model of equation 3 can equivalently be expressed in matrix form as

$$\mathbf{y}=X\boldsymbol{\beta}+\mathbf{e} \tag{6}$$
where $\mathbf{y}$ is an $n\times 1$ column vector of observations or scores on the dependent or criterion variable; $X$ is an $n\times r$ design matrix of $r$ dummy variables of 1s and 0s; $\boldsymbol{\beta}$ is an $r\times 1$ column vector of partial regression coefficients; and $\mathbf{e}$ is an $n\times 1$ column vector of error terms, with $E(\mathbf{e})=\mathbf{0}$, where $n=a\cdot c$ is the number of observations and $r=(a-1)+(c-1)=a+c-2$ is the number of dummy variables of 1s and 0s included in the regression model.
Similarly, the expected value of $\mathbf{y}$ is, from equation 4,

$$E(\mathbf{y})=X\boldsymbol{\beta} \tag{7}$$
Application of the usual method of least squares to either equation 3 or equation 6 yields an unbiased estimate of the regression parameter $\boldsymbol{\beta}$ as

$$\hat{\boldsymbol{\beta}}=(X'X)^{-1}X'\mathbf{y} \tag{8}$$
where $(X'X)^{-1}$ is the inverse of the non-singular matrix $X'X$.
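In code, the estimator of equation 8 is a single solve of the normal equations. A minimal NumPy sketch (the name `ols_beta` is ours; $X$ and $\mathbf{y}$ are assumed to be NumPy arrays):

```python
import numpy as np

def ols_beta(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares estimate of equation 8: beta_hat = (X'X)^{-1} X'y."""
    # X'X is non-singular under the drop-one dummy coding described above,
    # so the normal equations can be solved directly.
    return np.linalg.solve(X.T @ X, X.T @ y)
```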
A hypothesis that is usually of research interest is whether the regression model of equation 3 or 6 fits, or equivalently whether the independent variables or factors have any effect on the dependent or criterion variable; that is, whether the partial regression coefficients are all equal to zero. Stated symbolically, we have the null hypothesis
$$H_{0}:\boldsymbol{\beta}=\mathbf{0}\quad\text{versus}\quad H_{1}:\boldsymbol{\beta}\neq\mathbf{0} \tag{9}$$
This null hypothesis is tested using the usual F-test, presented in an analysis of variance table, where the total sum of squares is calculated in the usual way as
$$SST=\sum_{i=1}^{n}\left(y_{i}-\bar{y}\right)^{2}=\mathbf{y}'\mathbf{y}-n\bar{y}^{2} \tag{10}$$
with $n-1=a\cdot c-1$ degrees of freedom, where $\bar{y}$ is the mean value of the dependent variable.
Similarly, the treatment sum of squares in analysis of variance parlance, which is the same as the regression sum of squares SSR in regression models, is calculated as

$$SSR=\hat{\boldsymbol{\beta}}'X'\mathbf{y}-n\bar{y}^{2} \tag{11}$$
with $(a-1)+(c-1)=a+c-2$ degrees of freedom. The error sum of squares SSE is the difference between the total sum of squares SST and the regression sum of squares SSR; thus

$$SSE=SST-SSR=\mathbf{y}'\mathbf{y}-\hat{\boldsymbol{\beta}}'X'\mathbf{y} \tag{12}$$

with $(n-1)-(a+c-2)=(a-1)(c-1)$ degrees of freedom.
These results are summarized in an analysis of variance table (Table 1).
The null hypothesis $H_{0}$ of equation 9 is tested using the F-ratio of Table 1. The null hypothesis is rejected at a specified $\alpha$ level of significance if the calculated F-ratio is greater than the tabulated or critical F-ratio; otherwise the null hypothesis $H_{0}$ is accepted.
If the model fits, that is, if not all the elements of $\boldsymbol{\beta}$ are equal to zero, so that the null hypothesis $H_{0}$ of equation 9 is rejected, then one may proceed to test further hypotheses concerning factor-level effects; that is, one may test the null hypotheses that factors A (subject or block of subjects) and B (treatment) separately have no effects on the dependent or criterion variable. In other words, the null hypotheses
$$H_{0}:\boldsymbol{\beta}_{A}=\mathbf{0} \tag{13}$$

$$H_{0}:\boldsymbol{\beta}_{B}=\mathbf{0} \tag{14}$$
where $\boldsymbol{\beta}_{A}$ and $\boldsymbol{\beta}_{B}$ are respectively the $(a-1)\times 1$ and $(c-1)\times 1$ vectors of partial regression coefficients or effects of factors A (subject or block of subjects) and B (treatment) on the criterion or dependent variable. However, the null hypothesis usually of greater interest here is that of equation 14, namely that treatments, points in time or space, tests, or experiments do not have differential effects on subjects.
| Source of variation | Sum of squares | Degrees of freedom | Mean sum of squares | F-ratio |
|---|---|---|---|---|
| Regression (treatment) | SSR | a+c−2 | MSR = SSR/(a+c−2) | F = MSR/MSE |
| Error | SSE | (a−1)(c−1) | MSE = SSE/((a−1)(c−1)) | |
| Total | SST | a·c−1 | | |

Table 1 Two-factor analysis of variance table for the full model of equation 6
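The quantities of Table 1 follow directly from equations 10 to 12. A sketch, assuming $X$ includes the intercept column and the $a+c-2$ dummies as described above (the name `anova_full` is ours; scipy is used only for the tail probability):

```python
import numpy as np
from scipy import stats

def anova_full(X: np.ndarray, y: np.ndarray, a: int, c: int):
    """Overall F-test of the null hypothesis of equation 9 (Table 1)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    beta = np.linalg.solve(X.T @ X, X.T @ y)        # equation 8
    sst = y @ y - n * y.mean() ** 2                 # equation 10, ac - 1 df
    ssr = beta @ (X.T @ y) - n * y.mean() ** 2      # equation 11, a + c - 2 df
    sse = sst - ssr                                 # equation 12, (a-1)(c-1) df
    df_r, df_e = a + c - 2, (a - 1) * (c - 1)
    f_ratio = (ssr / df_r) / (sse / df_e)
    p_value = stats.f.sf(f_ratio, df_r, df_e)       # upper-tail probability
    return ssr, sse, sst, f_ratio, p_value
```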
Now, to obtain appropriate test statistics for use in testing these null hypotheses, we apply the extra sum of squares principle to partition the treatment or regression sum of squares SSR into its two component parts, namely the sum of squares due to factor A (subject or block of subjects), SSA, and the sum of squares due to factor B (treatment), SSB, to enable the calculation of the appropriate F-ratios.
Now, the $n\times r$ matrix $X$ for the full model of equation 6 can be partitioned into two component sub-matrices, namely $X_{A}$, an $n\times(a-1)$ design matrix of the $a-1$ dummy variables of 1s and 0s representing the included $a-1$ levels of factor A (subject or block of subjects), and $X_{B}$, an $n\times(c-1)$ matrix of the $c-1$ dummy variables of 1s and 0s representing the included $c-1$ levels of factor B (treatment). The estimated partial regression coefficient $\hat{\boldsymbol{\beta}}$, an $r\times 1$ column vector of regression effects from equation 8, can correspondingly be partitioned into $\hat{\boldsymbol{\beta}}_{A}$, an $(a-1)\times 1$ column vector of partial regression coefficients or effects of factor A, and $\hat{\boldsymbol{\beta}}_{B}$, a $(c-1)\times 1$ column vector of the effects of factor B on the dependent variable. Hence the treatment sum of squares, that is the regression sum of squares SSR of equation 11, can be equivalently expressed as
$$SSR=\hat{\boldsymbol{\beta}}'X'\mathbf{y}-n\bar{y}^{2} \tag{15}$$

or equivalently

$$SSR=\hat{\boldsymbol{\beta}}_{A}'X_{A}'\mathbf{y}+\hat{\boldsymbol{\beta}}_{B}'X_{B}'\mathbf{y}-n\bar{y}^{2} \tag{16}$$

which, when interpreted, is the same as the statement

$$SSR=SSA+SSB+C \tag{17}$$
where SSR is the sum of squares of regression for the full model with $r=a+c-2$ degrees of freedom; SSA is the sum of squares due to factor A (subject or block of subjects) with $a-1$ degrees of freedom; SSB is the sum of squares due to factor B (treatment) with $c-1$ degrees of freedom; and $C=n\bar{y}^{2}$ is an additive correction factor due to the mean effect. These sums of squares, namely SSR, SSA and SSB, are obtained by separately fitting the full model of equation 6 with design matrix $X$, and the reduced regression models with design matrices $X_{A}$ and $X_{B}$, on the criterion or dependent variable $\mathbf{y}$.
Now, if the full model of equation 6 fits, that is, if the null hypothesis of equation 9 is rejected, then the additional null hypotheses of equations 13 and 14 may be tested using the extra sum of squares principle.4,5 If we denote the sums of squares due to the full model of equation 6 and due to a reduced model, obtained by fitting the criterion variable $\mathbf{y}$ to either of the reduced design matrices $X_{A}$ or $X_{B}$, by SS(F) and SS(R) respectively, then following the extra sum of squares principle4,5 the extra sum of squares due to a given factor is calculated as
$$ESS=SS(F)-SS(R) \tag{18}$$
with degrees of freedom obtained as the difference between the degrees of freedom of SS(F) and SS(R), that is, as $Edf=df(F)-df(R)$. Thus the extra sums of squares for factors A (subject or block of subjects) and B (treatment) are obtained respectively as
$$ESSA=SSR-SSB;\qquad ESSB=SSR-SSA \tag{19}$$

with $(a+c-2)-(c-1)=a-1$ degrees of freedom and $(a+c-2)-(a-1)=c-1$ degrees of freedom respectively.
Note that since each of the reduced models and the full model has the same total sum of squares SST, the extra sum of squares may alternatively be obtained as the difference between the error sum of squares of each reduced model and the error sum of squares of the full model. In other words, the extra sum of squares is equivalently calculated as

$$ESS=SSE(R)-SSE(F) \tag{20}$$

with degrees of freedom similarly obtained. Thus the extra sums of squares due to factors A (subject or block of subjects) and B (treatment) are alternatively obtained respectively as
$$ESSA=SSEB-SSE;\qquad ESSB=SSEA-SSE \tag{21}$$

with $a-1$ and $c-1$ degrees of freedom respectively, where SSR and SSE are respectively the regression sum of squares and the error sum of squares for the full model, and SSEA and SSEB are respectively the error sums of squares for the reduced models for factors A and B. The null hypotheses of equations 13 and 14 are tested using the F-ratios
$$F_{A}=\frac{MESSA}{MSE} \tag{22}$$

with $a-1$ and $(a-1)(c-1)$ degrees of freedom, where

$$MESSA=\frac{ESSA}{a-1} \tag{23}$$

is the mean extra sum of squares due to factor A (subject or block of subjects), and

$$F_{B}=\frac{MESSB}{MSE} \tag{24}$$

with $c-1$ and $(a-1)(c-1)$ degrees of freedom, where

$$MESSB=\frac{ESSB}{c-1} \tag{25}$$
is the mean extra sum of squares due to factor B (treatment). These results are summarized in Table 2a, which for ease of presentation also includes the sums of squares and other values of Table 1 for the full model.
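Under the same assumptions as the earlier sketches, the F-ratios of equations 22 and 24 can be obtained by fitting the two reduced models and differencing error sums of squares, as in equation 21 (the name `extra_ss_f_ratios` and the argument layout are ours):

```python
import numpy as np

def extra_ss_f_ratios(X, XA, XB, y, a, c):
    """F-ratios of equations 22 and 24 via the extra sum of squares principle.

    X  : full design matrix (intercept, factor A dummies, factor B dummies)
    XA : intercept plus the a-1 factor A dummies only (reduced model for A)
    XB : intercept plus the c-1 factor B dummies only (reduced model for B)
    """
    y = np.asarray(y, dtype=float)

    def sse(M):
        # Error sum of squares y'y - b'M'y for the model with design matrix M.
        b = np.linalg.solve(M.T @ M, M.T @ y)
        return y @ y - b @ (M.T @ y)

    sse_full = sse(X)
    mse = sse_full / ((a - 1) * (c - 1))   # full-model error mean square
    essa = sse(XB) - sse_full              # extra SS due to factor A, eq. 21
    essb = sse(XA) - sse_full              # extra SS due to factor B, eq. 21
    f_a = (essa / (a - 1)) / mse           # equation 22, a-1 and (a-1)(c-1) df
    f_b = (essb / (c - 1)) / mse           # equation 24, c-1 and (a-1)(c-1) df
    return f_a, f_b
```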
If the various F-ratios, and in particular the F-ratios based on the extra sums of squares of Table 2b, indicate that the independent variables or factor levels have differential effects on the response, dependent, or criterion variable, that is, if the null hypotheses of equation 13 or 14 or both are rejected, then one may proceed further to estimate desired factor-level effects and test hypotheses concerning them.
| Source of variation | Sum of squares (SS) | Degrees of freedom (DF) | Mean sum of squares (MS) | F-ratio |
|---|---|---|---|---|
| Full model: Regression | SSR | a+c−2 | MSR = SSR/(a+c−2) | MSR/MSE |
| Full model: Error | SSE | (a−1)(c−1) | MSE = SSE/((a−1)(c−1)) | |
| Factor A (subject or block of subjects): Regression | SSA | a−1 | | |
| Factor A (subject or block of subjects): Error | SSEA | a(c−1) | | |
| Factor B (treatment): Regression | SSB | c−1 | | |
| Factor B (treatment): Error | SSEB | c(a−1) | | |
| Total | SST | a·c−1 | | |

Table 2a Two-factor analysis of variance table showing the sums of squares for the full model and for the reduced models, and other statistics
| Source of variation | Extra sum of squares (ESS = SS(F) − SS(R)) | Degrees of freedom (DF) | Extra mean sum of squares (EMSS) | F-ratio |
|---|---|---|---|---|
| Full model: Regression | SSR | a+c−2 | | |
| Full model: Error | SSE | (a−1)(c−1) | MSE = SSE/((a−1)(c−1)) | |
| Factor A | ESSA = SSR − SSB = SSEB − SSE | a−1 | MESSA = ESSA/(a−1) | F_A = MESSA/MSE |
| Factor B | ESSB = SSR − SSA = SSEA − SSE | c−1 | MESSB = ESSB/(c−1) | F_B = MESSB/MSE |
| Total | SST | a·c−1 | | |

Table 2b Two-factor analysis of variance table for the extra sums of squares due to the reduced models, and other statistics (continuation of Table 2a)
In fact, an additional advantage of using dummy variable regression models in two-factor or multi-factor analysis of variance type problems is that the method more easily enables the separate estimation of the factor-level effects of several factors on a specified dependent or criterion variable. For example, it enables the estimation of the total or absolute effect of a given independent variable, here referred to as the parent independent variable, on the dependent variable; of its partial regression coefficient or so-called direct effect, exerted through its representative dummy variables; and of its indirect effect, exerted through the mediation of the other independent variables in the model.6 The total or absolute effect of a parent independent variable on a dependent variable is estimated as the simple regression coefficient of the dependent variable on that independent variable, represented by codes assigned to its various categories. The direct effect of a parent independent variable on a dependent variable is the weighted sum of the partial regression coefficients or effects of the dummy variables representing that parent independent variable, where the weights are the simple regression coefficients of each representative dummy variable regressed on the parent independent variable represented by codes. The indirect effect of a given parent independent variable on a dependent variable is then simply the difference between its total and direct effects.6
Now, the direct effect or partial regression coefficient of a given parent independent variable on a dependent variable is obtained by taking the partial derivative of the expected value of the corresponding regression model with respect to that parent independent variable. For example, the direct effect of the parent independent variable A on the dependent variable $y$ is obtained from equation 4 as

$$\frac{\partial E(y)}{\partial A}=\sum_{j=1}^{a-1}\beta_{j}\frac{\partial x_{j}}{\partial A} \tag{26}$$

holding constant all other independent variables $z$ in the model different from A.
The weight $\partial x_{j}/\partial A$ is estimated by fitting a simple regression line of the dummy variable $x_{j}$ on its parent independent variable A, represented by codes, and taking the derivative of its expected value with respect to A. Thus, if the expected value of the dummy variable $x_{j}$ regressed on its parent independent variable A is expressed as $E(x_{j})=\alpha_{0j}+\alpha_{j}A$, then the derivative of this expected value with respect to A is

$$\frac{\partial E(x_{j})}{\partial A}=\alpha_{j} \tag{27}$$
Hence, using equation 27 in equation 26 gives the direct effect of the parent independent variable A on the dependent variable $y$ as

$$d_{A}=\sum_{j=1}^{a-1}\alpha_{j}\beta_{j} \tag{28}$$
whose sample estimate, with the $\hat{\beta}_{j}$ obtained from equation 8, is

$$\hat{d}_{A}=\sum_{j=1}^{a-1}\hat{\alpha}_{j}\hat{\beta}_{j} \tag{29}$$
The total or absolute effect of A on $y$ is estimated as the simple regression coefficient or effect of the parent independent variable A, represented by codes, on the dependent variable $y$, that is

$$\hat{t}_{A}=\hat{b}_{yA} \tag{30}$$
where $\hat{b}_{yA}$ is the estimated simple regression coefficient or effect of A on $y$. The indirect effect of A on $y$ is then estimated as the difference between $\hat{t}_{A}$ and $\hat{d}_{A}$, that is, as

$$\hat{I}_{A}=\hat{t}_{A}-\hat{d}_{A} \tag{31}$$
The total, direct and indirect effects of factor B are similarly estimated.
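A minimal sketch of these estimators (equations 27 to 31) in NumPy; the helper names `simple_slope` and `factor_effects` are ours, `codes` is the parent factor represented by numeric codes as described above, and `D` holds its representative dummy columns:

```python
import numpy as np

def simple_slope(x, y):
    """Slope of the simple regression of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

def factor_effects(codes, D, beta_D, y):
    """Total, direct and indirect effects of a parent independent variable.

    codes  : factor levels as numeric codes (length n)
    D      : n x (levels-1) matrix of the factor's dummy columns
    beta_D : partial regression coefficients of those dummies (from eq. 8)
    """
    total = simple_slope(codes, y)                    # equation 30
    weights = [simple_slope(codes, D[:, j])           # equation 27
               for j in range(D.shape[1])]
    direct = float(np.dot(weights, beta_D))           # equations 28 and 29
    indirect = total - direct                         # equation 31
    return total, direct, indirect
```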
Illustrative example
The body weights of a random sample of 10 broilers, here termed "subjects or blocks of subjects" and regarded as factor A with ten levels, each weighed on five types of weighing machine, here termed "treatments" and regarded as factor B with five levels, are shown in Table 3.
To set up a dummy variable regression model of body weight ($y$) regressing on "subject or block of subjects", here termed factor A with ten levels, and type of weighing machine, here termed "treatment" and treated as factor B with five levels, we as usual represent factor A with nine dummy variables of 1s and 0s and factor B with four dummy variables of 1s and 0s, using equations 1 and 2.
The resulting design matrix $X$ for the full model is presented in Table 3, where $x_{1}$ represents level 1, or broiler No. 1, and so on, through $x_{9}$, which represents level 9, or broiler No. 9. Similarly, $z_{1}$ represents weighing machine No. 1 or treatment 1, $z_{2}$ represents weighing machine No. 2 or treatment 2, and so on, until $z_{4}$ represents weighing machine No. 4 or treatment 4.
Using the design matrix $X$ of Table 3 for the full model of equation 6, we obtain the fitted regression equation expressing the dependence of broiler body weight $y$ on broiler (subject), treated as factor A, and type of weighing machine (treatment), treated as factor B, both represented by dummy variables of 1s and 0s.
Now, to estimate the total or absolute effect of type of weighing machine (treatment) B on body weight $y$ of broilers, we regress $y$ on B, represented by codes, to obtain the estimated total effect $\hat{t}_{B}$ of equation 30. The weights $\hat{\alpha}_{k}$ to be applied in equation 29 to determine the direct effect are obtained, as explained above, by taking the derivative with respect to B of the expected value of the simple regression equation expressing the dependence of each dummy variable $z_{k}$ of 1s and 0s on its parent variable B, represented by codes.
Using these values in equation 29, we obtain the partial or so-called direct effect $\hat{d}_{B}$ of type of weighing machine (treatment) B on body weight $y$ of broilers. Hence the corresponding indirect effect is estimated using equation 31 as $\hat{I}_{B}=\hat{t}_{B}-\hat{d}_{B}$.
The total or absolute, direct and indirect effects of the subjects or blocks of subjects, called factor A, are similarly calculated.
For comparative purposes it is instructive to also analyze the data of Example 1 using the Friedman two-way analysis of variance test by ranks.
To do this, we first rank, for each broiler (subject), the body weights obtained using the five weighing machines (treatments), from the smallest, ranked 1, to the largest, ranked 5. All tied body weights for each broiler are, as usual, assigned their mean ranks. The results are presented in Table 4.
Using the ranks shown in Table 4, we calculate the Friedman test statistic as

$$\chi_{r}^{2}=\frac{12}{a\,c(c+1)}\sum_{j=1}^{c}R_{j}^{2}-3a(c+1)=\frac{12}{10(5)(6)}\left(13^{2}+33^{2}+27^{2}+40.5^{2}+36.5^{2}\right)-3(10)(6)=18.38$$

which with $c-1=5-1=4$ degrees of freedom is statistically significant ($P<0.01$), indicating that the weighing machines probably differ in the body weights of broilers obtained using them. This is the same conclusion reached using the present method.
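For comparison, the same test can be run with scipy, whose `friedmanchisquare` ranks the raw body weights internally and applies a correction for ties, so its value can differ slightly from the uncorrected statistic computed above. The lists below are transcribed from Table 3, one per weighing machine, in broiler order 1 to 10:

```python
from scipy.stats import friedmanchisquare

# Body weights from Table 3, one list per weighing machine (treatment).
m1 = [1.9, 1.7, 1.9, 1.8, 1.9, 1.8, 1.8, 1.7, 1.8, 2.0]
m2 = [2.0, 2.0, 2.2, 2.2, 1.8, 2.0, 2.1, 2.1, 1.9, 2.1]
m3 = [2.1, 1.8, 1.9, 2.1, 1.9, 2.1, 1.9, 1.9, 2.0, 2.0]
m4 = [2.1, 2.1, 2.2, 2.0, 2.2, 2.1, 2.2, 1.9, 2.1, 2.1]
m5 = [1.9, 2.0, 2.2, 2.1, 2.1, 2.1, 2.0, 2.1, 2.1, 2.1]

stat, p = friedmanchisquare(m1, m2, m3, m4, m5)
print(f"chi_r^2 = {stat:.2f}, p = {p:.4f}")  # tie-corrected statistic
```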
| S/No (i) | Body weight (y) | x0 | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | z1 | z2 | z3 | z4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.9 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 2 | 2.0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 3 | 2.1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 4 | 2.1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 5 | 1.9 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 1.7 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 7 | 2.0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 8 | 1.8 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 9 | 2.1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 10 | 2.0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 11 | 1.9 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 12 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 13 | 1.9 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 14 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 15 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 16 | 1.8 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 17 | 2.2 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 18 | 2.1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 19 | 2.0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 20 | 2.1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 21 | 1.9 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 22 | 1.8 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 23 | 1.9 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 24 | 2.2 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 25 | 2.1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 26 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 27 | 2.0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 28 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 29 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 30 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 31 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| 32 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 |
| 33 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |
| 34 | 2.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| 35 | 2.0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 36 | 1.7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| 37 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 38 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
| 39 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
| 40 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 41 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
| 42 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 43 | 2.0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 44 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| 45 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 46 | 2.0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 47 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 48 | 2.0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 49 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 50 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

Table 3 Design matrix for the sample data of Example 1 (x0 is the intercept column; broiler No. 10 and weighing machine No. 5 are the reference categories, represented by all-zero dummies)
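Rather than transcribing Table 3 by hand, the same design matrix can be generated programmatically. A minimal sketch, assuming the row order of Table 3 (broilers in blocks of five rows, machines cycling 1 to 5 within each block):

```python
import numpy as np

a, c = 10, 5          # broilers (factor A), weighing machines (factor B)
n = a * c
X = np.zeros((n, 1 + (a - 1) + (c - 1)), dtype=int)
X[:, 0] = 1           # intercept column x0
for row in range(n):
    i, j = divmod(row, c)      # broiler i+1, machine j+1
    if i < a - 1:
        X[row, 1 + i] = 1      # dummies x1..x9 (broiler 10 is the reference)
    if j < c - 1:
        X[row, a + j] = 1      # dummies z1..z4 (machine 5 is the reference)
```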
| Broiler (subject) | Weighing machine (treatment) 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | 1.5 | 3 | 4.5 | 4.5 | 1.5 |
| 2 | 1 | 3.5 | 2 | 5 | 3.5 |
| 3 | 1.5 | 4 | 1.5 | 4 | 4 |
| 4 | 1 | 5 | 3.5 | 2 | 3.5 |
| 5 | 2.5 | 1 | 2.5 | 5 | 4 |
| 6 | 1 | 2 | 4 | 4 | 4 |
| 7 | 1 | 4 | 2 | 5 | 3 |
| 8 | 1 | 4.5 | 2.5 | 2.5 | 4.5 |
| 9 | 1 | 2 | 3 | 4.5 | 4.5 |
| 10 | 1.5 | 4 | 1.5 | 4 | 4 |
| Total | 13 | 33 | 27 | 40.5 | 36.5 |

Table 4 Ranks of the body weights of broilers in Table 3, by weighing machine (treatment)