C Program Chi Square Test


A chi-square test (also called a chi-squared test) is a common statistical technique used when you have data that consists of counts in categories. For example, you might have counts of the number of HTTP requests a server gets in each hour during a day, or you might have counts of the number of employees in each job category at your company. There are several kinds of chi-square tests. The three most common forms are a test for equal counts, a test for counts given specified probabilities, and a test for independence of two factors.

In this article I'll show you how to write R language programs (technically scripts, because R is interpreted) to perform each of the three basic chi-square tests. A good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo program is named chisqdemo.R and is running in the R Console shell in the RGui program. The R language system is open source, and you can find a simple self-extracting installation executable for Windows systems.

Figure 1. Chi-Square Tests Using R

To execute the demo R program, I first used the setwd (set working directory) command to point to the location of the program.

Then I used the source command to run the program. The first example in the demo tests whether a single dice (OK, I know it's a 'die,' but that just doesn't sound right) is fair or not. You roll the dice 60 times. Because there are six possible results on each roll, if the dice is fair you'd expect to get a count of about 10 for each of the possible results. The demo displays the number of times each result actually occurred; then, behind the scenes, a chi-square test for equal counts is performed.

The results of the test suggest that the dice is not fair. Notice that a six-spot occurred only four times. It's unlikely that you'll be analyzing rolls of a dice at work, but gambling examples are traditional when explaining chi-square tests, and they easily generalize to useful problems.
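
If you want to try the equal-counts test yourself, here is a minimal R sketch. The observed counts below are illustrative, not the demo's actual data, and the script name and folder are hypothetical.

# optionally, run the demo script from the R console (path is illustrative)
# setwd("C:\\ChiSquare")
# source("chisqdemo.R")

# illustrative counts for 60 rolls of a suspect dice
observed <- c(13, 11, 10, 12, 10, 4)
# chisq.test with a single vector of counts tests the null hypothesis of equal counts
chisq.test(observed)
# a small p-value (say, below 0.05) would suggest the dice is not fair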

Introduction

This page shows how to perform a number of statistical tests using SPSS. Each section gives a brief description of the aim of the statistical test, when it is used, and an example showing the SPSS commands and SPSS (often abbreviated) output, with a brief interpretation of the output. A companion table gives an overview of when each test is appropriate to use. In deciding which test is appropriate, it is important to consider the type of variables that you have (i.e., whether your variables are categorical, ordinal or interval, and whether they are normally distributed).

About the hsb data file

Most of the examples in this page use a data file called hsb2, high school and beyond. This data file contains 200 observations from a sample of high school students, with demographic information about the students such as their gender (female), socio-economic status (ses) and ethnic background (race). It also contains a number of scores on standardized tests, including tests of reading (read), writing (write), mathematics (math) and social studies (socst).

One sample t-test

A one sample t-test allows us to test whether a sample mean (of a normally distributed interval variable) significantly differs from a hypothesized value. For example, using the hsb2 data file, say we wish to test whether the average writing score (write) differs significantly from 50.

We can do this as shown below.

t-test /testval = 50 /variable = write.

The mean of the variable write for this particular sample of students is 52.775, which is statistically significantly different from the test value of 50. We would conclude that this group of students has a significantly higher mean on the writing test than 50.

One sample median test

A one sample median test allows us to test whether a sample median differs significantly from a hypothesized value. We will use the same variable, write, as we did in the example above, but we do not need to assume that it is interval and normally distributed (we only need to assume that write is an ordinal variable).

nptests /onesample test (write) wilcoxon(testvalue = 50).

Binomial test

A one sample binomial test allows us to test whether the proportion of successes on a two-level categorical dependent variable significantly differs from a hypothesized value. For example, using the hsb2 data file, say we wish to test whether the proportion of females (female) differs significantly from 50%, i.e., from .5.

We can do this as shown below.

npar tests /binomial (.5) = female.

The results indicate that there is no statistically significant difference (p = .229). In other words, the proportion of females in this sample does not significantly differ from the hypothesized value of 50%.
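
For readers who want to try the same ideas in R (the language used for the demo at the top of this article), here is a minimal sketch of the one-sample t-test, the one-sample Wilcoxon (median) test and the binomial test. It assumes the hsb2 data has already been read into a data frame named hsb2 (a name chosen here for illustration) and that female is coded 0/1.

# one-sample t-test: does the mean of write differ from 50?
t.test(hsb2$write, mu = 50)

# one-sample Wilcoxon signed-rank test: does the median of write differ from 50?
wilcox.test(hsb2$write, mu = 50)

# binomial test: does the proportion of females differ from 0.5?
binom.test(sum(hsb2$female == 1), nrow(hsb2), p = 0.5)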

Chi-square goodness of fit

A chi-square goodness of fit test allows us to test whether the observed proportions for a categorical variable differ from hypothesized proportions. For example, let's suppose that we believe that the general population consists of 10% Hispanic, 10% Asian, 10% African American and 70% White folks. We want to test whether the observed proportions from our sample differ significantly from these hypothesized proportions.

npar test /chisquare = race /expected = 10 10 10 70.

These results show that the racial composition in our sample does not differ significantly from the hypothesized values that we supplied (chi-square with three degrees of freedom = 5.029, p = .170).
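
A rough R equivalent of the goodness-of-fit test, again assuming the hypothetical hsb2 data frame; note that the hypothesized proportions must be given in the same order as the levels of race, which is assumed here to be Hispanic, Asian, African American, White.

# observed counts for each racial category
obs <- table(hsb2$race)
# chi-square goodness of fit against the hypothesized proportions
chisq.test(obs, p = c(0.10, 0.10, 0.10, 0.70))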

Two independent samples t-test

An independent samples t-test is used when you want to compare the means of a normally distributed interval dependent variable for two independent groups.

For example, using the hsb2 data file, say we wish to test whether the mean for write is the same for males and females.

t-test groups = female(0 1) /variables = write.

Because the standard deviations for the two groups are similar (10.3 and 8.1), we will use the "equal variances assumed" test. The results indicate that there is a statistically significant difference between the mean writing score for males and females (t = -3.734, p = .000).

In other words, females have a statistically significantly higher mean score on writing (54.99) than males (50.12).
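
A minimal R sketch of the independent samples t-test, assuming the hypothetical hsb2 data frame; var.equal = TRUE mirrors the "equal variances assumed" line of the SPSS output.

# independent samples t-test of write by gender, assuming equal variances
t.test(write ~ female, data = hsb2, var.equal = TRUE)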

Wilcoxon-Mann-Whitney test

The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and can be used when you do not assume that the dependent variable is a normally distributed interval variable (you only assume that the variable is at least ordinal). You will notice that the SPSS syntax for the Wilcoxon-Mann-Whitney test is almost identical to that of the independent samples t-test. We will use the same data file (the hsb2 data file) and the same variables in this example as we did above, and we will not assume that write, our dependent variable, is normally distributed.

npar test /m-w = write by female(0 1).

The results suggest that there is a statistically significant difference between the underlying distributions of the write scores of males and the write scores of females (z = -3.329, p = 0.001).
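
The corresponding R sketch, using the same hypothetical hsb2 data frame:

# Wilcoxon-Mann-Whitney test of write by gender
wilcox.test(write ~ female, data = hsb2)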

Chi-square test

A chi-square test is used when you want to see if there is a relationship between two categorical variables. In SPSS, the chisq option is used on the statistics subcommand of the crosstabs command to obtain the test statistic and its associated p-value. Using the hsb2 data file, let's see if there is a relationship between the type of school attended (schtyp) and students' gender (female). Remember that the chi-square test assumes that the expected value for each cell is five or higher. This assumption is easily met in the examples below. However, if this assumption is not met in your data, please see the section on Fisher's exact test below.

crosstabs /tables = schtyp by female /statistic = chisq.

These results indicate that there is no statistically significant relationship between the type of school attended and gender (chi-square with one degree of freedom = 0.047, p = 0.828). Let's look at another example, this time looking at the relationship between gender (female) and socio-economic status (ses). The point of this example is that one (or both) variables may have more than two levels, and that the variables do not have to have the same number of levels.


In this example, female has two levels (male and female) and ses has three levels (low, medium and high).

crosstabs /tables = female by ses /statistic = chisq.

Again we find that there is no statistically significant relationship between the variables (chi-square with two degrees of freedom = 4.577, p = 0.101).
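
In R, both crosstab examples reduce to chisq.test on a contingency table (the hypothetical hsb2 data frame again). Here correct = FALSE requests the uncorrected Pearson statistic, since R otherwise applies a continuity correction to 2 x 2 tables.

# chi-square test of independence: school type by gender (2 x 2)
chisq.test(table(hsb2$schtyp, hsb2$female), correct = FALSE)

# chi-square test of independence: gender by ses (2 x 3)
chisq.test(table(hsb2$female, hsb2$ses))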

Fisher's exact test

Fisher's exact test is used when you want to conduct a chi-square test but one or more of your cells has an expected frequency of five or less. Remember that the chi-square test assumes that each cell has an expected frequency of five or more, but Fisher's exact test has no such assumption and can be used regardless of how small the expected frequency is. In SPSS, unless you have the SPSS Exact Test Module, you can only perform a Fisher's exact test on a 2×2 table, and those results are presented by default. Please see the results from the chi-square example above.
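
In R, fisher.test handles tables larger than 2×2 as well. A sketch, assuming the same hypothetical hsb2 data frame:

# Fisher's exact test on the school type by gender table
fisher.test(table(hsb2$schtyp, hsb2$female))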

One-way ANOVA

A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable, and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable. For example, using the hsb2 data file, say we wish to test whether the mean of write differs between the three program types (prog). The command for this test would be:

oneway write by prog.

The mean of the dependent variable differs significantly among the levels of program type. However, we do not know if the difference is between only two of the levels or all three of the levels.

(The F test for the Model is the same as the F test for prog because prog was the only variable entered into the model. If other variables had also been entered, the F test for the Model would have been different from prog.) To see the mean of write for each level of program type, use:

means tables = write by prog.

From this we can see that the students in the academic program have the highest mean writing score, while students in the vocational program have the lowest.

Kruskal Wallis test

The Kruskal Wallis test is used when you have one independent variable with two or more levels and an ordinal dependent variable. In other words, it is the non-parametric version of ANOVA and a generalized form of the Mann-Whitney test, since it permits two or more groups. We will use the same data file (the hsb2 data file) and the same variables as in the example above, but we will not assume that write is a normally distributed interval variable.

npar tests /k-w = write by prog (1,3).

If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly different value of chi-squared. With or without ties, the results indicate that there is a statistically significant difference among the three types of programs.
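
Rough R equivalents of the one-way ANOVA and the Kruskal-Wallis test, assuming the hypothetical hsb2 data frame and that prog is coded 1-3:

# one-way ANOVA: does the mean of write differ across program types?
summary(aov(write ~ factor(prog), data = hsb2))

# group means of write by program type
aggregate(write ~ prog, data = hsb2, FUN = mean)

# Kruskal-Wallis test: the non-parametric counterpart
kruskal.test(write ~ factor(prog), data = hsb2)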

Paired t-test

A paired (samples) t-test is used when you have two related observations (i.e., two observations per subject) and you want to see if the means of these two normally distributed interval variables differ from one another. For example, using the hsb2 data file, we will test whether the mean of read is equal to the mean of write.

t-test pairs = read with write (paired).

These results indicate that the mean of read is not statistically significantly different from the mean of write (t = -0.867, p = 0.387).
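
A paired t-test in R, using the same hypothetical hsb2 data frame:

# paired t-test: is the mean of read equal to the mean of write?
t.test(hsb2$read, hsb2$write, paired = TRUE)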

Wilcoxon signed rank sum test

The Wilcoxon signed rank sum test is the non-parametric version of a paired samples t-test. You use the Wilcoxon signed rank sum test when you do not wish to assume that the difference between the two variables is interval and normally distributed (but you do assume the difference is ordinal). We will use the same example as above, but we will not assume that the difference between read and write is interval and normally distributed.

npar test /wilcoxon = write with read (paired).

The results suggest that there is not a statistically significant difference between read and write. If you believe the differences between read and write are not ordinal but can merely be classified as positive and negative, then you may want to consider a sign test in lieu of the signed rank test.

Again, we will use the same variables in this example and assume that this difference is not ordinal.

npar test /sign = read with write (paired).

We conclude that no statistically significant difference was found (p = .556).
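
R sketches of the signed rank test and the sign test (the sign test is just a binomial test on the signs of the paired differences), assuming the hypothetical hsb2 data frame:

# Wilcoxon signed rank test on the paired scores
wilcox.test(hsb2$write, hsb2$read, paired = TRUE)

# sign test: count positive differences among the non-zero differences
d <- hsb2$read - hsb2$write
binom.test(sum(d > 0), sum(d != 0), p = 0.5)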

McNemar test

You would perform McNemar's test if you were interested in the marginal frequencies of two binary outcomes. These binary outcomes may be the same outcome variable on matched pairs (like a case-control study) or two outcome variables from a single group. Continuing with the hsb2 dataset used in several above examples, let us create two binary outcomes in our dataset: himath and hiread. These outcomes can be considered in a two-way contingency table. The null hypothesis is that the proportion of students in the himath group is the same as the proportion of students in the hiread group (i.e., that the contingency table is symmetric).

compute himath = (math>60).
compute hiread = (read>60).
execute.
crosstabs /tables=himath BY hiread /statistic=mcnemar /cells=count.

McNemar's chi-square statistic suggests that there is not a statistically significant difference in the proportion of students in the himath group and the proportion of students in the hiread group.
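
A minimal R sketch of the same McNemar test, assuming the hypothetical hsb2 data frame and the same cutoff of 60:

# binary indicators for high math and high reading scores
himath <- as.numeric(hsb2$math > 60)
hiread <- as.numeric(hsb2$read > 60)
# McNemar's test on the 2 x 2 table of the two indicators
mcnemar.test(table(himath, hiread))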

One-way repeated measures ANOVA

You would perform a one-way repeated measures analysis of variance if you had one categorical independent variable and a normally distributed interval dependent variable that was repeated at least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. The test evaluates whether the mean of the dependent variable differs by the categorical variable. We have an example data set, used in Kirk's book Experimental Design, in which y is the dependent variable, a is the repeated measure and s is the variable that indicates the subject number.

glm y1 y2 y3 y4 /wsfactor a(4).

You will notice that this output gives four different p-values. The output labeled "sphericity assumed" is the p-value (0.000) that you would get if you assumed compound symmetry in the variance-covariance matrix.

Because that assumption is often not valid, the three other p-values offer various corrections (the Huynh-Feldt, H-F, Greenhouse-Geisser, G-G, and lower-bound corrections). No matter which p-value you use, our results indicate that we have a statistically significant effect of a at the .05 level.

Repeated measures logistic regression

If you have a binary outcome measured repeatedly for each subject and you wish to run a logistic regression that accounts for the effect of multiple measures from single subjects, you can perform a repeated measures logistic regression. In SPSS, this can be done using the GENLIN command, indicating binomial as the probability distribution and logit as the link function to be used in the model. The data file contains 3 pulse measurements from each of 30 people assigned to 2 different diet regimens and 3 different exercise regimens. If we define a "high" pulse as being over 100, we can then predict the probability of a high pulse using diet regimen.

GET FILE='C:mydata'.
GENLIN highpulse (REFERENCE=LAST) BY diet (order = DESCENDING)
/MODEL diet DISTRIBUTION=BINOMIAL LINK=LOGIT
/REPEATED SUBJECT=id CORRTYPE=EXCHANGEABLE.

These results indicate that diet is not statistically significant (Wald Chi-Square = 1.562, p = 0.211).
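
For completeness, rough R sketches of these two analyses. The data frame names (kirk_long, pulse_long), their long-format layout and the variable codings are assumptions, and the GEE example relies on the geepack package rather than anything built into base R.

# one-way repeated measures ANOVA: y measured at four levels of a for each subject s
summary(aov(y ~ factor(a) + Error(factor(s)/factor(a)), data = kirk_long))

# repeated measures logistic regression via GEE (requires the geepack package)
library(geepack)
geeglm(highpulse ~ diet, id = id, data = pulse_long,
       family = binomial, corstr = "exchangeable")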


Factorial ANOVA

A factorial ANOVA has two or more categorical independent variables (either with or without interactions) and a single normally distributed interval dependent variable. For example, using the hsb2 data file, we will look at writing scores (write) as the dependent variable and gender (female) and socio-economic status (ses) as independent variables, and we will include an interaction of female by ses.


Note that in SPSS you do not need to have the interaction term(s) in your data set. Rather, you can have SPSS create it/them temporarily by placing an asterisk between the variables that will make up the interaction term(s).


glm write by female ses.

These results indicate that the overall model is statistically significant (F = 5.666, p = 0.00). The variables female and ses are also statistically significant (F = 16.595, p = 0.000 and F = 6.611, p = 0.002, respectively). However, the interaction between female and ses is not statistically significant (F = 0.133, p = 0.875).
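
A rough R equivalent of the factorial ANOVA, assuming the hypothetical hsb2 data frame; the asterisk in the formula expands to both main effects plus the interaction.

# factorial ANOVA: write by gender, ses and their interaction
summary(aov(write ~ factor(female) * factor(ses), data = hsb2))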

Friedman test

You perform a Friedman test when you have one within-subjects independent variable with two or more levels and a dependent variable that is not interval and normally distributed (but is at least ordinal). We will use this test to determine if there is a difference in the reading, writing and math scores. The null hypothesis in this test is that the distributions of the ranks of each type of score (i.e., reading, writing and math) are the same. To conduct a Friedman test, the data need to be in a long format. SPSS handles this for you, but in other statistical packages you will have to reshape the data before you can conduct this test.

npar tests /friedman = read write math.

Friedman's chi-square has a value of 0.645 and a p-value of 0.724 and is not statistically significant. Hence, there is no evidence that the distributions of the three types of scores are different.
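
In R, friedman.test accepts a matrix with one row per subject and one column per repeated measure, so no reshaping is needed (the hypothetical hsb2 data frame again):

# Friedman test across the three repeated scores
friedman.test(as.matrix(hsb2[, c("read", "write", "math")]))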

Ordered logistic regression

Ordered logistic regression is used when the dependent variable is ordered, but not continuous. For example, using the hsb2 data file, we will create an ordered variable called write3. This variable will have the values 1, 2 and 3, indicating a low, medium or high writing score. We do not generally recommend categorizing a continuous variable in this way; we are simply creating a variable to use for this example. We will use gender (female), reading score (read) and social studies score (socst) as predictor variables in this model. We will use a logit link, and on the print subcommand we have requested the parameter estimates, the (model) summary statistics and the test of the parallel lines assumption.

if write ge 30 and write le 48 write3 = 1.
if write ge 49 and write le 57 write3 = 2.
if write ge 58 and write le 70 write3 = 3.
execute.
plum write3 with female read socst /link = logit /print = parameter summary tparallel.

The results indicate that the overall model is statistically significant.
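
A rough R counterpart using polr from the MASS package; the cut points below mirror the recode above, and hsb2 is still the hypothetical data frame name.

library(MASS)
# recode write into an ordered three-level factor using the same cut points
hsb2$write3 <- cut(hsb2$write, breaks = c(29, 48, 57, 70),
                   labels = c("low", "medium", "high"), ordered_result = TRUE)
# proportional odds (ordered logistic) regression with a logit link
summary(polr(write3 ~ female + read + socst, data = hsb2, Hess = TRUE))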