Data Analysis for Quasi-Experimental Research


The following module provides an overview of data analysis methods used in quasi-experimental research.

Learning Objectives

  • Describe the difference in data analysis between experimental and quasi-experimental research projects
  • List and briefly describe common data analysis methods used in quasi-experimental research
  • List the factors that should be considered when choosing a statistical analysis

 

Experimental and quasi-experimental research designs are quantitative research studies. Quantitative studies produce data that is quantifiable, objective, and relatively easy to interpret. The data can typically be summarized in a way that supports generalizations to the larger population, and the results can be reproduced. The design of most quantitative studies also helps to ensure that personal bias does not affect the data. Quantitative data can be analyzed in several ways. Experimental designs typically lend themselves to simpler, more straightforward types of statistical analysis. Primarily because they lack randomization, quasi-experimental studies usually require more advanced statistical procedures. Quasi-experimental designs may also rely on surveys, interviews, and observations, which can further complicate the data analysis. This module focuses on the most common statistical procedures used in quasi-experimental analyses.

The first step in any quantitative data analysis is to identify the levels or scales of measurement as nominal, ordinal, interval, or ratio. See the Research Ready: Quantitative Scales of Measurement module for more information on the scales of measurement. This is an important first step because it helps you determine how best to organize the data. The data can typically be entered into a spreadsheet and organized or “coded” in some way that begins to give it meaning.
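For example, here is a minimal sketch of how a small data set might be entered and coded by scale of measurement. It uses Python with pandas, and the variables, values, and coding scheme are illustrative assumptions rather than part of the module:

```python
import pandas as pd

# Hypothetical data set: one row per participant.
df = pd.DataFrame({
    "school_type": ["public", "private", "public", "private"],  # nominal
    "class_rank":  [3, 1, 2, 4],                                # ordinal
    "test_score":  [78, 91, 84, 88],                            # interval/ratio
})

# "Code" the nominal variable as numeric labels so it can be tabulated.
# The numbers are only labels; they carry no quantitative meaning.
df["school_type_code"] = df["school_type"].astype("category").cat.codes
print(df)
```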

The next step is to use descriptive statistics to summarize or “describe” the data. It can be difficult to identify patterns or visualize what the data is showing when just examining raw data. Following is a list of commonly used descriptive statistics (a brief code sketch follows the list):

  • Frequencies – a count of the number of times a particular score or value is found in the data set
  • Percentages – used to express a set of scores or values as a percentage of the whole
  • Mean – numerical average of the scores or values for a particular variable
  • Median – the numerical midpoint of the scores or values; the value at the center of the distribution of the scores
  • Mode – the most common score or value for a particular variable
  • Minimum and maximum values – the lowest and highest values or scores for a variable; the difference between them is the range
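As a rough illustration, the sketch below computes these summaries for a hypothetical set of test scores using Python's built-in statistics module (the scores themselves are invented for the example):

```python
from collections import Counter
import statistics

# Hypothetical test scores for a single variable.
scores = [72, 85, 85, 90, 64, 78, 85, 90]

frequencies = Counter(scores)                                    # count of each score
percentages = {s: 100 * n / len(scores) for s, n in frequencies.items()}

print("Frequencies:", dict(frequencies))
print("Percentages:", percentages)
print("Mean:", statistics.mean(scores))
print("Median:", statistics.median(scores))
print("Mode:", statistics.mode(scores))
print("Minimum/Maximum:", min(scores), max(scores))
print("Range:", max(scores) - min(scores))
```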

 

It is now apparent why determining the scale of measurement is important before beginning to use descriptive statistics. For example, a nominal variable such as gender, where the data is simply coded into categories, has no meaningful mean score. Therefore, you must first use the scale of measurement to determine what type of descriptive statistic is appropriate. The results are then expressed as exact numbers and allow you to begin to give meaning to the data. For some studies, descriptive statistics may be sufficient if you do not need to generalize the results to a larger population. For example, descriptive statistics may be all you need to compare the percentage of teenagers who smoke in private versus public high schools.

However, if you want to use the data to make inferences or predictions about the population, you will need to go a step further and use inferential statistics. Inferential statistics examine the differences and relationships between two or more samples of the population. These analyses are more complex and look for significant differences between variables and between sample groups. Inferential statistics allow you to test hypotheses and generalize results to the population as a whole. Following is a list of basic inferential statistical tests used for group comparison analyses (code sketches follow the list):

  • T-Test: This is the most basic form of group comparison and is used to compare two independent groups of participants and the data collected from those groups. A t-test compares the means of the two data sets to determine whether there is a statistically significant difference. Because the data sets are independent of one another and not related, this is sometimes referred to as the independent-samples t-test. An example would be comparing the test scores of students who took advantage of tutoring services with the test scores of students who did not use tutoring services.
  • Paired-Sample T-Test: This test is used when the data sets are related in some way. This type of statistical test may be applied to look at the pre-test and post-test scores for a group of students taking a physics course. The pre-test and post-test scores are related in that they belong to the same person.
  • Single-Sample T-Test: If the comparison needed is between a data set and a fixed value, this test may be used. For example, final test scores in a chemistry course may be compared to a national average.
  • ANOVA (Analysis of Variance): This is a basic test used when comparing three or more sets of data in a way that would otherwise require several pairwise comparisons. A researcher may need to compare students' test scores from four different elementary schools, where scores from each school must be compared with scores from every other school. The ANOVA will tell you whether the difference is significant, but it does not speculate as to “why.”
  • Regression – This test is used to determine whether one variable is a predictor of another variable. For example, a regression analysis may indicate whether participating in a test preparation program is associated with higher ACT scores for high school students. It is important to note that regression analyses are like correlations in that causation cannot be inferred from them.
  • Multiple Regression Analysis – This test is used when several predictor variables are being examined in relation to a single outcome.
  • Factor Analysis – This method is commonly used when data is collected through a survey that contains a large number of items. Factor analysis reduces the number of variables while detecting possible relationships between those variables.
  • ANCOVA (Analysis of Covariance) – This method increases the strength of a quasi-experimental design. Because the groups are not formed by randomization, the ANCOVA compensates for initial differences between them by making statistical adjustments to the outcome data.
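As a rough sketch of how the group-comparison tests above might be run, the example below uses Python with SciPy. The score lists are hypothetical, and the choice of SciPy is an assumption; any statistical package that offers these tests would work:

```python
from scipy import stats

# Hypothetical test scores.
tutored     = [82, 90, 77, 88, 95, 84]   # students who used tutoring
not_tutored = [75, 80, 72, 85, 78, 81]   # students who did not
pre_test    = [55, 62, 48, 70, 66]       # same students, before the course
post_test   = [68, 75, 60, 82, 74]       # same students, after the course

# Independent-samples t-test: two unrelated groups.
print(stats.ttest_ind(tutored, not_tutored))

# Paired-sample t-test: related measurements from the same participants.
print(stats.ttest_rel(pre_test, post_test))

# Single-sample t-test: one group compared with a fixed value
# (e.g., a national average of 80).
print(stats.ttest_1samp(tutored, popmean=80))

# One-way ANOVA: three or more groups compared at once.
school_a = [78, 85, 80, 90]
school_b = [70, 75, 72, 68]
school_c = [88, 92, 85, 91]
print(stats.f_oneway(school_a, school_b, school_c))
```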
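Regression and ANCOVA are usually fit with a modeling library. The sketch below uses statsmodels formulas on an invented data set; entering the pre-test score as a covariate alongside a group indicator is one common way an ANCOVA-style adjustment is expressed, though the variable names and model are illustrative assumptions rather than a prescription from this module:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a tutoring group and a comparison group, each with a
# pre-test score (covariate) and a post-test score (outcome).
df = pd.DataFrame({
    "group": ["tutor", "tutor", "tutor", "none", "none", "none"],
    "pre":   [55, 62, 48, 58, 61, 50],
    "post":  [75, 82, 66, 68, 72, 60],
})

# Simple regression: does the pre-test score predict the post-test score?
simple = smf.ols("post ~ pre", data=df).fit()
print(simple.summary())

# ANCOVA-style model: compare the groups on the outcome while adjusting for
# the pre-test covariate, compensating for initial differences between groups.
ancova = smf.ols("post ~ C(group) + pre", data=df).fit()
print(ancova.summary())
```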

The type of data analysis will also depend on the number of variables in the study. Studies may be univariate, bivariate, or multivariate in nature. The following SlideShare presentation, Quantitative Data Analysis, explains the use of appropriate statistical analyses in relation to the number of variables being examined.

 

[SlideShare: Quantitative Data Analysis, by Asma Muhamad]

 

The following SlideShare, Experimental and Quasi-Experimental Research, provides an additional overview of the two types of research designs and also includes a basic discussion of statistical analyses commonly used in quasi-experimental research. In addition, the formulation and use of null and alternate hypotheses are discussed in detail. The slides also include links to additional resources that further explain statistical analyses, including the t-test, ANOVA, and multiple regression tests.

 

[SlideShare: Experimental and Quasi-Experimental Research, by Carla Piper]

 

Suggested Readings

Bernard, H. R. (2012). Social research methods: Qualitative and quantitative approaches. Sage.

Brown, L. (2010). Quasi-experimental research. Doing Early Childhood Research: International perspectives on theory and practice, 345.

Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Ravenio Books.

Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.

Isaac, S., & Michael, W. B. (1971). Handbook in research and evaluation.

Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental research (Vol. 19). Sage.

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. Sage.

Rosenthal, R., & Rosnow, R. L. (1991). Essentials of behavioral research: Methods and data analysis. McGraw-Hill.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Wadsworth Cengage Learning.



