Chapter 15 Quantitative Analysis: Inferential Statistics

In this chapter we encounter the idea that theories can never be proven, only disproved. As the chapter illustrates with the example of the sun, we accept that the sun will rise again tomorrow because it has risen every day so far, but that does not mean we can actually prove it as fact. We cannot prove theories because we do not know everything; it may be that we simply have not yet found the evidence that disproves a theory, only the evidence that supports it. This is what inferential statistics is for: it allows us to make predictions, such as that the sun will rise again tomorrow, and to state how confident we are in them.

To make these predictions, we rely on tools such as the p-value and the sampling distribution. The p-value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true; by convention, a result is considered statistically significant if this probability is less than 5% (p < 0.05). The sampling distribution is defined as the theoretical distribution of an infinite number of samples drawn from the population. There will always be some error in our estimates, known as the standard error; if it is small, the sample statistic can be accepted as a good estimate of the population parameter. The range within which our sample estimate is likely to fall is the confidence interval, which tells us how precise that estimate is: if a value falls within the limits of the confidence interval, we know it is plausible. Together, the p-value and the confidence interval tell us how probable our result is and how precise our estimate is.
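As a concrete illustration (not from the chapter), the short Python sketch below estimates a mean from a small made-up sample and reports its standard error, a 95% confidence interval, and a p-value against an assumed null value of 5.0. The data and the null value are invented purely for illustration, and NumPy and SciPy are assumed to be available.

```python
# Minimal sketch (not from the chapter): estimating a population mean from a
# sample, with a standard error, a 95% confidence interval, and a p-value.
# The sample values and the null value of 5.0 are made up for illustration.
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.8, 5.4, 5.0, 4.9, 5.3, 5.2, 4.7, 5.5, 5.0])

mean = sample.mean()
std_err = stats.sem(sample)                      # standard error of the mean

# 95% confidence interval around the sample mean (t distribution)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1,
                                   loc=mean, scale=std_err)

# p-value for the null hypothesis that the population mean is 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

print(f"mean = {mean:.3f}, SE = {std_err:.3f}")
print(f"95% CI = ({ci_low:.3f}, {ci_high:.3f})")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}  (significant if p < 0.05)")
```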

General Linear Model

In this section, we look at various linear models. The general linear model (GLM) is used to represent linear patterns of relationships in observed data.

[Figure: plot of observed data with a fitted regression line]

Most analyses examine the relationship between one independent variable and one dependent variable. For this two-variable case, the GLM determines the slope and intercept using the equation y = β0 + β1 x + ε: an intercept (β0), plus a slope (β1) times the independent variable, plus an error term (ε). A line that describes the relationship between two or more variables is called a regression line, as seen in the figure above. With more than one independent variable, the model generalizes to the equation:

y = β0 + β1 x1 + β2 x2 + β3 x3 + … + βn xn + ε.
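To make the two-variable case concrete, here is a minimal sketch (not from the chapter) that estimates the intercept β0 and slope β1 by ordinary least squares. The x and y values are invented for illustration, and NumPy is assumed to be available.

```python
# Minimal sketch (not from the chapter): fitting the simple linear model
# y = b0 + b1*x + e with ordinary least squares. The x and y values are
# invented for illustration only.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

# Design matrix with a column of ones for the intercept b0
X = np.column_stack([np.ones_like(x), x])

# Least-squares estimates of intercept (b0) and slope (b1)
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - (b0 + b1 * x)                   # the error term e
print(f"intercept b0 = {b0:.3f}, slope b1 = {b1:.3f}")
print(f"residual standard deviation = {residuals.std(ddof=2):.3f}")
```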

Two-Group Comparison

In this section, we looked at the t-test, which examines whether the means of two groups are statistically different from each other, or whether one group has a statistically larger mean than the other. The t-statistic is the difference between the two group means divided by the standard error of that difference:

t = (x̄1 − x̄2) / s(x̄1−x̄2), where x̄1 and x̄2 are the means of the two groups.

The denominator is the standard error of the difference between the two means, which can be calculated using the formula below:

s(x̄1−x̄2) = √(s1²/n1 + s2²/n2), where s1² and s2² are the variances and n1 and n2 are the sample sizes of the two groups.

The degrees of freedom can be calculated using this formula:

df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]

The degrees of freedom are the number of values that are free to vary in the calculation of a statistic; in other words, the number of independent pieces of information that go into estimating it.
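Putting the pieces together, here is a minimal sketch (not from the chapter) of a two-group comparison computed from the formulas above: the difference in means divided by its standard error, with the Welch–Satterthwaite approximation for the degrees of freedom. The group values are invented for illustration, and NumPy and SciPy are assumed to be available.

```python
# Minimal sketch (not from the chapter): a two-sample t-test built from the
# formulas above -- difference in means over its standard error, with the
# Welch-Satterthwaite degrees of freedom. The group values are invented.
import numpy as np
from scipy import stats

group1 = np.array([23.0, 25.0, 21.0, 30.0, 27.0, 26.0])
group2 = np.array([19.0, 22.0, 20.0, 18.0, 24.0, 21.0])

n1, n2 = len(group1), len(group2)
s1, s2 = group1.var(ddof=1), group2.var(ddof=1)   # sample variances

# Standard error of the difference between the two means
se = np.sqrt(s1 / n1 + s2 / n2)

# t-statistic: difference in means over its standard error
t = (group1.mean() - group2.mean()) / se

# Welch-Satterthwaite approximation for the degrees of freedom
df = (s1 / n1 + s2 / n2) ** 2 / (
    (s1 / n1) ** 2 / (n1 - 1) + (s2 / n2) ** 2 / (n2 - 1)
)

p = 2 * stats.t.sf(abs(t), df)                    # two-tailed p-value
print(f"t = {t:.3f}, df = {df:.1f}, p = {p:.3f}")

# Cross-check against SciPy's built-in Welch t-test
print(stats.ttest_ind(group1, group2, equal_var=False))
```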

1 thought on “Chapter 15”

  1. Anastasia Dzura

    Emily – I am in awe of your ability to synthesize this chapter so concisely! Fabulous job on taking a very technically detailed chapter and providing a pragmatic summary. I think Professor Hocutt should provide extra credit for tackling this one! My primary takeaway is that if one is conducting quantitative research, there are a number of options for analyzing the data robustly and credibly with established methods.

    I especially appreciated the inclusion of Popper’s perspective that “theories can never be proven, only disproven.” The philosopher in me would ask, “then why try?”, but pragmatically I believe it’s helpful to understand that there will never be a precise, definitive answer. With that understanding, the techniques presented in this chapter help the researcher eliminate as much bias and error as possible in their analysis.
