Two sample t test - equal variances assumed - overview

This page offers a structured overview of the two sample $t$ test (equal variances assumed), presented side by side with the one sample $t$ test for the mean so the two methods can be compared step by step.

Methods compared:
  • Two sample $t$ test - equal variances assumed
  • One sample $t$ test for the mean
Independent/grouping variable
Two sample $t$ test: One categorical with 2 independent groups
One sample $t$ test: None
Dependent variable
Both tests: One quantitative of interval or ratio level
Null hypothesis
Two sample $t$ test: H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.

One sample $t$ test: H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
Alternative hypothesis
Two sample $t$ test:
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$

One sample $t$ test:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
Assumptions
Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
  • The group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

One sample $t$ test:
  • Scores are normally distributed in the population
  • The sample is a simple random sample from the population. That is, observations are independent of one another
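The normality and equal standard deviation assumptions can be inspected informally before running the test. Below is a minimal sketch in Python, assuming two hypothetical NumPy arrays of scores (`group1`, `group2`); the Shapiro-Wilk and Levene checks shown are common diagnostics, not part of the overview above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group1 = rng.normal(loc=50, scale=10, size=25)  # hypothetical scores, group 1
group2 = rng.normal(loc=55, scale=10, size=30)  # hypothetical scores, group 2

# Shapiro-Wilk test of normality, separately within each group
for name, scores in [("group 1", group1), ("group 2", group2)]:
    w, p = stats.shapiro(scores)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# Levene's test for equality of variances (H0: sigma_1 = sigma_2)
w, p = stats.levene(group1, group2)
print(f"Levene: W = {w:.3f}, p = {p:.3f}")
```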
Test statistic
Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.

One sample $t$ test:
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
Pooled standard deviation
Two sample $t$ test: $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
Here $s^2_1$ is the sample variance in group 1 and $s^2_2$ is the sample variance in group 2.
One sample $t$ test: n.a.
Sampling distribution of $t$ if H0 were true
Two sample $t$ test: $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom
One sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom
Significant?
Both tests:
Two sided: reject H0 if the two sided $p$ value is smaller than or equal to the significance level $\alpha$, or equivalently, if the observed $t$ is at least as extreme as the critical value
Right sided: reject H0 if the right sided $p$ value is smaller than or equal to $\alpha$, or equivalently, if the observed $t$ is larger than or equal to the critical value
Left sided: reject H0 if the left sided $p$ value is smaller than or equal to $\alpha$, or equivalently, if the observed $t$ is smaller than or equal to the critical value (which is negative for a left sided test)
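As a minimal sketch of how the last few rows fit together, the Python code below computes the pooled standard deviation, the two sample $t$ statistic, its degrees of freedom and two sided $p$ value, and cross-checks the results against SciPy's built-in tests. The data and the use of NumPy/SciPy are assumptions for illustration, not part of the method itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y1 = rng.normal(50, 10, size=25)   # hypothetical scores, group 1
y2 = rng.normal(55, 10, size=30)   # hypothetical scores, group 2
n1, n2 = len(y1), len(y2)

# Pooled standard deviation s_p
s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / (n1 + n2 - 2))

# Two sample t statistic, degrees of freedom, and two sided p value
t = (y1.mean() - y2.mean()) / (s_p * np.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
p_two_sided = 2 * stats.t.sf(abs(t), df)
print(t, df, p_two_sided)                       # reject H0 if p <= alpha

# Cross-check: Student's t test with equal variances assumed
print(stats.ttest_ind(y1, y2, equal_var=True))

# One sample t test of H0: mu = mu_0 (mu_0 = 50 chosen for illustration), df = n1 - 1
t_one = (y1.mean() - 50) / (y1.std(ddof=1) / np.sqrt(n1))
print(t_one, stats.ttest_1samp(y1, popmean=50))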
$C\%$ confidence interval
Two sample $t$ test, for $\mu_1 - \mu_2$:
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

One sample $t$ test, for $\mu$:
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu$ can also be used as a significance test.
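A sketch of the confidence interval computations, again with hypothetical data; the critical value $t^*$ is obtained from `scipy.stats.t.ppf`, and the quantities mirror the formulas above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y1 = rng.normal(50, 10, size=25)   # hypothetical scores, group 1
y2 = rng.normal(55, 10, size=30)   # hypothetical scores, group 2
n1, n2 = len(y1), len(y2)
df = n1 + n2 - 2
s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / df)

C = 95                                            # confidence level in percent
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, df)   # area C/100 lies between -t* and t*

# C% CI for mu_1 - mu_2 (two sample t test, equal variances assumed)
diff = y1.mean() - y2.mean()
se = s_p * np.sqrt(1 / n1 + 1 / n2)
print("CI for mu_1 - mu_2:", (diff - t_star * se, diff + t_star * se))

# C% CI for mu (one sample t test on group 1, df = n1 - 1)
t_star_1 = stats.t.ppf(1 - (1 - C / 100) / 2, n1 - 1)
se_1 = y1.std(ddof=1) / np.sqrt(n1)
print("CI for mu:", (y1.mean() - t_star_1 * se_1, y1.mean() + t_star_1 * se_1))
```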
Effect size
Two sample $t$ test, Cohen's $d$:
Standardized difference between the mean in group 1 and the mean in group 2: $$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$ Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.

One sample $t$ test, Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$.
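A short sketch of both Cohen's $d$ variants; the arrays and the value $\mu_0 = 50$ are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
y1 = rng.normal(50, 10, size=25)   # hypothetical scores, group 1
y2 = rng.normal(55, 10, size=30)   # hypothetical scores, group 2
n1, n2 = len(y1), len(y2)

# Two sample Cohen's d: standardized mean difference using the pooled SD
s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / (n1 + n2 - 2))
d_two_sample = (y1.mean() - y2.mean()) / s_p

# One sample Cohen's d: standardized distance of the sample mean from mu_0
mu_0 = 50
d_one_sample = (y1.mean() - mu_0) / y1.std(ddof=1)
print(d_two_sample, d_one_sample)
```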
Visual representation
[Figures omitted: two sample $t$ test - equal variances assumed; one sample $t$ test]
Equivalent to
Two sample $t$ test:
One way ANOVA with an independent variable with 2 levels ($I$ = 2):
  • two sided two sample $t$ test is equivalent to the ANOVA $F$ test when $I$ = 2
  • two sample $t$ test is equivalent to the $t$ test for a contrast when $I$ = 2
  • two sample $t$ test is equivalent to the $t$ test for multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
  • two sided two sample $t$ test is equivalent to the $F$ test for the regression model
  • two sample $t$ test is equivalent to the $t$ test for the regression coefficient $\beta_1$

One sample $t$ test: n.a.
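The equivalence with the one way ANOVA $F$ test when $I$ = 2 can be illustrated numerically: the $F$ statistic equals the squared $t$ statistic and the two sided $p$ values coincide. A sketch with hypothetical data, using `scipy.stats.f_oneway` for the ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y1 = rng.normal(50, 10, size=25)   # hypothetical scores, group 1
y2 = rng.normal(55, 10, size=30)   # hypothetical scores, group 2

t, p_t = stats.ttest_ind(y1, y2, equal_var=True)   # Student's two sample t test
f, p_f = stats.f_oneway(y1, y2)                    # one way ANOVA with I = 2 groups

print(np.isclose(t**2, f))    # True: F = t^2 when I = 2
print(np.isclose(p_t, p_f))   # True: identical two sided p values
```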
Example context
Two sample $t$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
One sample $t$ test: Is the average mental health score of office workers different from $\mu_0 = 50$?
SPSS
Two sample $t$ test: Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK

One sample $t$ test: Analyze > Compare Means > One-Sample T Test...
  • Put your variable in the box below Test Variable(s)
  • Fill in the value for $\mu_0$ in the box next to Test Value
Jamovi
Two sample $t$ test: T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Student's (selected by default)
  • Under Hypothesis, select your alternative hypothesis

One sample $t$ test: T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis