Binomial test for a single proportion - overview

This page offers a structured, side-by-side overview of two selected methods: the binomial test for a single proportion and two way ANOVA.

Binomial test for a single proportion
Two way ANOVA

Independent/grouping variables
Binomial test: None
Two way ANOVA: Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)

Dependent variable
Binomial test: One categorical with 2 independent groups
Two way ANOVA: One quantitative of interval or ratio level

Null hypothesis
Binomial test:
H0: $\pi = \pi_0$

Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.
Two way ANOVA ($F$ tests):
  • H0 for main and interaction effects together (model): no main effects and no interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
As in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.

Alternative hypothesis
Binomial test:
H1 two sided: $\pi \neq \pi_0$
H1 right sided: $\pi > \pi_0$
H1 left sided: $\pi < \pi_0$
Two way ANOVA ($F$ tests):
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B

Assumptions
Binomial test:
  • Sample is a simple random sample from the population. That is, observations are independent of one another
Two way ANOVA:
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)

Test statistic
Binomial test: $X$ = number of successes in the sample
Two way ANOVA:
For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
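
To make the arithmetic concrete, here is a minimal Python sketch that forms these $F$ ratios from a set of made-up sums of squares and degrees of freedom for a $2 \times 3$ design (all numbers are hypothetical, for illustration only).

```python
# Hypothetical ANOVA decomposition for a 2 x 3 design (I = 2, J = 3), N = 60.
# All sums of squares below are made-up numbers, used only to show the arithmetic.
ss_A, df_A = 80.0, 1            # df A = I - 1
ss_B, df_B = 120.0, 2           # df B = J - 1
ss_int, df_int = 30.0, 2        # df interaction = (I - 1) * (J - 1)
ss_error, df_error = 270.0, 54  # df error = N - I * J

ss_model = ss_A + ss_B + ss_int
df_model = df_A + df_B + df_int

ms_error = ss_error / df_error  # mean square error (a.k.a. MS residual / MS within)

F_model = (ss_model / df_model) / ms_error
F_A = (ss_A / df_A) / ms_error
F_B = (ss_B / df_B) / ms_error
F_int = (ss_int / df_int) / ms_error

print(F_model, F_A, F_B, F_int)
```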

Pooled standard deviation
Binomial test: n.a.
Two way ANOVA:
$ \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $
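
Continuing the same hypothetical numbers as in the sketch above, the pooled standard deviation is simply the square root of the mean square error:

```python
import math

# Hypothetical sum of squares error and degrees of freedom error (as above)
ss_error, df_error = 270.0, 54
s_p = math.sqrt(ss_error / df_error)  # equals sqrt(mean square error)
print(s_p)  # 2.236...
```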

Sampling distribution of $X$ / $F$ if H0 were true
Binomial test:
Binomial($n$, $P$) distribution.

Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis).
Two way ANOVA:
For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
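
Critical values $F^*$ and $p$ values from these $F$ distributions can be looked up with SciPy; a minimal sketch, reusing the hypothetical degrees of freedom and observed $F$ for factor A from the sketch above:

```python
from scipy import stats

alpha = 0.05
F_A, df_A, df_error = 16.0, 1, 54  # hypothetical observed F and degrees of freedom

# Critical value F*: the point with probability alpha in the right tail
F_crit = stats.f.ppf(1 - alpha, dfn=df_A, dfd=df_error)

# Right-tailed p value for the observed F
p_value = stats.f.sf(F_A, dfn=df_A, dfd=df_error)

print(F_crit, p_value)
```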

Significant?
Binomial test (see the code sketch after this row):
Two sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Two way ANOVA:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
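
For the binomial test, the exact $p$ values can be obtained with SciPy's `binomtest` (available in SciPy 1.7 and newer); a minimal sketch with hypothetical counts:

```python
from scipy.stats import binomtest

x = 14      # hypothetical number of successes (e.g., smokers) in the sample
n = 50      # hypothetical total sample size N
pi_0 = 0.2  # population proportion of successes according to H0

# Exact binomial test; 'alternative' can be 'two-sided', 'greater', or 'less'
res_two_sided = binomtest(x, n, p=pi_0, alternative='two-sided')
res_right = binomtest(x, n, p=pi_0, alternative='greater')

print(res_two_sided.pvalue)  # two sided p value
print(res_right.pvalue)      # right sided p value
```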

Effect size
Binomial test: n.a.
Two way ANOVA:
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\eta^2$:
    Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to:
    $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

  • Proportion variance explained $\eta^2_{partial}$: $$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
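
A minimal sketch of these effect size computations, reusing the hypothetical sums of squares from the test statistic sketch above:

```python
# Hypothetical sums of squares and degrees of freedom (same made-up numbers as above)
ss_A, df_A = 80.0, 1
ss_B, df_B = 120.0, 2
ss_int, df_int = 30.0, 2
ss_error, df_error = 270.0, 54

ss_model = ss_A + ss_B + ss_int
ss_total = ss_model + ss_error
ms_error = ss_error / df_error

R2 = ss_model / ss_total                                     # proportion variance explained by the model
eta2_A = ss_A / ss_total                                     # eta squared for factor A
omega2_A = (ss_A - df_A * ms_error) / (ss_total + ms_error)  # omega squared for factor A (balanced design)
eta2_partial_A = ss_A / (ss_A + ss_error)                    # partial eta squared for factor A

print(R2, eta2_A, omega2_A, eta2_partial_A)
```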

ANOVA table
Binomial test: n.a.
Two way ANOVA: [figure: two way ANOVA table]
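
Such a table can be produced in Python with statsmodels; a minimal sketch on a small, made-up balanced data set (the variable names `y`, `A`, and `B` are placeholders, not part of the original page):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical balanced 2 x 3 data set: factor A with 2 groups, factor B with 3 groups,
# and a quantitative outcome y. Purely illustrative numbers.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'A': np.repeat(['male', 'female'], 30),
    'B': np.tile(np.repeat(['low', 'moderate', 'high'], 10), 2),
})
df['y'] = 50 + rng.normal(scale=5, size=len(df))

model = smf.ols('y ~ C(A) * C(B)', data=df).fit()  # main effects A and B plus interaction
anova_table = sm.stats.anova_lm(model, typ=2)      # SS, df, F, and p value per term
print(anova_table)
```

On a balanced design the choice of sums-of-squares type does not matter; with unequal group sizes the types differ, which is the overlap in sums of squares mentioned under Assumptions.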

Equivalent to
Binomial test: n.a.
Two way ANOVA: OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
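
To see this equivalence concretely, one can inspect the design (code) matrix that a formula with two categorical predictors and their interaction generates; a minimal sketch with patsy, the formula machinery used by statsmodels (the data and names are hypothetical):

```python
import pandas as pd
import patsy

# Tiny hypothetical data set with I = 2 levels of A and J = 3 levels of B
df = pd.DataFrame({
    'A': ['male', 'male', 'male', 'female', 'female', 'female'],
    'B': ['low', 'moderate', 'high', 'low', 'moderate', 'high'],
})
design = patsy.dmatrix('C(A) * C(B)', data=df, return_type='dataframe')
# Intercept + (I - 1) + (J - 1) + (I - 1) * (J - 1) = 1 + 1 + 2 + 2 = 6 columns
print(design.columns.tolist())
print(design.shape)
```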

Example context
Binomial test: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$?
Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?

SPSS
Binomial test:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
Two way ANOVA:
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)

Jamovi
Binomial test:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
Two way ANOVA:
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors