Sign test - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison (max. 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

Methods compared:
  • Sign test
  • Binomial test for a single proportion
  • One sample Wilcoxon signed-rank test
  • Two way ANOVA
  • Two sample $t$ test - equal variances assumed
Independent/grouping variable(s)
  • Sign test: 2 paired groups
  • Binomial test for a single proportion: None
  • One sample Wilcoxon signed-rank test: None
  • Two way ANOVA: Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
  • Two sample $t$ test: One categorical with 2 independent groups
Dependent variable
  • Sign test: One of ordinal level
  • Binomial test for a single proportion: One categorical with 2 independent groups
  • One sample Wilcoxon signed-rank test: One of ordinal level
  • Two way ANOVA: One quantitative of interval or ratio level
  • Two sample $t$ test: One quantitative of interval or ratio level
Null hypothesis

Sign test:
  • H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H0: the population median of the difference scores is equal to zero
A difference score is the difference between the first score of a pair and the second score of a pair.

Binomial test for a single proportion:
H0: $\pi = \pi_0$
Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.

One sample Wilcoxon signed-rank test:
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis.

Two way ANOVA ($F$ tests):
  • H0 for main and interaction effects together (model): no main effects and no interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
As in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced material.

Two sample $t$ test:
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis

Sign test:
  • H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
  • H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
  • H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H1 two sided: the population median of the difference scores is different from zero
  • H1 right sided: the population median of the difference scores is larger than zero
  • H1 left sided: the population median of the difference scores is smaller than zero

Binomial test for a single proportion:
  • H1 two sided: $\pi \neq \pi_0$
  • H1 right sided: $\pi > \pi_0$
  • H1 left sided: $\pi < \pi_0$

One sample Wilcoxon signed-rank test:
  • H1 two sided: $m \neq m_0$
  • H1 right sided: $m > m_0$
  • H1 left sided: $m < m_0$

Two way ANOVA ($F$ tests):
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B

Two sample $t$ test:
  • H1 two sided: $\mu_1 \neq \mu_2$
  • H1 right sided: $\mu_1 > \mu_2$
  • H1 left sided: $\mu_1 < \mu_2$
Assumptions

Sign test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Binomial test for a single proportion:
  • Sample is a simple random sample from the population. That is, observations are independent of one another

One sample Wilcoxon signed-rank test:
  • The population distribution of the scores is symmetric
  • Sample is a simple random sample from the population. That is, observations are independent of one another

Two way ANOVA:
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sums of squares; this is more advanced material)

Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic

Sign test:
$W =$ number of difference scores that are larger than 0

Binomial test for a single proportion:
$X =$ number of successes in the sample

One sample Wilcoxon signed-rank test:
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute each of the test statistics, follow the steps below (a code sketch follows this list):
  1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score} - m_0)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
  2. For each subject, compute the absolute value of the difference score $|\mbox{score} - m_0|$.
  3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
  4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

  • $W_1 = \sum\, R_d^{+}$
    or
    $W_1 = \sum\, R_d^{-}$
    That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
    • Tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
    • If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
  • $W_2 = \sum\, \mbox{sign}_d \times R_d$
    That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
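To make steps 1-4 concrete, here is a minimal sketch in Python (assuming numpy and scipy are installed; the scores and $m_0$ are made up for illustration). It computes $W_1$ and $W_2$ by hand and cross-checks against scipy.stats.wilcoxon:

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

scores = np.array([53, 47, 61, 50, 55, 42, 58, 49])  # hypothetical sample
m0 = 50                                               # hypothesized median

d = scores - m0                   # steps 1-2: difference scores
d = d[d != 0]                     # step 3: drop zero differences
N_r = len(d)                      # remaining number of differences
sign_d = np.sign(d)               # sign of each remaining difference
R_d = rankdata(np.abs(d))         # step 4: rank |d|; ties get average ranks

W1_plus = R_d[sign_d > 0].sum()   # W1 = sum of ranks of positive differences
W1_minus = R_d[sign_d < 0].sum()  # W1 defined via the negative differences
W2 = (sign_d * R_d).sum()         # W2 = sum of signed ranks
print(W1_plus, W1_minus, W2)

# Cross-check: for a two sided test, scipy reports min(W1_plus, W1_minus);
# its default zero_method='wilcox' drops zero differences, as in step 3
print(wilcoxon(scores - m0))
```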
Two way ANOVA:
For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
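As an illustration of where these mean squares come from, a minimal sketch with statsmodels in Python (the data and the factor names A, B, y are made up; anova_lm returns the sums of squares, degrees of freedom, $F$ values, and $p$ values per term):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical balanced 2 x 3 design with 4 observations per cell
df = pd.DataFrame({
    "A": ["a1", "a2"] * 12,
    "B": ["b1"] * 8 + ["b2"] * 8 + ["b3"] * 8,
    "y": [4, 5, 6, 5, 7, 8, 6, 7,
          5, 6, 7, 6, 8, 9, 7, 8,
          6, 7, 8, 7, 9, 10, 8, 9],
})

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()  # main effects + interaction
anova_table = sm.stats.anova_lm(model, typ=2)      # per-term F tests
print(anova_table)
print(model.fvalue, model.f_pvalue)                # F test for the model as a whole
```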
Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
Pooled standard deviation (two way ANOVA and two sample $t$ test only)

Two way ANOVA:
$ \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $

Two sample $t$ test:
$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
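A minimal numeric sketch of $s_p$ and the resulting $t$ statistic in Python (made-up group scores; cross-checked against scipy's Student's $t$ test):

```python
import numpy as np
from scipy import stats

g1 = np.array([52.0, 55.0, 48.0, 60.0, 51.0])  # hypothetical group 1 scores
g2 = np.array([47.0, 50.0, 45.0, 53.0])        # hypothetical group 2 scores
n1, n2 = len(g1), len(g2)

# Pooled standard deviation from the two sample variances
s_p = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
              / (n1 + n2 - 2))

# t statistic: difference in means divided by its standard error
t = (g1.mean() - g2.mean()) / (s_p * np.sqrt(1 / n1 + 1 / n2))

# Cross-check with scipy (equal_var=True gives the equal-variances t test)
print(t, stats.ttest_ind(g1, g2, equal_var=True))
```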
Sampling distribution of the test statistic if H0 were true

Sign test (sampling distribution of $W$):
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.

If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
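A sketch of both routes in Python (hypothetical paired scores; scipy $\geqslant$ 1.7 for binomtest): the exact $p$ value from the Binomial($n$, 0.5) distribution and the large-sample normal approximation.

```python
import numpy as np
from scipy import stats

before = np.array([50, 48, 55, 60, 46, 52, 49, 58])  # hypothetical pairs
after = np.array([53, 47, 61, 60, 50, 57, 54, 62])

d = after - before
d = d[d != 0]            # pairs with a zero difference are dropped
n = len(d)               # number of positive + negative differences
W = int((d > 0).sum())   # test statistic: number of positive differences

# Exact test: W ~ Binomial(n, 0.5) under H0
print(stats.binomtest(W, n=n, p=0.5, alternative="two-sided").pvalue)

# Large-sample normal approximation
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
print(2 * stats.norm.sf(abs(z)))  # two sided p value
```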
Binomial test for a single proportion (sampling distribution of $X$):
Binomial($n$, $P$) distribution.

Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis).
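In Python this exact test is available directly; a sketch with made-up numbers ($X = 25$ successes, $N = 100$, $\pi_0 = 0.2$; scipy $\geqslant$ 1.7):

```python
from scipy import stats

# X successes out of N observations, tested against pi_0
result = stats.binomtest(k=25, n=100, p=0.2, alternative="two-sided")
print(result.pvalue)
```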
One sample Wilcoxon signed-rank test (sampling distributions of $W_1$ and $W_2$):
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
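A small sketch of the normal approximation for $W_1$ in Python (the values of $W_1$ and $N_r$ are made up; no ties assumed):

```python
import numpy as np
from scipy import stats

def wilcoxon_z(W1, N_r):
    """Standardize W1 = sum of ranks of positive differences (no ties)."""
    mu = N_r * (N_r + 1) / 4
    sigma = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
    return (W1 - mu) / sigma

z = wilcoxon_z(W1=30, N_r=10)        # hypothetical values
print(z, 2 * stats.norm.sf(abs(z)))  # two sided p value
```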
Two way ANOVA (sampling distribution of $F$):
For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
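For reference, a sketch of how $p$ values follow from these degrees of freedom in Python ($I$, $J$, $N$, and the observed $F$ values are made up):

```python
from scipy import stats

I, J, N = 2, 3, 24        # hypothetical design: 2 x 3 groups, N = 24
df_error = N - I * J

# p value for each F test: P(F >= observed F) under H0
print(stats.f.sf(4.2, dfn=I - 1, dfd=df_error))              # factor A
print(stats.f.sf(3.1, dfn=J - 1, dfd=df_error))              # factor B
print(stats.f.sf(1.7, dfn=(I - 1) * (J - 1), dfd=df_error))  # interaction
print(stats.f.sf(2.9, dfn=(I - 1) + (J - 1) + (I - 1) * (J - 1),
                 dfd=df_error))                              # model
```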
Two sample $t$ test (sampling distribution of $t$):
$t$ distribution with $n_1 + n_2 - 2$ degrees of freedom
Significant?

Sign test:
If $n$ is small, the table for the binomial distribution should be used:
Two sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$

If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Binomial test for a single proportion:
Two sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
One sample Wilcoxon signed-rank test:
For large samples, the table for standard normal probabilities can be used:
Two sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $z$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Two way ANOVA:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
Two sample $t$ test:
Two sided:
  • Check if $t$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $t$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $t$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$C\%$ confidence interval for $\mu_1 - \mu_2$ (two sample $t$ test only)

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
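A sketch of the interval in Python (reusing the hypothetical groups from the test statistic sketch above, with $C = 95$):

```python
import numpy as np
from scipy import stats

g1 = np.array([52.0, 55.0, 48.0, 60.0, 51.0])  # hypothetical group 1
g2 = np.array([47.0, 50.0, 45.0, 53.0])        # hypothetical group 2
n1, n2 = len(g1), len(g2)
df = n1 + n2 - 2

s_p = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / df)
se = s_p * np.sqrt(1 / n1 + 1 / n2)

# Critical value t*: area C/100 between -t* and t* under the t_df distribution
C = 95
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, df)

diff = g1.mean() - g2.mean()
print(diff - t_star * se, diff + t_star * se)  # C% CI for mu_1 - mu_2
```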
Effect size (two way ANOVA and two sample $t$ test only)

Two way ANOVA:
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\eta^2$:
    Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to:
    $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

  • Proportion variance explained $\eta^2_{partial}$: $$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
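All four measures can be read off an ANOVA table; a sketch in Python, continuing from the statsmodels sketch above (the hypothetical design there is balanced, so the term sums of squares add up to the total sum of squares):

```python
# Continuing from the two way ANOVA sketch: anova_table from anova_lm(model, typ=2)
ss, dfree = anova_table["sum_sq"], anova_table["df"]
ss_error = ss["Residual"]
ms_error = ss_error / dfree["Residual"]
ss_total = ss.sum()               # valid here because the design is balanced

for term in ["C(A)", "C(B)", "C(A):C(B)"]:
    eta2 = ss[term] / ss_total
    eta2_partial = ss[term] / (ss[term] + ss_error)
    omega2 = (ss[term] - dfree[term] * ms_error) / (ss_total + ms_error)
    print(term, round(eta2, 3), round(eta2_partial, 3), round(omega2, 3))

print("R2:", model.rsquared)      # proportion explained by the model as a whole
```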
Two sample $t$ test:
Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$: $$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$ Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.
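A sketch in Python (same hypothetical groups as before):

```python
import numpy as np

g1 = np.array([52.0, 55.0, 48.0, 60.0, 51.0])  # hypothetical group 1
g2 = np.array([47.0, 50.0, 45.0, 53.0])        # hypothetical group 2
n1, n2 = len(g1), len(g2)

s_p = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
              / (n1 + n2 - 2))
d = (g1.mean() - g2.mean()) / s_p  # mean difference in units of s_p
print(d)
```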
Visual representation (two sample $t$ test only)

[Figure: visual representation of the two sample $t$ test - equal variances assumed]
ANOVA table (two way ANOVA only)

[Figure: two way ANOVA table]
Equivalent to (sign test, two way ANOVA, and two sample $t$ test only)

Sign test:
The two sided sign test is equivalent to the Friedman test with two paired groups (this equivalence is used in the jamovi instructions below).

Two way ANOVA:
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.

Two sample $t$ test:
One way ANOVA with an independent variable with 2 levels ($I$ = 2):
  • two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
  • two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
  • two sample $t$ test is equivalent to $t$ test for multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
  • two sided two sample $t$ test is equivalent to $F$ test regression model
  • two sample $t$ test is equivalent to $t$ test for regression coefficient $\beta_1$
Example context
  • Sign test: Do people tend to score higher on mental health after a mindfulness course?
  • Binomial test for a single proportion: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$?
  • One sample Wilcoxon signed-rank test: Is the median mental health score of office workers different from $m_0 = 50$?
  • Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
  • Two sample $t$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
SPSS

Sign test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Sign test
Binomial test for a single proportion:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
One sample Wilcoxon signed-rank test:
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:

Analyze > Nonparametric Tests > One Sample...
  • On the Objective tab, choose Customize Analysis
  • On the Fields tab, specify the variable for which you want to compute the Wilcoxon signed-rank test
  • On the Settings tab, choose Customize tests and check the box for 'Compare median to hypothesized (Wilcoxon signed-rank test)'. Fill in your $m_0$ in the box next to Hypothesized median
  • Click Run
  • Double click on the output table to see the full results
Two way ANOVA:
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Jamovi

Sign test:
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the two paired variables in the box below Measures
Binomial test for a single proportion:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
One sample Wilcoxon signed-rank test:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Tests, select Wilcoxon rank
  • Under Hypothesis, fill in the value for $m_0$ in the box next to Test Value, and select your alternative hypothesis
Two way ANOVA:
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
Two sample $t$ test:
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Student's (selected by default)
  • Under Hypothesis, select your alternative hypothesis
Practice questions