Sign test - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison (maximum of three) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

The methods compared on this page are:
  • Sign test
  • Binomial test for a single proportion
  • One sample Wilcoxon signed-rank test
  • One sample $z$ test for the mean
  • Marginal Homogeneity test / Stuart-Maxwell test
Independent variable
  • Sign test: 2 paired groups
  • Binomial test: None
  • One sample Wilcoxon signed-rank test: None
  • One sample $z$ test: None
  • Marginal Homogeneity test: 2 paired groups

Dependent variable
  • Sign test: One of ordinal level
  • Binomial test: One categorical with 2 independent groups
  • One sample Wilcoxon signed-rank test: One of ordinal level
  • One sample $z$ test: One quantitative of interval or ratio level
  • Marginal Homogeneity test: One categorical with $J$ independent groups ($J \geqslant 2$)
Null hypothesis

Sign test:
  • H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H0: the population median of the difference scores is equal to zero
A difference score is the difference between the first score of a pair and the second score of a pair.

Binomial test:
H0: $\pi = \pi_0$
Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.

One sample Wilcoxon signed-rank test:
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis.

One sample $z$ test:
H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.

Marginal Homogeneity test:
H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group.
Here $\pi_j$ is the population proportion in category $j.$
Alternative hypothesis

Sign test:
  • H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
  • H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
  • H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H1 two sided: the population median of the difference scores is different from zero
  • H1 right sided: the population median of the difference scores is larger than zero
  • H1 left sided: the population median of the difference scores is smaller than zero

Binomial test:
H1 two sided: $\pi \neq \pi_0$
H1 right sided: $\pi > \pi_0$
H1 left sided: $\pi < \pi_0$

One sample Wilcoxon signed-rank test:
H1 two sided: $m \neq m_0$
H1 right sided: $m > m_0$
H1 left sided: $m < m_0$

One sample $z$ test:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$

Marginal Homogeneity test:
H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group.
Assumptions

Sign test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Binomial test:
  • Sample is a simple random sample from the population. That is, observations are independent of one another

One sample Wilcoxon signed-rank test:
  • The population distribution of the scores is symmetric
  • Sample is a simple random sample from the population. That is, observations are independent of one another

One sample $z$ test:
  • Scores are normally distributed in the population
  • Population standard deviation $\sigma$ is known
  • Sample is a simple random sample from the population. That is, observations are independent of one another

Marginal Homogeneity test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Test statistic

Sign test:
$W =$ the number of difference scores larger than zero

Binomial test:
$X =$ the number of successes in the sample

One sample Wilcoxon signed-rank test:
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below:
  1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score} - m_0)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
  2. For each subject, compute the absolute value of the difference score $|\mbox{score} - m_0|$.
  3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
  4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

  • $W_1 = \sum\, R_d^{+}$
    or
    $W_1 = \sum\, R_d^{-}$
    That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
    • Tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
    • If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
  • $W_2 = \sum\, \mbox{sign}_d \times R_d$
    That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
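The steps above can be sketched in code. A minimal pure-Python illustration (the function name and example values are my own; ties get the average of the ranks they occupy, and zero differences are dropped):

```python
def wilcoxon_w_stats(scores, m0):
    """Compute W1 (sum of ranks of positive differences) and
    W2 (signed-rank sum) for a one sample Wilcoxon signed-rank test."""
    # steps 1-3: difference scores, drop zeros
    diffs = [s - m0 for s in scores if s != m0]
    absd = sorted(abs(d) for d in diffs)

    # step 4: average rank of an absolute difference (handles ties)
    def rank(a):
        positions = [i + 1 for i, v in enumerate(absd) if v == a]
        return sum(positions) / len(positions)

    w1 = sum(rank(abs(d)) for d in diffs if d > 0)
    w2 = sum((1 if d > 0 else -1) * rank(abs(d)) for d in diffs)
    return w1, w2
```

For example, scores [52, 51, 49, 55, 48] with $m_0 = 50$ give differences [2, 1, -1, 5, -2]; the tied absolute differences share average ranks, yielding $W_1 = 10$ and $W_2 = 5$.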
One sample $z$ test:
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
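A minimal sketch of the computation (pure Python; the function name and example values are my own, and the two sided $p$ value is obtained from the standard normal CDF via `erf`):

```python
from math import erf, sqrt

def one_sample_z(ybar, mu0, sigma, n):
    """One sample z statistic and its two sided p value."""
    z = (ybar - mu0) / (sigma / sqrt(n))

    def phi(x):
        # standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2
        return (1 + erf(x / sqrt(2))) / 2

    return z, 2 * (1 - phi(abs(z)))
```

With $\bar{y} = 51$, $\mu_0 = 50$, $\sigma = 3$, $N = 36$, the standard error is $0.5$, so $z = 2$ and the two sided $p$ value is roughly $0.046$.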
Marginal Homogeneity test:
Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand.
Sampling distribution of the test statistic if H0 were true

Sign test (sampling distribution of $W$):
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.

If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
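The exact Binomial($n$, $0.5$) null distribution can be used directly to obtain the $p$ value. A minimal pure-Python sketch (the function name is mine; the two sided $p$ value here doubles the smaller tail, which is one common convention):

```python
from math import comb

def sign_test_two_sided(diffs):
    """Exact two sided sign test on a list of difference scores.

    Zero differences are dropped; W counts the positive differences,
    and under H0, W ~ Binomial(n, 0.5)."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    w = sum(1 for d in nonzero if d > 0)

    def pmf(k):
        # P(W = k) under Binomial(n, 0.5)
        return comb(n, k) * 0.5 ** n

    upper = sum(pmf(k) for k in range(w, n + 1))  # right sided p
    lower = sum(pmf(k) for k in range(0, w + 1))  # left sided p
    return w, min(1.0, 2 * min(upper, lower))
```

For six nonzero differences of which five are positive, the smaller tail is $P(W \geqslant 5) = 7/64$, so the two sided $p$ value is $14/64 = 0.21875$.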
Binomial test (sampling distribution of $X$):
Binomial($n$, $P$) distribution.

Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis).
One sample Wilcoxon signed-rank test:
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
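As a small numeric check of the normal approximation for $W_1$ (pure Python; the function name is mine, and the no-ties formulas above are used):

```python
from math import sqrt

def wilcoxon_z_approx(w1, n_r):
    """Standardize W1 with its null mean and standard deviation
    (large sample normal approximation, no ties assumed)."""
    mu_w1 = n_r * (n_r + 1) / 4
    sigma_w1 = sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)
    return (w1 - mu_w1) / sigma_w1
```

With $N_r = 10$, the null mean is $10 \times 11 / 4 = 27.5$, so $W_1 = 27.5$ standardizes to $z = 0$, and larger $W_1$ values give positive $z$.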
One sample $z$ test:
Standard normal distribution.

Marginal Homogeneity test:
Approximately the chi-squared distribution with $J - 1$ degrees of freedom.
Significant?

Sign test:
If $n$ is small, the table for the binomial distribution should be used:
Two sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$

If $n$ is large, the table for standard normal probabilities can be used. Two sided, right sided, or left sided: check if the $z$ value observed in the sample is in the rejection region, or find the $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$.
Binomial test:
Two sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $X$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
One sample Wilcoxon signed-rank test:
For large samples, the table for standard normal probabilities can be used. Two sided, right sided, or left sided: check if the $z$ value observed in the sample is in the rejection region, or find the $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$.
One sample $z$ test:
Two sided, right sided, or left sided: check if the $z$ value observed in the sample is in the rejection region, or find the $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$.

Marginal Homogeneity test (denoting the test statistic as $X^2$):
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
$C\%$ confidence interval for $\mu$ (one sample $z$ test only; n.a. for the other methods)

$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
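A quick numeric sketch of the interval formula (pure Python; the function name and the sample values are my own):

```python
from math import sqrt

def z_interval(ybar, sigma, n, z_star=1.96):
    """C% confidence interval for mu: ybar +/- z* * sigma / sqrt(N).

    z_star = 1.96 corresponds to a 95% interval."""
    margin = z_star * sigma / sqrt(n)
    return ybar - margin, ybar + margin
```

With $\bar{y} = 50.8$, $\sigma = 3$, $N = 36$, the margin is $1.96 \times 0.5 = 0.98$, giving the interval $(49.82,\ 51.78)$.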
Effect size (one sample $z$ test only; n.a. for the other methods)

Cohen's $d$: the standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
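A tiny numeric illustration of the formula (the sample mean $51$ is hypothetical; $\mu_0 = 50$ and $\sigma = 3$ echo the example context below):

```python
def cohens_d(ybar, mu0, sigma):
    """Cohen's d for the one sample z test: (ybar - mu0) / sigma."""
    return (ybar - mu0) / sigma
```

So a sample mean of $51$ lies $d = 1/3$ of a standard deviation above $\mu_0 = 50$.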
Visual representation (one sample $z$ test only; n.a. for the other methods)

[Figure: visual representation of the one sample $z$ test]
Equivalent to (sign test only; n.a. for the other methods)

The two sided sign test is equivalent to the Friedman test performed on the two paired variables: both yield the same two sided $p$ value.
Example context
  • Sign test: Do people tend to score higher on mental health after a mindfulness course?
  • Binomial test: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$?
  • One sample Wilcoxon signed-rank test: Is the median mental health score of office workers different from $m_0 = 50$?
  • One sample $z$ test: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3.$
  • Marginal Homogeneity test: Subjects are asked to taste three different types of mayonnaise and to indicate which of the three types they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?
SPSS (n.a. for the one sample $z$ test)

Sign test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Sign test
Binomial test:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
One sample Wilcoxon signed-rank test:
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:

Analyze > Nonparametric Tests > One Sample...
  • On the Objective tab, choose Customize Analysis
  • On the Fields tab, specify the variable for which you want to compute the Wilcoxon signed-rank test
  • On the Settings tab, choose Customize tests and check the box for 'Compare median to hypothesized (Wilcoxon signed-rank test)'. Fill in your $m_0$ in the box next to Hypothesized median
  • Click Run
  • Double click on the output table to see the full results
Marginal Homogeneity test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Marginal Homogeneity test
Jamovi (n.a. for the one sample $z$ test and the Marginal Homogeneity test)

Sign test:
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the two paired variables in the box below Measures
Binomial test:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
One sample Wilcoxon signed-rank test:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Tests, select Wilcoxon rank
  • Under Hypothesis, fill in the value for $m_0$ in the box next to Test Value, and select your alternative hypothesis
Practice questions