Sign test - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison (max. 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
| Sign test | Binomial test for a single proportion | Chi-squared test for the relationship between two categorical variables | Paired sample $t$ test |
|---|---|---|---|
| Independent variable | Independent variable | Independent/column variable | Independent variable |
| 2 paired groups | None | One categorical with $I$ independent groups ($I \geqslant 2$) | 2 paired groups |
| Dependent variable | Dependent variable | Dependent/row variable | Dependent variable |
| One of ordinal level | One categorical with 2 independent groups | One categorical with $J$ independent groups ($J \geqslant 2$) | One quantitative of interval or ratio level |
| Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
| H0: $P(\text{first score of a pair exceeds second score}) = P(\text{second score of a pair exceeds first score})$. For a continuous dependent variable, this is equivalent to: the population median of the difference scores is 0. | H0: $\pi = \pi_0$. Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis. | H0: there is no association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable: H0: the distribution of the dependent variable is the same in each of the $I$ populations. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
| H1 two sided: $P(\text{first score exceeds second}) \neq P(\text{second score exceeds first})$. H1 right sided: $P(\text{first score exceeds second}) > P(\text{second score exceeds first})$. H1 left sided: $P(\text{first score exceeds second}) < P(\text{second score exceeds first})$. | H1 two sided: $\pi \neq \pi_0$. H1 right sided: $\pi > \pi_0$. H1 left sided: $\pi < \pi_0$. | H1: there is an association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable: H1: the distribution of the dependent variable is not the same in all of the $I$ populations. | H1 two sided: $\mu \neq \mu_0$. H1 right sided: $\mu > \mu_0$. H1 left sided: $\mu < \mu_0$. |
| Assumptions | Assumptions | Assumptions | Assumptions |
| Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another. | Sample is a simple random sample from the population. That is, observations are independent of one another. | Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis (rule of thumb: all expected cell counts are 5 or larger). There are $I$ independent simple random samples from the $I$ populations defined by the independent variable. | Difference scores are normally distributed in the population. Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another. |
| Test statistic | Test statistic | Test statistic | Test statistic |
| $W =$ number of difference scores that are larger than 0 | $X =$ number of successes in the sample | $X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$. Here, for each cell, the expected cell count $= \frac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. |
| Sampling distribution of $W$ if H0 were true | Sampling distribution of $X$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $t$ if H0 were true |
| The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$ approximately follows the standard normal distribution if the null hypothesis were true. | Binomial($n$, $P$) distribution, with $n = N$ (total sample size) and $P = \pi_0$ (population proportion according to the null hypothesis). | Approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom. | $t$ distribution with $N - 1$ degrees of freedom. |
| Significant? | Significant? | Significant? | Significant? |
| If $n$ is small, the table for the binomial distribution should be used. Two sided: reject H0 if the two sided $p$ value, computed from the Binomial($n$, $0.5$) distribution, is smaller than or equal to $\alpha$. If $n$ is large, the table for standard normal probabilities can be used. Two sided: reject H0 if $z$ is at least as extreme as the critical value $z^*$ or, equivalently, if the two sided $p$ value is smaller than or equal to $\alpha$. | Two sided: reject H0 if the two sided $p$ value, computed from the Binomial($n$, $\pi_0$) distribution, is smaller than or equal to $\alpha$. Right sided and left sided tests use the corresponding one sided $p$ value. | Reject H0 if $X^2$ is equal to or larger than the critical value $\chi^{2*}$ or, equivalently, if the $p$ value is smaller than or equal to $\alpha$. | Two sided: reject H0 if $t$ is at least as extreme as the critical value $t^*$ or, equivalently, if the two sided $p$ value is smaller than or equal to $\alpha$. Right sided and left sided tests use the corresponding one sided critical value and $p$ value. |
| n.a. | n.a. | n.a. | $C\%$ confidence interval for $\mu$ |
| - | - | - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. |
| n.a. | n.a. | n.a. | Effect size |
| - | - | - | Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $d = \dfrac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$. |
| n.a. | n.a. | n.a. | Visual representation |
| - | - | - | (figure omitted) |
| Equivalent to | n.a. | n.a. | Equivalent to |
| Two sided sign test is equivalent to the Friedman test with two related groups (see the Jamovi note below). | - | - | One sample $t$ test on the difference scores, with $\mu_0$ as the hypothesized mean. |
| Example context | Example context | Example context | Example context |
| Do people tend to score higher on mental health after a mindfulness course? | Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? | Is there an association between economic class and gender? Is the distribution of economic class different between men and women? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? |
| SPSS | SPSS | SPSS | SPSS |
| Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > Binomial... | Analyze > Descriptive Statistics > Crosstabs... | Analyze > Compare Means > Paired-Samples T Test... |
| Jamovi | Jamovi | Jamovi | Jamovi |
| Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead; the $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | Frequencies > 2 Outcomes - Binomial test | Frequencies > Independent Samples - $\chi^2$ test of association | T-Tests > Paired Samples T-Test |
| Practice questions | Practice questions | Practice questions | Practice questions |
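
Worked examples in Python

The SPSS and Jamovi rows above give point-and-click paths; for readers who prefer code, the sketches below reproduce each test's computations. They are minimal illustrations, assuming Python with numpy and scipy; all data (the `before`/`after` scores, the smoker counts, and the contingency table) are made up for demonstration and do not come from this page.

Sign test. scipy has no dedicated sign-test function, so this sketch builds one from `binomtest`, mirroring the Binomial($n$, 0.5) sampling distribution and the large-sample $z$ approximation in the table above:

```python
import numpy as np
from scipy.stats import binomtest, norm

# Made-up paired scores: mental health before and after a mindfulness course.
before = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14])
after = np.array([14, 16, 9, 17, 13, 12, 13, 18, 15, 16])

diff = after - before
diff = diff[diff != 0]     # pairs with a zero difference (ties) are dropped
n = len(diff)              # number of positive + negative differences
W = int(np.sum(diff > 0))  # test statistic: number of positive differences

# Exact sign test: under H0, W ~ Binomial(n, 0.5).
exact = binomtest(W, n, p=0.5, alternative='two-sided')
print(f"W = {W}, n = {n}, exact two sided p = {exact.pvalue:.4f}")

# Large-sample normal approximation from the table above.
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
p_approx = 2 * norm.sf(abs(z))
print(f"z = {z:.3f}, approximate two sided p = {p_approx:.4f}")
```

Binomial test for a single proportion. Here `binomtest` applies directly, with $P = \pi_0$ as the hypothesized proportion:

```python
from scipy.stats import binomtest

# Made-up counts: 33 smokers among N = 120 office workers; H0: pi = 0.2.
result = binomtest(k=33, n=120, p=0.2, alternative='two-sided')
print(f"sample proportion = {33 / 120:.3f}, two sided p = {result.pvalue:.4f}")
```

Chi-squared test. `chi2_contingency` computes $X^2$, the $(I - 1) \times (J - 1)$ degrees of freedom, and the expected cell counts from an observed $I \times J$ table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up 2 x 3 table: rows = gender (men, women), columns = economic class.
observed = np.array([[45, 30, 25],
                     [35, 40, 25]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"X^2 = {chi2:.3f}, df = {dof}, p = {p:.4f}")
print(expected)  # row total x column total / total sample size, per cell
```

Paired sample $t$ test. `ttest_rel` tests H0: $\mu = 0$ on the difference scores; the confidence interval and Cohen's $d$ follow the formulas in the table:

```python
import numpy as np
from scipy.stats import ttest_rel, t

# Same made-up paired scores as in the sign test sketch.
before = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14])
after = np.array([14, 16, 9, 17, 13, 12, 13, 18, 15, 16])

res = ttest_rel(after, before)  # tests H0: mu = 0 for the difference scores
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.4f}")

diff = after - before
N = len(diff)
ybar = diff.mean()
s = diff.std(ddof=1)  # sample standard deviation of the difference scores

# C% = 95% confidence interval for mu: ybar +/- t* x s / sqrt(N)
t_star = t.ppf(0.975, df=N - 1)
half_width = t_star * s / np.sqrt(N)
print(f"95% CI: ({ybar - half_width:.2f}, {ybar + half_width:.2f})")

# Cohen's d: how many standard deviations ybar is removed from mu_0 = 0
d = ybar / s
print(f"Cohen's d = {d:.3f}")
```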