| Term | Definition |
|---|---|
| systematic source of variability/systematic variance | variance due to the manipulations we have introduced in the study, i.e. those under our control (our effects) |
| unsystematic source of variability/unsystematic variance | error variance; due to chance factors which are typically out of our control and which have not been varied systematically across participants in the study. |
| t-test | statistical test for comparing two different means |
| ANOVA/analysis of variance | statistical test allowing comparison of more than two different means |
| within-group variation | = experimental error; assumed to be the same across all groups<br>sources: individual differences, experimenter error, chance factors |
| between-group variation | why the means differ across groups<br>= condition effects + experimental error |
| F ratio | between-group variation / within-group variation<br>≈ 1 if there are no condition effects<br>> 1 if there are condition effects |
| Type I error | rejecting the null hypothesis when it is actually true (i.e. when it should have been retained)<br>its probability is set by the α-level |
| α-level | the criterion probability used to decide whether to reject the null hypothesis (usually 0.05; the null is rejected when p < α); equal to the probability of a Type I error |
| effect size | a family of indices giving information about the strength of effect(s); allows:<br>• assessment of treatment magnitude (effect of the IV on the DV)<br>• comparison of effects with other studies<br>• aggregation of results for meta-analysis<br>• estimation of required sample size |
| eta-squared (η²) | η² = SS_effect / SS_total<br>the proportion of total DV variance attributed to an effect<br>the closer to 1, the larger the effect |
| partial eta-squared (ηp²) | ηp² = SS_effect / (SS_effect + SS_error)<br>less biased than η² because it excludes variance attributable to the other effects in the design |
| omega-squared (ω²) | a measure of effect size: an estimate of how much variance in the DV is accounted for by the IV(s)<br>less biased alternative to η², especially when sample sizes are small<br>ω² = (SS_effect − df_effect × MS_error) / (SS_total + MS_error) |
| partial omega-squared (ωp²) | less biased analogue of partial η²<br>ωp² = (SS_effect − df_effect × MS_S/Cells) / (SS_effect + (N − df_effect) × MS_S/Cells) |
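
The sketch below (not part of the original notes; the data and variable names are hypothetical) works through a one-way, between-subjects ANOVA to show how the F ratio and the effect-size formulas above fit together. In this simple design there is only one error term, so MS_error stands in for MS_S/Cells and η² coincides with partial η².

```python
# Minimal one-way ANOVA sketch illustrating the glossary formulas.
import numpy as np

# Hypothetical data: three conditions, n = 5 participants each.
groups = [
    np.array([4.0, 5.0, 6.0, 5.0, 4.0]),   # condition A
    np.array([7.0, 8.0, 6.0, 9.0, 8.0]),   # condition B
    np.array([5.0, 6.0, 7.0, 6.0, 5.0]),   # condition C
]

scores = np.concatenate(groups)
N = scores.size                             # total sample size
k = len(groups)                             # number of conditions
grand_mean = scores.mean()

# Between-group (effect) and within-group (error) sums of squares.
ss_effect = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ss_effect + ss_error

df_effect = k - 1
df_error = N - k
ms_effect = ss_effect / df_effect
ms_error = ss_error / df_error              # plays the role of MS_S/Cells here

# F ratio: between-group variation / within-group variation.
F = ms_effect / ms_error

# Effect sizes from the glossary formulas.
eta_sq = ss_effect / ss_total
partial_eta_sq = ss_effect / (ss_effect + ss_error)
omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
partial_omega_sq = (ss_effect - df_effect * ms_error) / (
    ss_effect + (N - df_effect) * ms_error
)

print(f"F({df_effect}, {df_error}) = {F:.2f}")
print(f"eta^2           = {eta_sq:.3f}")
print(f"partial eta^2   = {partial_eta_sq:.3f}")
print(f"omega^2         = {omega_sq:.3f}")
print(f"partial omega^2 = {partial_omega_sq:.3f}")
```

Running the script prints an F ratio well above 1 (reflecting a real difference among the hypothetical group means) and shows the expected ordering of the indices: ω² and ωp² come out slightly smaller than η² and ηp², since the omega-squared formulas correct for the variance expected by chance.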