
ANOVA Analysis Guide

Compare means across multiple groups simultaneously using Analysis of Variance and understand when group differences are statistically significant.

Introduction

ANOVA (Analysis of Variance) is a statistical technique for comparing means across three or more groups. Despite the name, ANOVA tests differences between group means; it earns the name by analyzing how the total variance in the data splits into between-group and within-group components.

While a t-test compares two means, ANOVA extends the comparison to multiple groups without inflating the Type I error rate. Running every pairwise t-test instead compounds the error: with 4 groups there are 6 pairwise tests, and at α = 0.05 the chance of at least one false positive climbs to roughly 1 − 0.95⁶ ≈ 26%.

When to Use ANOVA

ANOVA is appropriate when:

  • Comparing means of 3+ groups
  • Independent variable is categorical
  • Dependent variable is continuous
  • Groups are independent

Example Scenarios

  • Comparing test scores across 4 teaching methods
  • Testing drug effectiveness across 3 dosage levels
  • Comparing sales across 5 different regions
  • Analyzing crop yields with different fertilizers

How ANOVA Works

ANOVA partitions the total variation in the data (the total sum of squares) into two components:

Between-Group Sum of Squares (SSB)

Variation due to differences between group means. This is what we want to be large.

SSB = Σnᵢ(x̄ᵢ - x̄)²

Sum of squared deviations of group means from grand mean

Within-Group Sum of Squares (SSW)

Variation due to individual differences within each group. This represents "noise."

SSW = ΣΣ(xᵢⱼ - x̄ᵢ)²

Sum of squared deviations within each group
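
To make the partition concrete, here is a minimal NumPy sketch that computes SSB and SSW for three made-up groups (all scores are invented for illustration):

import numpy as np

# Invented scores for three groups of five observations each
groups = [
    np.array([78, 84, 81, 75, 80]),
    np.array([85, 90, 88, 86, 91]),
    np.array([70, 74, 69, 72, 73]),
]

grand_mean = np.concatenate(groups).mean()

# SSB: each group mean's deviation from the grand mean, weighted by group size
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SSW: each observation's deviation from its own group mean
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(f"SSB = {ssb:.2f}, SSW = {ssw:.2f}")  # SSB ≈ 672.53, SSW = 88.40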

The Logic

If the group means are truly different, between-group variance should be much larger than within-group variance. ANOVA captures this comparison in a single ratio: the F-statistic.

The F-Statistic

F-Statistic Formula

F = MSB / MSW = (SSB/dfB) / (SSW/dfW)

MSB (Mean Square Between)

SSB / (k - 1), where k = number of groups

MSW (Mean Square Within)

SSW / (N - k), where N = total sample size
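
Plugging in the sums of squares from the sketch above (SSB ≈ 672.53 and SSW = 88.40 for that invented data, with k = 3 groups and N = 15 observations), the mean squares, F-statistic, and p-value fall out directly:

from scipy import stats

k, N = 3, 15              # number of groups, total sample size
ssb, ssw = 672.53, 88.40  # sums of squares from the earlier sketch

msb = ssb / (k - 1)  # mean square between, df_B = k - 1 = 2
msw = ssw / (N - k)  # mean square within,  df_W = N - k = 12
f_stat = msb / msw

# p-value: upper-tail area of the F distribution with (df_B, df_W) degrees of freedom
p_value = stats.f.sf(f_stat, k - 1, N - k)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")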

Interpreting F

  • F ≈ 1: Group means are similar (consistent with the null hypothesis)
  • F > 1: Group means differ more than expected by chance
  • Larger F values provide stronger evidence against H₀
  • Compare F to a critical value or compute a p-value
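
In practice you rarely compute F by hand: SciPy's f_oneway runs the whole one-way ANOVA in a single call. A quick sketch with the same invented groups:

from scipy.stats import f_oneway

group_a = [78, 84, 81, 75, 80]
group_b = [85, 90, 88, 86, 91]
group_c = [70, 74, 69, 72, 73]

# Returns the F-statistic and its p-value in one step
f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")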

ANOVA Assumptions

1. Independence

Observations are independent within and between groups. Each measurement should not influence others.

2. Normality

Data in each group should be approximately normally distributed. ANOVA is robust to minor violations with large samples.
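
The guide doesn't prescribe a particular check; the Shapiro-Wilk test is one common screen for non-normality, sketched here per group with the invented data:

from scipy.stats import shapiro

groups = {
    "group_a": [78, 84, 81, 75, 80],
    "group_b": [85, 90, 88, 86, 91],
    "group_c": [70, 74, 69, 72, 73],
}

# A small p-value (e.g., < 0.05) suggests a group departs from normality
for name, scores in groups.items():
    stat, p = shapiro(scores)
    print(f"{name}: W = {stat:.3f}, p = {p:.3f}")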

3. Homogeneity of Variances

Variances should be roughly equal across groups. Check this with Levene's test; if the assumption is violated, use Welch's ANOVA instead.
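
A minimal sketch of the Levene check with SciPy (invented groups again); if it rejects equal variances, Welch's ANOVA, available for example via statsmodels' anova_oneway with use_var="unequal", is a common fallback:

from scipy.stats import levene

group_a = [78, 84, 81, 75, 80]
group_b = [85, 90, 88, 86, 91]
group_c = [70, 74, 69, 72, 73]

# H0: all groups share the same variance; a small p-value signals a violation
stat, p = levene(group_a, group_b, group_c)
print(f"Levene statistic = {stat:.3f}, p = {p:.3f}")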

Post-Hoc Tests

ANOVA tells you if groups differ, but not which groups differ. Post-hoc tests identify specific pairwise differences.

Tukey's HSD

Most common choice. Controls family-wise error rate. Good when comparing all pairs.
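
A sketch using statsmodels' pairwise_tukeyhsd on the invented groups from earlier:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([78, 84, 81, 75, 80,
                   85, 90, 88, 86, 91,
                   70, 74, 69, 72, 73])
labels = ["a"] * 5 + ["b"] * 5 + ["c"] * 5

# Every pairwise comparison, with the family-wise error rate held at alpha
print(pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05))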

Bonferroni

Conservative approach. Divides α by the number of comparisons. Good for a small number of planned comparisons.
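
The adjustment is simple enough to sketch by hand with SciPy's pairwise t-tests (invented data again):

from itertools import combinations
from scipy.stats import ttest_ind

samples = {
    "a": [78, 84, 81, 75, 80],
    "b": [85, 90, 88, 86, 91],
    "c": [70, 74, 69, 72, 73],
}

pairs = list(combinations(samples, 2))
alpha_adj = 0.05 / len(pairs)  # Bonferroni: divide alpha by the number of comparisons

for g1, g2 in pairs:
    t, p = ttest_ind(samples[g1], samples[g2])
    verdict = "significant" if p < alpha_adj else "not significant"
    print(f"{g1} vs {g2}: p = {p:.4f} ({verdict} at adjusted α = {alpha_adj:.4f})")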

Scheffé

Most conservative. Good for complex comparisons beyond simple pairs.

Games-Howell

Use when variances are unequal. Does not assume homogeneity of variance.

Types of ANOVA

One-Way ANOVA

One independent variable with 3+ levels.

Example: Comparing test scores across 4 different schools.

Two-Way ANOVA

Two independent variables. Tests main effects and interaction.

Example: Effects of both teaching method AND gender on test scores.
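
Two-way ANOVA is typically fit as a linear model. A sketch with statsmodels, using an invented long-format table whose columns (score, method, gender) are named for illustration only:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: one row per student, two observations per cell
df = pd.DataFrame({
    "score":  [78, 84, 85, 90, 70, 74, 81, 75, 88, 86, 69, 72],
    "method": ["A", "A", "B", "B", "C", "C"] * 2,
    "gender": ["F"] * 6 + ["M"] * 6,
})

# C(method) * C(gender) expands to both main effects plus their interaction
model = smf.ols("score ~ C(method) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))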

Repeated Measures ANOVA

Same subjects measured multiple times. Accounts for within-subject correlation.

Example: Measuring anxiety before, during, and after treatment.
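
statsmodels' AnovaRM covers the balanced one-way repeated-measures case; a sketch with invented anxiety scores:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Invented balanced data: every subject is measured at all three time points
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["before", "during", "after"] * 4,
    "anxiety": [55, 62, 40, 60, 68, 45, 52, 58, 38, 58, 64, 42],
})

# `time` is the within-subject factor; AnovaRM requires balanced data
result = AnovaRM(df, depvar="anxiety", subject="subject", within=["time"]).fit()
print(result)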

Summary

Key Takeaways

  1. ANOVA compares means across 3+ groups using variance ratios.
  2. F = MSB/MSW tests whether between-group variance exceeds within-group variance.
  3. Key assumptions: independence, normality, equal variances.
  4. A significant ANOVA requires post-hoc tests to identify which groups differ.
  5. Choose the right ANOVA type based on your research design.