Confidence Intervals, Effect Size, and Statistical Power – Study Notes

Confidence Intervals, Effect Size, and Statistical Power

Introduction

This chapter introduces key concepts in inferential statistics, including confidence intervals, effect size, and statistical power. These tools are essential for interpreting the results of hypothesis tests and for understanding the reliability and magnitude of observed effects in research studies.

Confidence Intervals

Point and Interval Estimates

  • Point estimate: A single summary statistic from a sample used as an estimate of a population parameter.

  • Interval estimate: A range of plausible values for a population parameter, based on a sample statistic.

Definition and Interpretation of Confidence Intervals

  • A confidence interval (CI) is an interval estimate, based on a sample statistic, that would include the population mean a certain percentage of the time if the same population were sampled repeatedly.

  • The CI is a range around the sample mean, obtained by adding and subtracting the margin of error.

  • A confidence interval leads to the same conclusion as the corresponding hypothesis test, while adding information about the precision of the estimate.

Steps for Calculating Confidence Intervals

  1. Draw a normal curve.

  2. Indicate the bounds of the confidence interval.

  3. Determine the z statistics corresponding to the desired confidence level.

  4. Convert the z statistics back into raw means for the lower and upper bounds.

  5. Check the answer for accuracy.

Formulas for Confidence Interval (z Test)

  • Lower bound: M_lower = −z(σ_M) + M_sample

  • Upper bound: M_upper = z(σ_M) + M_sample

Note: Two calculations are needed to determine both the lower and upper bounds of the interval. Here σ_M = σ/√N is the standard error, and z is the z statistic for the chosen confidence level (1.96 for a 95% CI).

Example: 95% Confidence Interval

  • A 95% CI captures the central 95% of the normal distribution, with 2.5% in each tail.

  • This means that if the same population is sampled repeatedly, 95% of the calculated intervals will contain the true population mean.
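The steps above can be sketched in a few lines of Python. The numbers below (sample mean 105, population standard deviation 15, N = 36) are hypothetical, chosen only to illustrate the calculation:

```python
import math

def z_confidence_interval(sample_mean, pop_sd, n, z_crit=1.96):
    """CI around a sample mean for a z test.

    z_crit = 1.96 marks off the central 95% of the normal curve
    (2.5% in each tail).
    """
    standard_error = pop_sd / math.sqrt(n)      # sigma_M = sigma / sqrt(N)
    margin_of_error = z_crit * standard_error
    return sample_mean - margin_of_error, sample_mean + margin_of_error

# Hypothetical data: M_sample = 105, sigma = 15, N = 36
low, high = z_confidence_interval(105, 15, 36)  # (100.1, 109.9)
```

Note that both bounds come from the same margin of error, matching the two calculations described above.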

Effect Size

Definition and Importance

  • Effect size measures the magnitude of a difference, independent of sample size.

  • It allows for standardization across studies and reflects how much two distributions overlap: the larger the effect size, the less the overlap.

Effect Size and Mean Differences

  • Effect size increases as the means of two distributions move further apart or as the variation within each population decreases.

Calculating Effect Size: Cohen's d

  • Cohen's d assesses the difference between means in standard deviation units: d = (M − μ)/σ. Because it divides by the standard deviation rather than the standard error, d does not depend on sample size.
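Cohen's d, computed as the difference between means divided by the standard deviation, can be sketched as follows (the numbers are hypothetical):

```python
def cohens_d(sample_mean, pop_mean, pop_sd):
    # d divides by the population standard deviation, not the standard
    # error, so it does not shrink as sample size grows
    return (sample_mean - pop_mean) / pop_sd

# Hypothetical data: sample mean 105, population mean 100, SD 15
d = cohens_d(105, 100, 15)  # ≈ 0.33, between "small" and "medium"
```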

Cohen's Conventions for Effect Size

  Effect Size   Convention (d)   Overlap
  Small         0.2              85%
  Medium        0.5              67%
  Large         0.8              53%

Meta-Analysis

Definition and Purpose

  • Meta-analysis involves calculating a mean effect size from the individual effect sizes of multiple studies.

  • It increases statistical power and helps resolve debates caused by contradictory research findings.

Steps in Meta-Analysis

  1. Select the topic of interest and decide exactly how to proceed, setting inclusion criteria (e.g., availability of the necessary statistical information, appropriate participant characteristics, acceptable research designs).

  2. Locate every study that meets the criteria.

  3. Calculate an effect size for each study.

  4. Calculate overall statistics, including a mean effect size (e.g., Rosenthal, 1995).
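The central calculation in the final step can be sketched as averaging the studies' effect sizes. The d values below are hypothetical, and a real meta-analysis would typically weight each study by its sample size:

```python
def mean_effect_size(d_values):
    # unweighted mean of the individual studies' Cohen's d values
    return sum(d_values) / len(d_values)

studies = [0.41, 0.55, 0.12, 0.73]  # hypothetical effect sizes
overall = mean_effect_size(studies)  # 0.4525
```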

Forest Plot

  • A forest plot visually summarizes the effect sizes and confidence intervals from multiple studies in a meta-analysis.

The "File Drawer" Analysis

  • Calculates the number of unpublished studies with null results needed to nullify the statistical significance of a meta-analytic effect size.

  • Replication and reproducibility are also important considerations.

Statistical Power

Definition

  • Statistical power is the probability of correctly rejecting the null hypothesis when it is false (i.e., detecting a true effect).

  • It is also the probability of avoiding a Type II error (false negative).

Calculating Statistical Power

  • Determine the necessary information: population mean, population standard deviation, sample mean, sample size, and standard error.

  • Determine a critical z value and raw mean to calculate power.

  • Calculate power as the percentage of the distribution of means for population 1 that falls above the critical value.
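These steps can be sketched for a one-tailed z test, using Python's standard-library normal distribution. The inputs below (population mean 100, hypothesized mean 105, SD 15, N = 36) are hypothetical:

```python
from statistics import NormalDist

def power_z_test(mu0, mu1, pop_sd, n, alpha=0.05):
    """One-tailed z-test power, assuming mu1 > mu0."""
    se = pop_sd / n ** 0.5                      # standard error
    z_crit = NormalDist().inv_cdf(1 - alpha)    # critical z under H0
    crit_mean = mu0 + z_crit * se               # raw-score cutoff
    # power: proportion of population 1's distribution of means
    # that falls above the critical value
    return 1 - NormalDist(mu1, se).cdf(crit_mean)

power_z_test(100, 105, 15, 36)  # ≈ 0.64
```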

Making Correct Decisions

                                     In truth, no effect exists     In truth, effect exists
                                     (null hypothesis is true)      (null hypothesis is false)

  We reject the null hypothesis      Type I error                   Correct decision (power)
                                     ("false positive")

  We fail to reject the              Correct decision               Type II error
  null hypothesis                                                   ("false negative")

Factors That Affect Statistical Power

  1. Increasing alpha (the significance level)

  2. Turning a two-tailed hypothesis into a one-tailed hypothesis

  3. Increasing sample size (N)

  4. Exaggerating the mean difference between levels of the independent variable
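The effect of factor 3 can be demonstrated directly: with the same (hypothetical) means and standard deviation, a larger N shrinks the standard error and raises power. This sketch reuses the one-tailed z-test power calculation described above:

```python
from statistics import NormalDist

def one_tailed_power(mu0, mu1, pop_sd, n, alpha=0.05):
    se = pop_sd / n ** 0.5                             # shrinks as N grows
    cutoff = mu0 + NormalDist().inv_cdf(1 - alpha) * se
    return 1 - NormalDist(mu1, se).cdf(cutoff)

# hypothetical numbers; power rises with sample size
powers = [one_tailed_power(100, 105, 15, n) for n in (16, 36, 64)]
```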

Visualizing Power

  • Increasing alpha increases the area (power) under the curve beyond the critical value.

  • One-tailed tests concentrate power in one direction, increasing the chance of detecting an effect in that direction.

  • Increasing sample size or decreasing standard deviation narrows the distribution, increasing power.

  • Increasing the difference between means shifts the distributions further apart, increasing power.

Importance of Statistical Power

  • Knowing the statistical power of a study helps researchers design studies that are more likely to detect true effects and avoid Type II errors.
