
Statistics Study Guide: Chapters 1, 2 & 3


Statistics Fundamentals

Populations, Parameters, Samples, and Statistics

Understanding the basic terminology is essential in statistics. These terms form the foundation for designing studies and interpreting data.

  • Population: The entire group of individuals or items that is the subject of a statistical study.

  • Sample: A subset of the population selected for analysis.

  • Parameter: A numerical characteristic of a population (e.g., population mean).

  • Statistic: A numerical characteristic calculated from a sample (e.g., sample mean).

  • Example: If studying the average height of all college students (population), measuring 100 students (sample) and calculating their average height (statistic) estimates the true average (parameter).
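The height example above can be sketched in Python with simulated, hypothetical data: a generated "population" of student heights, from which a random sample's mean (a statistic) estimates the population mean (a parameter).

```python
import random
import statistics

random.seed(0)  # reproducible run

# Hypothetical population: heights (cm) of 5,000 college students.
population = [random.gauss(170, 10) for _ in range(5000)]
parameter = statistics.mean(population)   # population mean (a parameter)

sample = random.sample(population, 100)   # 100 randomly selected students
statistic = statistics.mean(sample)       # sample mean (a statistic)

# The statistic estimates the parameter without measuring everyone.
print(round(parameter, 1), round(statistic, 1))
```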

Experiments vs. Observational Studies

Distinguishing between experimental and observational studies is crucial for understanding causality and bias.

  • Experiment: The researcher manipulates one or more variables to observe the effect.

  • Observational Study: The researcher observes and records data without intervention.

  • Example: Testing a new drug (experiment) vs. surveying health outcomes in a population (observational study).

Types of Observational Studies

Observational studies can be classified based on how data is collected over time.

  • Cross-sectional: Data collected at one point in time.

  • Retrospective: Data collected from past records.

  • Prospective: Data collected forward in time from the present.

  • Example: A survey conducted today (cross-sectional), reviewing medical records (retrospective), or following patients for several years (prospective).

Sampling Methods

Types of Sampling

Sampling methods affect the representativeness and reliability of results.

  • Random Sampling: Every member of the population has an equal chance of being selected.

  • Systematic Sampling: Selecting every k-th member from a list.

  • Convenience Sampling: Selecting individuals who are easiest to reach.

  • Stratified Sampling: Dividing the population into subgroups and sampling from each.

  • Cluster Sampling: Dividing the population into clusters, then randomly selecting clusters and sampling all members within them.

  • Example: Surveying every 10th person entering a store (systematic), or randomly selecting students from each grade (stratified).
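The sampling methods above can be sketched in Python; the member IDs and the two strata below are hypothetical.

```python
import random

random.seed(1)
population = list(range(1, 101))        # hypothetical member IDs 1..100

# Random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, 10)

# Systematic sampling: every k-th member from the ordered list.
k = 10
systematic = population[::k]            # IDs 1, 11, 21, ..., 91

# Stratified sampling: divide into subgroups, then sample from each.
strata = {"underclass": population[:50], "upperclass": population[50:]}
stratified = [random.choice(members) for members in strata.values()]
```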

Discrete vs. Continuous Data

Data can be classified based on the nature of its values.

  • Discrete Data: Countable values (e.g., number of students).

  • Continuous Data: Measurable values within a range (e.g., height, weight).

  • Example: Number of cars (discrete), temperature readings (continuous).

Levels of Measurement

Levels of measurement determine the types of statistical analyses that are appropriate.

  • Nominal: Categories without order (e.g., gender, colors).

  • Ordinal: Categories with a meaningful order (e.g., rankings).

  • Interval: Ordered categories with equal intervals, no true zero (e.g., temperature in Celsius).

  • Ratio: Ordered categories with equal intervals and a true zero (e.g., height, weight).

  • Example: Shirt sizes (ordinal), exam scores (ratio).

Graphical Representation of Data

Dotplots, Stemplots, Boxplots, and Histograms

Visualizing data helps in understanding distributions and identifying patterns.

  • Dotplot: Displays individual data points along a number line.

  • Stemplot (Stem-and-leaf plot): Shows data distribution while retaining actual data values.

  • Boxplot: Summarizes data using quartiles and highlights outliers.

  • Histogram: Shows frequency distribution of continuous data using bars.

  • Class Boundaries: Used in histograms to separate intervals on the horizontal axis.

  • Example: A histogram of exam scores shows how many students scored within each range.

Frequency and Relative Frequency

Frequency Distribution

Frequency distributions summarize how often each value or range of values occurs.

  • Frequency: The number of times a value appears in the dataset.

  • Relative Frequency: The proportion of times a value appears, calculated as: relative frequency = frequency / total number of values.

  • Example: If 5 out of 20 students scored an 'A', the relative frequency is 5/20 = 0.25, or 25%.
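A minimal sketch of the calculation, using a hypothetical class of 20 grades:

```python
from collections import Counter

grades = ["A"] * 5 + ["B"] * 9 + ["C"] * 6     # 20 students in total
freq = Counter(grades)                          # frequency of each grade
rel_freq = {g: count / len(grades) for g, count in freq.items()}

print(rel_freq["A"])  # 5 / 20 = 0.25
```

The relative frequencies always sum to 1, which is a quick sanity check on the table.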

Measures of Central Tendency and Dispersion

Mean, Weighted Mean, and Percentiles

Central tendency measures describe the center of a data set.

  • Mean: The average value, calculated as: mean = (sum of all values) / (number of values), i.e. x̄ = Σx / n.

  • Weighted Mean: Used when data points contribute unequally, calculated as: weighted mean = Σ(w · x) / Σw, where w is the weight attached to each value x.

  • Percentile: The value below which a given percentage of observations fall.

  • Example: GPA calculation uses weighted mean; the 90th percentile is the value below which 90% of data falls.
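A short sketch of both averages; the scores, grade points, and credit hours below are hypothetical:

```python
# Mean: sum of the values divided by how many there are.
scores = [70, 80, 90, 100]
mean = sum(scores) / len(scores)               # 85.0

# Weighted mean: GPA-style, with credit hours as the weights.
grade_points = [4.0, 3.0, 2.0]
credits = [3, 4, 2]
gpa = sum(g * w for g, w in zip(grade_points, credits)) / sum(credits)
print(round(gpa, 2))  # (12 + 12 + 4) / 9 ≈ 3.11
```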

Standard Deviation and Variance

Measures of dispersion indicate how spread out the data is.

  • Variance: The average squared deviation from the mean.

  • Standard Deviation: The square root of variance.

  • Example: Calculating the standard deviation of test scores to assess variability.
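Python's statistics module computes both measures. Note that pvariance/pstdev match the "average squared deviation" definition above (divide by n), while variance/stdev are the sample versions (divide by n - 1). The scores are hypothetical.

```python
import statistics

scores = [70, 75, 80, 85, 90]   # mean is 80

# Average squared deviation: (100 + 25 + 0 + 25 + 100) / 5 = 50
variance = statistics.pvariance(scores)
sd = statistics.pstdev(scores)  # square root of the variance

print(variance, round(sd, 2))
```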

Percentiles and Data Conversion

Percentile to Data Value Conversion

Percentiles are used to interpret individual scores within a dataset.

  • To find the value at a given percentile: Arrange the data in order and use the locator formula: L = (p / 100) · n,

  • where L is the location in the ordered data, p is the percentile, and n is the number of data points.

  • Example: The 25th percentile in a dataset of 20 values is at position L = (25 / 100) · 20 = 5.
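A sketch of this locator convention (as in many intro texts: if L comes out whole, average the L-th and (L+1)-th ordered values; otherwise round L up):

```python
import math

def percentile_value(data, p):
    """Value at the p-th percentile via the locator L = (p / 100) * n."""
    ordered = sorted(data)
    L = (p / 100) * len(ordered)
    if L.is_integer():                  # whole number: average two neighbors
        i = int(L)
        return (ordered[i - 1] + ordered[i]) / 2
    return ordered[math.ceil(L) - 1]    # otherwise round L up

data = list(range(1, 21))               # 20 values: 1, 2, ..., 20
print(percentile_value(data, 25))       # L = 5, so average of 5 and 6: 5.5
```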

Identifying Outliers

Outliers in Data Sets

Outliers are values that are significantly different from the rest of the data.

  • Common method: Values more than 1.5 times the interquartile range (IQR) above the third quartile or below the first quartile.

  • Example: In a boxplot, outliers are often marked as individual points beyond the whiskers.
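The 1.5 × IQR rule can be sketched with Python's statistics.quantiles (quartile conventions vary slightly between texts and libraries); the dataset below is hypothetical:

```python
import statistics

def iqr_outliers(data):
    """Values more than 1.5 * IQR beyond the first or third quartile."""
    q1, _, q3 = statistics.quantiles(data, n=4)   # Q1, median, Q3
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]

print(iqr_outliers([10, 12, 13, 14, 15, 16, 40]))  # [40]
```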

Empirical Rule and Chebyshev's Theorem

Estimating Data Spread

These rules help estimate the proportion of data within certain ranges.

  • Empirical Rule (for normal distributions):

    Range                            Approximate Percentage
    Within 1 standard deviation      68%
    Within 2 standard deviations     95%
    Within 3 standard deviations     99.7%

  • Chebyshev's Theorem (for any distribution): At least 1 - 1/k² of data lies within k standard deviations of the mean, for k > 1.

  • Example: For k = 2, at least 1 - 1/2² = 3/4 (75%) of data lies within 2 standard deviations.
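Chebyshev's bound is simple arithmetic; a quick sketch, with the normal-distribution percentages for comparison:

```python
def chebyshev_bound(k):
    """Minimum fraction of data within k standard deviations (any distribution)."""
    return 1 - 1 / k**2

print(chebyshev_bound(2))   # 0.75: at least 75%, versus ~95% for normal data
print(chebyshev_bound(3))   # about 0.889, versus ~99.7% for normal data
```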

Z-Scores and Data Comparison

Calculating and Interpreting Z-Scores

Z-scores standardize values for comparison across different datasets.

  • Z-score: The number of standard deviations a value is from the mean.

  • Application: Comparing scores from different distributions (e.g., test scores from different exams).

  • Example: A z-score of 2 means the value is 2 standard deviations above the mean.
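A sketch of computing and comparing z-scores across two hypothetical exams with different scales:

```python
def z_score(x, mean, sd):
    """How many standard deviations x lies from the mean: z = (x - mean) / sd."""
    return (x - mean) / sd

z_math = z_score(85, mean=75, sd=5)      # 2.0: two SDs above the mean
z_history = z_score(88, mean=80, sd=8)   # 1.0: one SD above the mean
# The math score is relatively stronger despite the lower raw score.
print(z_math, z_history)
```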
