
Chapter 10 Hypothesis Testing

Study Guide - Smart Notes

Tailored notes based on your materials, expanded with key definitions, examples, and context.

10.1 Null and Alternative Hypotheses

Introduction to Hypotheses

In statistics, hypothesis testing begins with a claim about an unknown population parameter. This claim, called a hypothesis, is a statement about a mean, proportion, difference, or other quantity. Hypotheses come in pairs: the null hypothesis and the alternative hypothesis.

  • Null Hypothesis (H₀): States that there is no effect, no difference, or no change in the population. It often represents the status quo or that "nothing is happening."

  • Alternative Hypothesis (H₁ or Hₐ): States that there is an effect or difference, typically what the researcher hopes to demonstrate. It is tested by assuming the null hypothesis is true and evaluating sample data.

In the legal analogy, the null hypothesis is like the presumption of innocence. A jury does not "prove" innocence; it either convicts (rejects the null) or fails to convict (fails to reject the null). Likewise, in statistics we never accept H₀; we simply fail to reject it when the evidence is weak.

Defining the Hypotheses

  • Population parameter vs. sample statistic: A parameter is a fixed but unknown number that describes a population (e.g., the mean cholesterol level of all patients taking a drug). A statistic is computed from a sample and used to estimate the parameter.

  • Null hypothesis (H₀): Typically asserts that the parameter equals a specific value (e.g., H₀: μ = μ₀). It reflects the idea that nothing unusual is happening.

  • Alternative hypothesis (H₁ or Hₐ): The alternative expresses the research question. It can be one-sided (e.g., H₁: μ > μ₀) or two-sided (e.g., H₁: μ ≠ μ₀). The direction determines the type of test.

Examples of Hypotheses

  • One-sided test: Investigators care only about an increase (e.g., in symptom reduction). H₀: μ = μ₀, H₁: μ > μ₀

  • Two-sided test: The gene could be up-regulated or down-regulated. H₀: μ = μ₀, H₁: μ ≠ μ₀

  • Manufacturer's claim: H₀ asserts the claimed value (H₀: μ = μ₀), and H₁ that the true value differs (H₁: μ ≠ μ₀)

Recap Table

  • Hypothesis: A claim about a population parameter that can be tested using sample data.

  • Null hypothesis (H₀): A statement of no effect or no difference; the hypothesis assumed as the starting point of a test.

  • Alternative hypothesis (H₁ or Hₐ): The statement researchers hope to support; it suggests a parameter is different from the null value.

10.2 Type I and Type II Errors

Understanding Errors in Hypothesis Testing

When we make a decision based on a sample, there is a chance of making an error. In hypothesis testing, there are two kinds of error:

  • Type I error (α): Occurs when we reject the null hypothesis even though it is true. Its probability is the significance level (α).

  • Type II error (β): Occurs when we fail to reject the null hypothesis when the alternative is true. Its probability is denoted β.

For example, a clinical trial may incorrectly conclude that a new drug improves symptoms when in reality it does not (Type I error). The probability of a Type I error is controlled by the significance level (α), commonly set at 0.05 or 0.01.
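The claim that α controls the Type I error rate can be checked by simulation. The sketch below (not from the text; all numbers are illustrative) repeatedly draws samples from a population where H₀ really is true and counts how often a two-sided z-test at α = 0.05 rejects anyway; the long-run rejection rate should land near α.

```python
import random
from statistics import NormalDist, mean

# Simulate a true null: population mean is exactly 0 (sigma = 1 known).
# Each trial runs a two-sided z-test at alpha = 0.05; rejections are
# Type I errors, and their rate should be close to alpha.
random.seed(42)
alpha = 0.05
n, trials = 30, 2000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) / (1 / n**0.5)               # known sigma = 1
    p = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    if p < alpha:
        rejections += 1
print(f"Type I error rate ~ {rejections / trials:.3f}")  # near 0.05
```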

Examples and Intuitive Consequences

  • Medical trial: Type I error means approving an ineffective vaccine; Type II error means dismissing a vaccine that actually works.

  • Business quality control: Type I error might lead to unnecessary recalls; Type II error misses a true defect.

  • Biology experiment: Type I error claims a gene is regulated when it is not; Type II error misses a truly regulated gene.

Recap Table

  • Type I error: Rejecting the null hypothesis when it is actually true; a false positive. Its probability is the significance level (α).

  • Type II error: Failing to reject the null hypothesis when the alternative is true; a false negative. Its probability is denoted β.

  • Significance level (α): The probability of committing a Type I error. It is chosen by the researcher before data are collected.

  • Power (1 − β): The probability of correctly rejecting the null hypothesis when the alternative is true. Power increases with sample size, effect size, and a higher significance level.
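The effect of sample size on power can be made concrete with a one-sided z-test (σ known), where power = 1 − Φ(z_α − (μ₁ − μ₀)√n/σ). A minimal sketch, with made-up numbers chosen only for illustration:

```python
from statistics import NormalDist

def power(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = mu0 vs H1: mu > mu0,
    when the true mean is mu1 and sigma is known."""
    z_crit = NormalDist().inv_cdf(1 - alpha)     # one-sided rejection cutoff
    shift = (mu1 - mu0) / (sigma / n**0.5)       # true effect in SE units
    return 1 - NormalDist().cdf(z_crit - shift)

# Same effect size (0.5 sigma), two sample sizes: power rises with n.
print(round(power(mu0=0, mu1=0.5, sigma=1, n=16), 3))
print(round(power(mu0=0, mu1=0.5, sigma=1, n=64), 3))
```

With n = 16 the test detects the effect only about two-thirds of the time; quadrupling the sample pushes power close to 1, which is the recap-table claim in numbers.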

10.3 The p-Value and Significance Level

Understanding p-Values

The p-value quantifies the evidence against the null hypothesis using sample data. It is the probability of obtaining a result at least as extreme as the one observed, given that the null hypothesis is true. The smaller the p-value, the stronger the evidence against H₀.

Interpreting the p-Value

  • If the p-value is less than α, we reject H₀.

  • If the p-value is greater than α, we fail to reject H₀.

  • Common significance levels are 0.05, 0.01, and 0.10.

  • The choice of α should reflect the consequences of Type I and Type II errors in context.

Example: Computing a p-Value

  • Suppose we test H₀: μ = 0 versus H₁: μ > 0 and obtain a p-value of 0.03. This means that if the true mean were zero, there is a 3% chance of obtaining a sample mean as large as (or larger than) the one we observed.

  • If α = 0.05, we reject H₀ because 0.03 < 0.05.
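The arithmetic in this example can be traced end to end with a one-sided z-test. The sample summary below is hypothetical, chosen so the p-value lands near the 0.03 in the example; only the decision logic mirrors the text.

```python
from statistics import NormalDist

# Hypothetical sample summary (not from the text): test H0: mu = 0
# against H1: mu > 0 with known sigma, using a z statistic.
xbar, sigma, n = 0.47, 1.5, 36
z = (xbar - 0) / (sigma / n**0.5)     # observed test statistic
p_value = 1 - NormalDist().cdf(z)     # one-sided: P(Z >= z) under H0
alpha = 0.05
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```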

10.4 One-Tailed and Two-Tailed Tests

Types of Hypothesis Tests

Hypothesis tests can be one-tailed or two-tailed depending on the direction of the alternative hypothesis.

  • One-tailed test: The alternative hypothesis specifies a direction (e.g., H₁: μ > μ₀ or H₁: μ < μ₀).

  • Two-tailed test: The alternative hypothesis specifies a difference in either direction (e.g., H₁: μ ≠ μ₀).

Choosing between one- and two-tailed tests depends on the research question and whether only one direction of change is meaningful.

Example: Comparing One- and Two-Tailed p-Values

  • If the alternative is one-sided, the p-value is computed for results in one direction only.

  • If the alternative is two-sided, the p-value is computed for results in both directions (greater or less than the null value).
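The two bullets above can be sketched for a symmetric test statistic: for the same observed z, the two-sided p-value counts extreme results in both tails, so it is twice the one-sided value. The z here is a hypothetical observed statistic.

```python
from statistics import NormalDist

z = 1.88                                         # hypothetical observed z statistic
one_sided = 1 - NormalDist().cdf(z)              # P(Z >= z): one tail
two_sided = 2 * (1 - NormalDist().cdf(abs(z)))   # P(|Z| >= |z|): both tails
print(f"one-sided: {one_sided:.3f}, two-sided: {two_sided:.3f}")
```

This is why a result that is significant one-sided at α = 0.05 may not be significant two-sided: the same evidence is spread over two tails.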

10.5 The Steps of Hypothesis Testing

Step-by-Step Process

Hypothesis testing follows a structured process to ensure valid statistical inference:

  1. State the hypotheses: Formulate H₀ and H₁ based on the research question.

  2. Formulate the analysis plan: Choose the significance level (α), test statistic, and decision criteria.

  3. Analyze the data: Collect sample data and compute the test statistic and p-value.

  4. Interpret the results in context: Decide whether to reject or fail to reject H₀ and interpret the findings in terms of the original question.

Example: Testing Patient Wait Times

  • Suppose a hospital claims the average patient wait time is 20 minutes. A sample of 50 patients yields a mean of 25 minutes and a standard deviation of 12 minutes.

  • State hypotheses: H₀: μ = 20, H₁: μ > 20 (minutes)

  • Compute test statistic and p-value using sample data.

  • Interpret results: If the p-value is less than α, conclude that the average wait time is significantly greater than claimed.
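The steps above can be carried out with the numbers given (n = 50, sample mean 25, standard deviation 12, claimed mean 20). The sketch uses a large-sample z approximation; an exact t-test with 49 degrees of freedom would give a very similar p-value but needs a t distribution (e.g., scipy).

```python
from statistics import NormalDist

# Hospital wait-time example: H0: mu = 20 vs H1: mu > 20 (minutes).
mu0, xbar, s, n = 20, 25, 12, 50
se = s / n**0.5                       # standard error of the sample mean
z = (xbar - mu0) / se                 # observed test statistic
p_value = 1 - NormalDist().cdf(z)     # one-sided p-value under H0
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# p-value is far below 0.05: reject H0; mean wait exceeds 20 minutes.
```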

Recap Table: Steps of Hypothesis Testing

  1. State hypotheses: Formulate null and alternative hypotheses based on the research question.

  2. Analysis plan: Choose the significance level, test statistic, and decision criteria.

  3. Analyze data: Collect data; compute the test statistic and p-value.

  4. Interpret results: Make a decision and interpret the findings in context.

Additional info:

  • Confidence intervals can be used alongside hypothesis tests to estimate the range of plausible values for the parameter.

  • Sample size, effect size, and variability all influence the power of a test.
