
Scientific Skills & Statistics: Experimental Design and Data Analysis

Study Guide - Smart Notes


Scientific Skills & Statistics

Experimental Design & Hypotheses

Experimental design is the foundation of scientific research, ensuring that hypotheses are tested systematically and results are reliable. Understanding variables, controls, and the structure of hypotheses is essential for valid experiments.

  • Hypothesis: A testable statement predicting the outcome of an experiment.

  • Null Hypothesis (H0): States there is no effect or difference.

  • Alternative Hypothesis (HA): States there is an effect or difference.

  • Variables:

    • Independent Variable: The factor manipulated by the experimenter.

    • Dependent Variable: The factor measured in response to changes in the independent variable.

    • Controlled Variables: Factors kept constant to ensure a fair test.

  • Control Group: The group not exposed to the independent variable, used for comparison.

Example: Testing the effect of light on plant growth. The independent variable is light exposure, the dependent variable is plant height, and controlled variables include water and soil type.

Data Collection & Sampling

Accurate data collection and appropriate sampling methods are crucial for obtaining valid and generalizable results.

  • Random Sampling: Every individual has an equal chance of being selected, reducing bias.

  • Systematic Sampling: Selecting every nth individual from a list.

  • Sample Size: Larger sample sizes increase reliability and statistical power.

Example: Measuring enzyme activity in randomly selected bacterial colonies.
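The two sampling schemes above can be sketched with Python's standard library. The colony IDs and sample sizes here are hypothetical, chosen only to illustrate the selection rules:

```python
import random

# Hypothetical IDs for 100 bacterial colonies
population = list(range(1, 101))

# Random sampling: every colony has an equal chance of being selected.
random.seed(42)  # seed for reproducibility
random_sample = random.sample(population, 10)

# Systematic sampling: select every nth colony from the ordered list (here n = 10).
n = 10
systematic_sample = population[::n]  # [1, 11, 21, ..., 91]
```

Note that systematic sampling can introduce bias if the list has a periodic pattern that coincides with n, which is why random sampling is generally preferred.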

Types of Data

Data can be classified based on their nature and measurement scale.

  • Qualitative Data: Descriptive, non-numerical information (e.g., color, shape).

  • Quantitative Data: Numerical information (e.g., height, mass).

  • Discrete Data: Countable values (e.g., number of leaves).

  • Continuous Data: Measurable values within a range (e.g., temperature).

Describing Data

Statistical measures summarize and describe data sets.

  • Mean: The average value.

  • Median: The middle value when data are ordered.

  • Mode: The most frequently occurring value.

  • Range: Difference between the highest and lowest values.

  • Standard Deviation (SD): Measures the spread of data around the mean.
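The descriptive statistics above map directly onto Python's built-in `statistics` module. The plant-height values below are invented for illustration:

```python
import statistics

# Hypothetical plant heights (cm) from six replicates
heights_cm = [12.1, 13.4, 12.8, 13.4, 11.9, 12.6]

mean = statistics.mean(heights_cm)              # average value
median = statistics.median(heights_cm)          # middle value when ordered
mode = statistics.mode(heights_cm)              # most frequent value
data_range = max(heights_cm) - min(heights_cm)  # highest minus lowest
sd = statistics.stdev(heights_cm)               # sample SD (divides by n - 1)
```

`statistics.stdev` gives the sample standard deviation; use `statistics.pstdev` if the data represent the whole population rather than a sample.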

Comparing Two Unpaired Groups

Statistical tests are used to determine if differences between groups are significant.

  • Unpaired t-test: Compares means of two independent groups.

  • Mann-Whitney U test: Non-parametric test for comparing medians of two independent groups.

  • Assumptions (t-test): data in each group are approximately normally distributed, with similar variances between the two groups.

Example: Comparing blood pressure between two unrelated groups of patients.
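A minimal sketch of both unpaired tests, assuming SciPy is available; the blood-pressure values are made up so that the two groups clearly differ:

```python
from scipy import stats

# Hypothetical systolic blood pressure (mmHg) for two independent groups
group_a = [118, 122, 121, 119, 124, 120]
group_b = [131, 128, 133, 129, 135, 130]

# Unpaired (independent) t-test: assumes roughly normal data
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Mann-Whitney U test: non-parametric alternative based on ranks
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)
```

With these clearly separated groups, both tests return p-values well below 0.05.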

Comparing Two Paired Groups

Paired tests are used when the same subjects are measured before and after a treatment.

  • Paired t-test: Compares means from the same group at different times.

  • Wilcoxon signed-rank test: Non-parametric test for paired data.

Example: Measuring cholesterol levels in patients before and after a diet intervention.
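The paired versions follow the same pattern, again assuming SciPy is available; the cholesterol values are hypothetical, with each "after" measurement taken from the same patient as the corresponding "before" value:

```python
from scipy import stats

# Hypothetical cholesterol levels (mg/dL) before and after a diet intervention
before = [210, 225, 198, 240, 215, 230, 205, 220]
after  = [195, 210, 190, 228, 200, 218, 196, 205]

# Paired t-test: compares means for the same subjects at two time points
t_stat, p_t = stats.ttest_rel(before, after)

# Wilcoxon signed-rank test: non-parametric alternative for paired data
w_stat, p_w = stats.wilcoxon(before, after)
```

Both tests operate on the per-subject differences, which is why the two lists must be the same length and in matching subject order.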

Choosing Statistical Tests

Selection depends on data type, distribution, and experimental design.

Data Type           | Unpaired Groups     | Paired Groups
--------------------|---------------------|---------------------------
Parametric (Normal) | Unpaired t-test     | Paired t-test
Non-parametric      | Mann-Whitney U test | Wilcoxon signed-rank test
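The selection logic can be encoded as a simple lookup, keyed on whether the data are parametric and whether the groups are paired (function and parameter names here are illustrative, not standard):

```python
def choose_test(parametric: bool, paired: bool) -> str:
    """Map (distribution, design) to a test name, following the table above."""
    table = {
        (True, False): "Unpaired t-test",
        (True, True): "Paired t-test",
        (False, False): "Mann-Whitney U test",
        (False, True): "Wilcoxon signed-rank test",
    }
    return table[(parametric, paired)]
```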

Interpreting Results

Statistical significance is assessed with p-values. A p-value is the probability of obtaining a difference at least as extreme as the one observed, assuming the null hypothesis is true.

  • p-value < 0.05: Statistically significant difference; reject the null hypothesis.

  • p-value ≥ 0.05: Not statistically significant; fail to reject the null hypothesis.

Example: A p-value of 0.03 suggests a significant effect of a drug on heart rate.
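This decision rule can be written as a tiny helper (the function name is illustrative; 0.05 is the conventional significance threshold used throughout this guide):

```python
def interpret(p_value: float, alpha: float = 0.05) -> str:
    """Decision rule: reject the null hypothesis when p < alpha."""
    if p_value < alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(interpret(0.03))  # → reject H0 (statistically significant)
```

Note that failing to reject H0 is not evidence that H0 is true; it only means the data are insufficient to rule it out.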

Presenting Data

Clear presentation of data enhances understanding and communication of results.

  • Tables: Summarize numerical data for comparison.

  • Graphs: Visualize trends and differences (e.g., bar graphs, scatter plots).

  • Error Bars: Indicate variability (e.g., standard deviation or standard error).
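Preparing a bar chart with error bars reduces to computing a (mean, SD) pair per group; a stdlib-only sketch, with invented measurements:

```python
import statistics

# Hypothetical replicate measurements for two treatment groups
groups = {
    "control": [4.1, 4.5, 3.9, 4.3],
    "treated": [5.8, 6.1, 5.5, 6.0],
}

# For each group: bar height = mean, error bar = sample standard deviation
summary = {name: (statistics.mean(vals), statistics.stdev(vals))
           for name, vals in groups.items()}
```

In practice these pairs would be passed to a plotting library, e.g. matplotlib's `bar(..., yerr=...)`; always state in the figure caption whether error bars show SD or standard error.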

Common Errors in Experimental Design

  • Small sample size

  • Lack of randomization

  • Failure to control variables

  • Inappropriate statistical tests

Additional info: Proper experimental design and statistical analysis are foundational for all areas of biology, ensuring that conclusions are valid and reproducible.
