Research Methods in Psychology: Safeguards Against Error

Study Guide - Smart Notes


Chapter 2: Research Methods

Safeguards Against Error

This chapter introduces the importance of rigorous research methods in psychology, emphasizing how systematic approaches help prevent errors and biases in scientific investigations.

Discussion: Homeopathy and Scientific Testing

Evaluating Claims with Research Methods

  • Homeopathy: A controversial alternative medicine practice, often tested for efficacy using scientific methods.

  • Experimenter Bias: When experimenters know which samples are which, their expectations can unintentionally influence results.

  • Blinding: Replication with coded samples (where experimenters do not know which is which) often eliminates previously observed effects, highlighting the importance of blinding in research.

  • Placebo Effect: Treatments sometimes appear to work even when patients are unaware they are receiving them, or when tested on animals. Such cases point to explanations other than the treatment itself, such as natural recovery or observer bias.

  • Interpretation of Positive Results: Positive findings in uncontrolled trials (e.g., homeopathic pollen for hay-fever) may be misleading without proper controls.

Additional info: The homeopathy example illustrates why understanding research methods is crucial for evaluating psychological claims.

Necessity of Good Research Design

Why Do We Need Research?

  • Scientific Method: A systematic set of tools designed to minimize bias and error in scientific inquiry.

  • Human Bias: Even intelligent, well-educated individuals can be misled without proper research designs.

  • Subjective Impressions: Personal beliefs or anecdotal evidence (e.g., "I know it works!") are often unreliable.

  • Historical Example: The prefrontal lobotomy was once considered effective based on subjective impressions, but controlled studies later disproved its efficacy.

Heuristics and Biases in Judgment

Availability & Representativeness Heuristics

  • Heuristics: Mental shortcuts that help us make quick judgments, but can lead to systematic errors.

  • Availability Heuristic: Judging the likelihood of events by how easily examples come to mind. Example: people judge words starting with 'R' to be more common than words with 'R' in the third position, even though the latter are actually more frequent.

  • Representativeness Heuristic: Judging the probability of an event by its similarity to a prototype, often ignoring base rates (actual statistical frequency).

  • Base Rate Fallacy: Ignoring how common a characteristic is in the general population can lead to incorrect conclusions.

Example: Estimating whether a poetry-loving, short, slim person is more likely a classics professor or a truck driver, without considering the actual numbers of each in the population.
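The classics-professor example can be made concrete with Bayes' rule. The population counts and profile probabilities below are hypothetical, chosen only to illustrate how base rates dominate the judgment:

```python
# Base-rate illustration with hypothetical numbers: even if the profile
# (poetry-loving, short, slim) is far more typical of classics professors,
# truck drivers vastly outnumber them, so a person who fits the profile
# is still more likely to be a truck driver.
n_professors = 4_000           # hypothetical population counts
n_drivers = 400_000
p_profile_given_prof = 0.50    # hypothetical: half of professors fit the profile
p_profile_given_driver = 0.01  # hypothetical: only 1% of drivers fit it

matching_profs = n_professors * p_profile_given_prof    # 2,000 people
matching_drivers = n_drivers * p_profile_given_driver   # 4,000 people

# Bayes' rule: P(professor | profile)
p_prof = matching_profs / (matching_profs + matching_drivers)
print(f"P(professor | profile) = {p_prof:.2f}")  # 0.33 -> driver is more likely
```

Even with a 50-to-1 likelihood ratio favoring professors, the base rate makes "truck driver" the better bet.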

Research Methods in Psychology

Major Approaches

  • Naturalistic Observation: Observing people in real-world settings without intervention. Useful for unexpected discoveries but may be influenced by observer presence.

  • Case Study Designs: In-depth study of one individual, often used for rare conditions (e.g., Phineas Gage). Can provide existence proofs but may lack generalizability.

  • Correlation Studies: Examining relationships between variables. Positive correlation: Both variables increase together. Negative correlation: One variable increases as the other decreases. Correlation does not imply causation!

Example: Height and weight are positively correlated; tooth decay and brushing frequency are negatively correlated.
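The sign of a correlation can be checked directly from data. A minimal sketch of the Pearson correlation coefficient, using made-up height/weight and brushing/cavity values:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: taller people tend to weigh more (positive r) ...
heights = [160, 165, 170, 175, 180]
weights = [55, 62, 66, 71, 79]
# ... while more frequent brushing goes with fewer cavities (negative r).
brushing_per_day = [0, 1, 2, 3]
cavities = [6, 4, 3, 1]

print(pearson_r(heights, weights))            # positive, close to +1
print(pearson_r(brushing_per_day, cavities))  # negative, close to -1
```

A positive or negative r says only that the variables move together; it does not say one causes the other.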

Experimental Design

Key Elements of Experiments

  • Random Selection: Ensures every person in a population has an equal chance of being chosen, increasing generalizability.

  • Random Assignment: Ensures all participants have an equal chance of being assigned to any condition, controlling for confounding variables.

  • Experimental Group: Receives the manipulation.

  • Control Group: Does not receive the manipulation.

  • Independent Variable: The variable manipulated by the experimenter.

  • Dependent Variable: The outcome measured to assess the effect of the manipulation.

Example: Testing whether highlighting textbooks improves exam scores. The independent variable is highlighting; the dependent variable is exam score.
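The difference between random selection and random assignment can be sketched in a few lines. The population, sample sizes, and group labels here are hypothetical:

```python
import random

random.seed(1)  # fixed seed only so this sketch is reproducible

population = [f"student_{i}" for i in range(1000)]

# Random SELECTION: every member of the population has an equal
# chance of entering the sample (supports generalizability).
sample = random.sample(population, 40)

# Random ASSIGNMENT: every sampled participant has an equal chance
# of landing in either condition (controls confounding variables).
random.shuffle(sample)
experimental_group = sample[:20]  # will highlight their textbooks
control_group = sample[20:]       # will study without highlighting

# Independent variable: highlighting vs. no highlighting.
# Dependent variable: each participant's later exam score.
print(len(experimental_group), len(control_group))
```

Selection decides who gets studied; assignment decides which condition each studied person experiences.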

Pitfalls in Experimental Design

Common Sources of Error

  • Placebo Effect: Improvement due to expectation, not the treatment itself. Subjects must be blind to their group assignment.

  • Nocebo Effect: Harm resulting from the expectation of harm.

  • Experimenter Expectancy Effect: Researchers' hypotheses unintentionally bias results. Solution: Double-blind design, where neither researchers nor subjects know group assignments.

Example: The "Clever Hans" horse appeared to perform math, but was actually responding to subtle cues from observers.
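The coded-sample blinding described above (as in the homeopathy replication) can be sketched as follows; the sample count and label format are hypothetical:

```python
import random

random.seed(0)  # fixed seed only so this sketch is reproducible

conditions = ["treatment"] * 4 + ["placebo"] * 4
random.shuffle(conditions)

# A third party holds this key; experimenters and participants see
# only the neutral codes, so neither group's expectations can bias
# the measurements (a double-blind design).
key = {f"sample_{i:02d}": cond for i, cond in enumerate(conditions)}

blind_labels = sorted(key)  # what the experimenter actually sees
print(blind_labels)         # codes carry no condition information
# The key is unsealed only after all measurements are recorded.
```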

Self-Report Measures & Surveys

Strengths and Limitations

  • Self-Report Measures: Questionnaires assessing characteristics, interests, or behaviors.

  • Surveys: Used to measure opinions and attitudes.

  • Reliability: Consistency of measurement.

  • Validity: Whether the measure assesses what it is intended to.

  • Potential Issues: Dishonesty, response sets (e.g., positive impression management, malingering), and unrepresentative samples.

  • Anchoring Effects: Initial information can influence responses (e.g., estimating distances).

Example: Surveying toilet paper habits; only a small, possibly unrepresentative sample responds.

Statistics: The Language of Research

Descriptive and Inferential Statistics

  • Descriptive Statistics: Summarize data meaningfully.

  • Central Tendency: Where scores cluster. Mean: Average score. Median: Middle score. Mode: Most frequent score.

  • Range: Difference between highest and lowest scores.

  • Standard Deviation: Measures variability from the mean.

  • Inferential Statistics: Allow generalization from sample to population.

  • Statistical Significance: A result is statistically significant if the probability of obtaining it by chance is less than 1 in 20 (p < .05).

  • Practical Significance: The real-world importance of a finding.
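The descriptive statistics above can be computed with Python's standard `statistics` module; the scores are made up for illustration:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical exam scores

mean = statistics.mean(scores)           # average score: 5.0
median = statistics.median(scores)       # middle score: 4.5
mode = statistics.mode(scores)           # most frequent score: 4
score_range = max(scores) - min(scores)  # highest minus lowest: 7
sd = statistics.pstdev(scores)           # population standard deviation: 2.0

print(mean, median, mode, score_range, sd)
```

Note that the mean, median, and mode can all differ for the same data; skewed scores pull the mean away from the median.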

Review Questions and Applications

Applying Research Methods

  • Naturalistic Observation: Used when researchers observe behaviors in real-world settings (e.g., door-holding on campus).

  • Case Study: In-depth observation of rare cases (e.g., dissociative identity disorder).

  • Self-Report Measures: Disadvantage: Respondents may not always be honest.

  • Correlational Design: Used to study relationships between variables (e.g., days missed and GPA).

  • Correlation Strength: The closer the correlation coefficient r is to 0, the weaker the relationship. Strength depends on the absolute value of r, not its sign.

  • Dependent Variable: The outcome measured in an experiment (e.g., exam score).

  • Mode: The most frequent value in a data set.

Key Terms and Concepts

  • Scientific Method

  • Bias

  • Heuristics

  • Naturalistic Observation

  • Case Study

  • Correlation

  • Experiment

  • Random Selection

  • Random Assignment

  • Placebo/Nocebo Effect

  • Double-Blind Design

  • Self-Report Measures

  • Reliability/Validity

  • Descriptive/Inferential Statistics

  • Statistical Significance

Summary Table: Types of Research Methods

| Method | Description | Strengths | Limitations |
| --- | --- | --- | --- |
| Naturalistic Observation | Observing behavior in real-world settings | High ecological validity | Limited control, possible observer effect |
| Case Study | In-depth study of one individual | Detailed information, existence proofs | Low generalizability, anecdotal |
| Correlational Study | Examines relationships between variables | Identifies associations | Cannot infer causation |
| Experiment | Manipulates variables to test effects | Can infer causation | May lack ecological validity |
