
Research Methods in Psychology: Foundations, Designs, and Validity


Research Methods in Psychology

Introduction to Research Methods

Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.

  • Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.

  • Facilitated Communication: Example of why rigorous research is needed to evaluate claims and interventions.

  • Case Example: 'Tell Them You Love Me' illustrates the importance of research in evaluating communication methods for individuals with disabilities.

Formulating Research Questions

Developing a Research Question

Effective research begins with a clear, focused question. This guides the choice of methods and interpretation of results.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding how something works

Sampling in Psychological Research

Populations vs. Samples

Researchers must define who will participate in their studies. The distinction between populations and samples is crucial for generalizability.

  • Population: The entire group of interest (e.g., all PSYC1010 students at York).

  • Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).

Random Selection and Generalizability

Random selection ensures that every member of the population has an equal chance of being chosen, which is vital for making findings generalizable (a brief sampling sketch follows the list below).

  • Importance:

    • Accurately represents the population

    • Reduces selection bias

    • Essential for experiments seeking generalizability
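A minimal Python sketch of simple random selection is below; the population roster and its size are invented purely for illustration and do not represent any actual course.

```python
import random

# Hypothetical population: ID codes for every student enrolled in the course.
population = [f"student_{i:04d}" for i in range(1, 501)]   # 500 students

random.seed(42)  # fixed seed so this illustrative draw is reproducible

# Simple random selection: every student has an equal chance of being chosen.
sample = random.sample(population, k=20)

print(len(sample))   # 20 participants drawn from the population of 500
print(sample[:3])    # a few of the randomly selected IDs
```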

Operational Definitions

Variables and Operationalization

Operational definitions translate abstract concepts into measurable and observable procedures.

  • Variable: Any factor or characteristic that can vary.

  • Operational Definition: Specifies how a variable is measured or manipulated in a study.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress levels in university students: Self-reported stress scale or physiological measures (e.g., cortisol levels); a scoring sketch follows this list.
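As a rough illustration of turning a construct into numbers, the sketch below scores a hypothetical five-item self-report stress scale by averaging item responses; the items, the 1-7 response format, and the scoring rule are all assumptions made up for this example.

```python
def stress_score(item_responses):
    """Operational definition (illustrative): 'stress' is the mean of five
    self-report items, each rated on a 1-7 scale (higher = more stress)."""
    assert len(item_responses) == 5, "expected five item responses"
    assert all(1 <= r <= 7 for r in item_responses), "items use a 1-7 scale"
    return sum(item_responses) / len(item_responses)

# One participant's hypothetical answers to the five items.
participant_answers = [5, 6, 4, 7, 5]
print(stress_score(participant_answers))  # 5.4 -- the measurable stand-in for 'stress'
```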

Overview of Research Designs

The Methods Toolbox

Psychological research employs various designs, each suited to different questions and contexts.

  • Descriptive Methods:

    • Naturalistic observation

    • Case study

    • Self-report measures and surveys

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause and effect by manipulating variables.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and generalizability of research findings.

  • Internal Validity: How well a study is conducted; the degree to which it establishes a trustworthy cause-and-effect relationship.

  • External Validity: The extent to which findings generalize beyond the study to other people, settings, and times, including real-world contexts.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural environment without intervention.

  • Advantages: High external validity (generalizable); rich, detailed information; sometimes the only possible option.

  • Disadvantages: Lack of control; time- and resource-intensive; observer bias; cannot draw cause-and-effect conclusions.

Example:

  • Studying how often university students use laptops in class for non-class related reasons.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Advantages: Rich, detailed descriptions; useful for rare cases.

  • Disadvantages: Low external validity; researcher bias.

  • Example: Case study of Russell Williams, examining escalation of criminal behavior.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Issues:

    • Careless or random responding

    • Misunderstanding questions

    • Response bias (e.g., social desirability)

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency across time points (see the sketch after this list).

  • Inter-Rater Reliability: Consistency across different raters.
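A minimal sketch of test-retest reliability, assuming made-up scores from two administrations of the same measure; reliability is expressed here simply as the correlation between the two sets of scores.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical scores for eight participants who took the same measure twice.
time_1 = [12, 18, 15, 22, 9, 17, 20, 14]
time_2 = [13, 17, 16, 21, 10, 18, 19, 15]

# Test-retest reliability: consistency across time, expressed as the
# correlation between the two administrations (closer to 1.0 = more reliable).
print(round(correlation(time_1, time_2), 2))
```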

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • A test must be reliable to be valid, but a reliable test can still be invalid.

  • Example: Feline preference scale (Likert scale 1-7) to measure how much a person likes cats.

Correlational/Non-Experimental Methods

Correlation and Causation

Correlational designs examine the strength and direction of relationships between variables, but do not establish causation.

  • Correlation Coefficient: Ranges from -1.0 to +1.0, indicating the direction and strength of the relationship (see the sketch after this list).

  • Scatter Plots: Visualize relationships between variables.

  • Third Variables/Confounds: Outside factors that may create misleading associations.

  • Advantages:

    • Establish trends across large data sets

    • Describe and predict behavior

    • Useful when experiments are not ethical or feasible

  • Disadvantages:

    • Cannot infer causal direction

    • Third-variable problem
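Below is a small sketch of a correlation coefficient computed on invented data (hours of sleep and exam scores); it is only meant to illustrate the -1.0 to +1.0 range and the direction/strength interpretation, not any real finding.

```python
from statistics import correlation  # Pearson r, Python 3.10+

# Hypothetical data for ten students: hours of sleep and exam scores.
sleep_hours = [4, 5, 5, 6, 6, 7, 7, 8, 8, 9]
exam_scores = [58, 62, 65, 70, 68, 75, 78, 80, 84, 88]

r = correlation(sleep_hours, exam_scores)
print(round(r, 2))  # a positive r near +1.0: more sleep goes with higher scores

# Even a strong r does not establish causation: a third variable
# (e.g., overall health or course load) could drive both measures.
```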

Experimental Methods

Experimental Design

Experiments test causal relationships by manipulating one or more variables and measuring the effect on others.

  • Independent Variable (IV): Manipulated by the researcher.

  • Dependent Variable (DV): Measured outcome affected by the IV.

  • Random Assignment: Participants are randomly assigned to experimental or control groups.

  • Operationalizing the IV: The IV must have at least two levels (e.g., treatment vs. placebo) so that conditions can be compared.

  • Example: Does listening to music improve test performance? (A minimal sketch follows this list.)

    • IV: Music (listening vs. not listening)

    • DV: Test performance

    • Control condition: No music
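A minimal sketch of the music-and-test-performance example, using invented participants and scores; it illustrates random assignment to the two levels of the IV and a simple comparison of group means on the DV.

```python
import random
from statistics import mean

random.seed(7)  # reproducible assignment for this illustration

participants = [f"p{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
random.shuffle(participants)                         # random assignment step
music_group = participants[:10]                      # IV level 1: listening to music
no_music_group = participants[10:]                   # IV level 2 (control): no music

# Hypothetical DV: test scores collected after the manipulation.
scores = {p: random.gauss(75, 8) for p in participants}

print(round(mean(scores[p] for p in music_group), 1))     # mean score, music condition
print(round(mean(scores[p] for p in no_music_group), 1))  # mean score, control condition

# With random assignment, pre-existing differences (e.g., ability, prior mood)
# should be spread roughly evenly across conditions, so a difference between
# group means can more plausibly be attributed to the IV.
```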

Confounding Variables

Confounds are variables other than the IV that may affect the DV, threatening internal validity.

  • Example: Mood induction via music and its effect on tipping behavior; possible confounds include prior mood, type of music, or social context.

Classic Experiment Example

The Stanford Marshmallow Experiment studied delay of gratification in children and its relation to later life outcomes.

  • Delay time was related to SAT scores, BMI, and positive functioning.

  • Large-scale replication found only weak correlations, with differences by socioeconomic status.

Experimental Bias and Expectancy Effects

Biases can affect experimental outcomes.

  • Expectancy Effect: Researchers' expectations unintentionally influence participants' behavior or the recording of results.

  • Demand Characteristics: Participants guess the study's purpose and alter behavior.

  • Prevention: Use of double-blind designs and disguising study purpose.

Ethical Guidelines in Psychological Research

Ethical Principles

Ethical guidelines protect participants and ensure integrity in research.

  • Informed Consent: Participants must be informed about the study and consent to participate.

  • Protection from Harm: Researchers must minimize discomfort and risk.

  • Deception and Debriefing: Deception is allowed only when necessary and must be followed by thorough debriefing.

  • Special Populations: Additional protections for minors and vulnerable groups (e.g., assent required).

Historical Example: Tuskegee Syphilis Study

Ethical guidelines have evolved in response to past abuses, such as the Tuskegee Syphilis Study, where participants were not informed of their diagnosis or provided with treatment.

  • Highlights the necessity of ethical standards in research.

Summary Table: Research Methods Comparison

Method | Main Purpose | Advantages | Disadvantages
Naturalistic Observation | Describe behavior in real-world settings | High external validity, rich data | Lack of control, observer bias
Case Study | In-depth analysis of individuals/settings | Rich detail, useful for rare cases | Low generalizability, researcher bias
Self-Report/Survey | Collect subjective data | Efficient, large samples | Response bias, social desirability
Correlational Design | Examine relationships between variables | Trends, prediction | No causal inference, confounds
Experimental Design | Test cause and effect | Control, causal inference | Ethical/practical limits, confounds

