Research Methods in Psychology: Foundations, Designs, and Ethics


Research Methods in Psychology

Introduction to Research Methods

Research methods are essential in psychology to systematically investigate questions about behavior, cognition, and emotion. They help distinguish between common sense assumptions and scientifically validated knowledge.

  • Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.

  • Example: Facilitated communication was once believed to help nonverbal individuals communicate, but research revealed its limitations and risks.

Formulating Research Questions

Identifying Research Questions

Research begins with a clear, focused question. Good research questions arise from observations, common sense, or the need to solve practical problems.

  • Sources of Questions: Common sense, real-world observations, problem-solving, curiosity about mechanisms.

  • Example: Does using laptops in class affect student learning?

Sampling: Populations and Samples

Populations vs. Samples

Researchers rarely study entire populations. Instead, they select samples that represent the population of interest.

  • Population: The entire group of people relevant to the research question (e.g., all psychology students at a university).

  • Sample: A subset of the population who actually participate in the study (e.g., 20 students from the larger group).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which increases the generalizability of findings.

  • Generalizability: The extent to which results from a sample apply to the broader population.

  • Importance: Especially critical in experimental research aiming for broad applicability.
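
The logic of simple random selection can be sketched in a few lines of Python. This is an illustrative sketch only; the population of student ID numbers is hypothetical, not from the source:

```python
import random

# Hypothetical sampling frame: ID numbers for every psychology
# student at a university (the population of interest).
population = list(range(1, 501))  # 500 students, IDs 1-500

# Simple random selection: every student has an equal chance of
# being chosen, which supports generalizability to the population.
random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(population, k=20)

print(len(sample))       # 20 participants
print(len(set(sample)))  # 20: sampling without replacement, all unique
```

Because `random.sample` draws without replacement, no student can appear twice in the sample.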

Operational Definitions

Defining Variables for Measurement

Operational definitions specify how abstract concepts are measured or manipulated in a study, making research questions testable and replicable.

  • Variable: Any characteristic or factor that can vary (e.g., aggression, stress).

  • Operational Definition: The specific procedures used to measure or manipulate a variable.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress in students: Self-reported stress scale scores or physiological measures (e.g., cortisol levels).
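
The aggression example above can be made concrete with a short Python sketch. The coding scheme and behavior codes here are hypothetical, chosen only to illustrate how an operational definition turns an abstract construct into a measurement procedure:

```python
# Hypothetical coding scheme: which observed behaviors count
# as "aggressive acts" must be decided before observation begins.
AGGRESSIVE_ACTS = {"hit", "push", "grab_toy"}

def aggression_score(observed_acts):
    """Operationalized aggression: the number of aggressive
    acts observed during one play session."""
    return sum(act in AGGRESSIVE_ACTS for act in observed_acts)

# One child's coded behaviors during a play session.
session = ["share", "hit", "talk", "push", "hit"]
print(aggression_score(session))  # 3
```

The function itself is the operational definition: any two observers applying it to the same coded session must arrive at the same score, which is what makes the measure replicable.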

The Methods Toolbox

Overview of Research Designs

Psychological research employs various methods, each suited to different questions and levels of control.

  • Descriptive Methods: Naturalistic observation, case studies, self-report surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships by manipulating variables.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and applicability of research findings.

  • Internal Validity: The degree to which a study rules out alternative explanations for its results, so that causal conclusions are justified.

  • External Validity: How well findings generalize to real-world settings.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural context without intervention.

Advantages:

  • High external validity (generalizable)

  • Rich, detailed information

  • Sometimes the only possible option

Disadvantages:

  • Lack of control

  • Time- and resource-consuming

  • Observer bias

  • Cannot draw cause-and-effect conclusions

  • Example: Observing how often students use laptops in class for non-academic purposes.

Case Studies

In-depth analysis of a single individual or setting, often used for rare or unusual cases.

  • Advantages: Rich, detailed data; useful for rare phenomena.

  • Disadvantages: Low external validity; potential researcher bias.

  • Example: Studying the behavior of a person with a rare brain injury.

Self-Report/Survey Methods

Collecting data by asking participants to report on their own behaviors, attitudes, or feelings.

  • Advantages: Efficient for gathering large amounts of data.

  • Disadvantages: Response bias, social desirability, misunderstanding questions.

  • Example: Using questionnaires to assess stress levels in students.

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency of scores over time.

  • Inter-Rater Reliability: Agreement between different observers or raters (e.g., Cohen's kappa).
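
Cohen's kappa, mentioned above, corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch, using hypothetical ratings from two observers coding play episodes as aggressive (1) or not (0):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement corrected for chance:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance given each rater's
    own label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two observers code the same 10 play episodes.
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.6
```

Here the raters agree on 8 of 10 episodes (p_o = 0.8), but half that agreement is expected by chance (p_e = 0.5), so kappa = 0.6, which is lower than raw agreement alone would suggest.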

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • Example: A scale measuring preference for cats should include items directly related to liking cats.

  • Note: A test must be reliable to be valid, but a reliable test is not necessarily valid.

Correlational (Non-Experimental) Methods

Understanding Relationships Between Variables

Correlational research examines the strength and direction of relationships between variables without manipulation.

  • Correlation Coefficient (r): Ranges from -1.0 (perfect negative) to +1.0 (perfect positive); 0 indicates no linear relationship.

  • Scatter Plots: Visual representation of relationships.

  • Example: Relationship between video game use and aggression.
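
Pearson's r can be computed directly from its definition (covariance divided by the product of the standard deviations). The data below are hypothetical and constructed so that one variable is an exact linear function of the other, illustrating r = +1.0:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: strength and direction of a
    linear relationship, ranging from -1.0 to +1.0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: weekly hours of video game play and scores on an
# aggression questionnaire for six participants (illustrative only).
hours = [2, 5, 1, 8, 4, 10]
aggression = [3, 6, 2, 9, 5, 11]
print(round(pearson_r(hours, aggression), 2))  # 1.0 (y is exactly x + 1)
```

Even a perfect correlation like this one would not license a causal claim; the same r value is compatible with direct causation, reverse causation, or a third variable, as the next subsection discusses.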

Correlation vs. Causation

Correlation does not imply causation. Relationships may be due to direct effects, reverse causation, or third variables (confounds).

  • Third Variable Problem: An outside factor influences both variables, creating a spurious association.

  • Example: Kids with dogs are happier, but a third variable (e.g., family environment) may explain the association.

Pros and Cons of Correlational Designs

Advantages:

  • Can establish trends

  • Good for describing behavior

  • Can predict future behavior

  • Useful when experiments are unethical

Disadvantages:

  • Cannot infer causality

  • Third-variable/confounding issues

Experimental Methods

Establishing Cause and Effect

Experiments manipulate one or more variables to determine their effect on other variables, using random assignment to control for confounds.

  • Independent Variable (IV): Manipulated by the researcher (e.g., mood induction via music).

  • Dependent Variable (DV): Measured outcome (e.g., tipping behavior).

  • Control Condition: Lacks the experimental manipulation, serving as a baseline.

  • Random Assignment: Ensures groups are equivalent at the start.
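
Random assignment can be sketched as a shuffle-and-split. The participant pool and group sizes below are hypothetical, and the mood-induction framing follows the music example above:

```python
import random

# Hypothetical participant pool for a mood-and-tipping experiment.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split into an experimental group
# (mood induction via music) and a control group (no music).
# Chance, not participant characteristics, determines group membership,
# so the two groups should be equivalent on average at the start.
random.seed(7)  # fixed seed for a reproducible illustration
random.shuffle(participants)
experimental = participants[:10]
control = participants[10:]

print(len(experimental), len(control))  # 10 10
```

Note the contrast with random *selection*: selection governs who enters the study (external validity), while assignment governs which condition each participant receives (internal validity).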

Internal Validity and Confounds

Internal validity is threatened by confounding variables—factors other than the IV that may influence the DV.

  • Example: In a study on mood and generosity, confounds could include participants' prior experiences or expectations.

Experimental Bias and Demand Characteristics

  • Expectancy Effect: Researcher expectations influence participant behavior. Solution: Double-blind designs.

  • Demand Characteristics: Participants guess the study's purpose and alter their behavior. Solution: Masking the true purpose.

Ethical Guidelines in Psychological Research

Principles and Practices

Ethical research protects participants' rights and well-being.

  • Informed Consent: Participants must be informed about the study and agree to participate.

  • Protection from Harm: Researchers must minimize physical and psychological risks.

  • Deception and Debriefing: Deception is sometimes necessary but must be justified and followed by full debriefing.

  • Special Populations: Extra protections for minors and vulnerable groups (e.g., assent from children).

Historical Example: Tuskegee Syphilis Study

The Tuskegee Syphilis Study is a notorious example of unethical research, where participants were not informed of their diagnosis and denied treatment, leading to significant harm. This case led to the development of modern ethical standards in research.

Summary Table: Research Methods Overview

  • Naturalistic Observation. Purpose: describe behavior in real-world settings. Key features: no intervention; high external validity. Limitations: lack of control; cannot infer causality.

  • Case Study. Purpose: in-depth analysis of individuals or cases. Key features: rich qualitative data. Limitations: low generalizability; potential bias.

  • Self-Report/Survey. Purpose: assess attitudes, beliefs, and behaviors. Key features: efficient data collection. Limitations: response bias; social desirability.

  • Correlational. Purpose: examine relationships between variables. Key features: correlation coefficient (r). Limitations: no causal inference; confounds.

  • Experimental. Purpose: test cause and effect. Key features: manipulation, control, random assignment. Limitations: may lack external validity.
