
Study Guide - Smart Notes

Tailored notes based on your materials, expanded with key definitions, examples, and context.

Week 2: Research Methods

Introduction to Research in Psychology

Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.

  • Facilitated Communication: A discredited intervention that illustrates why rigorous research is needed to validate treatments and avoid misleading conclusions.

  • Purpose of Research: To solve real-world problems, test common sense assumptions, and understand how psychological phenomena work.

Formulating Research Questions

Identifying What to Study

Research begins with a clear question, often inspired by observations, theoretical gaps, or practical issues.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding mechanisms of behavior

Sampling in Psychological Research

Populations vs. Samples

Researchers must define who will participate in their studies. The distinction between populations and samples is crucial for generalizability.

  • Population: The entire group of interest (e.g., all PSYC1010 students at York).

  • Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which is vital for making findings generalizable.

  • Helps samples accurately represent populations.

  • Important for studies aiming for broad applicability (e.g., experiments).
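As a sketch of simple random selection (the population roster and sample size here are hypothetical, chosen only for illustration), Python's `random.sample` draws without replacement, giving every member of the population an equal chance of being chosen:

```python
import random

# Hypothetical population: a roster of 500 PSYC1010 students (IDs are made up)
population = [f"student_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the example draw is reproducible

# Simple random selection: every student has an equal chance of being drawn
sample = random.sample(population, k=20)

print(len(sample))       # 20 participants
print(len(set(sample)))  # 20 -- drawn without replacement, so no duplicates
```

Because the draw is random, each run (with a different seed) yields a different but equally representative sample of the population.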

Operational Definitions

Defining Variables for Measurement

Operational definitions translate abstract concepts into measurable and observable procedures.

  • Variable: Any characteristic or factor that can vary.

  • Operational Definition: Specifies how a variable is measured or manipulated in a study.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress levels in university students: Self-reported stress scale scores or physiological measures (e.g., cortisol levels).

Overview of Research Designs

The Methods Toolbox

Psychological research employs various designs, each suited to different questions and levels of control.

  • Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships by manipulating variables.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and applicability of research findings.

  • Internal Validity: How well a study is conducted; the degree to which it establishes a trustworthy cause-and-effect relationship.

  • External Validity: The extent to which findings generalize to real-world settings.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural environment without intervention.

Advantages:

  • High external validity (generalizable)

  • Rich, detailed information

  • Sometimes the only possible option

Disadvantages:

  • Lack of control

  • Time- and resource-consuming

  • Observer bias

  • Cannot draw cause-and-effect conclusions

  • Example: Observing how often university students use laptops in class for non-class-related reasons.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Provides rich, qualitative data.

  • Low external validity; findings may not generalize.

  • Example: Studying individuals with rare brain injuries.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Issues include careless responding, misunderstanding questions, and response bias (e.g., social desirability).

  • Example: Using Likert scales to measure attitudes toward cats.

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency across time points. Measured by correlation between scores at different times.

  • Inter-Rater Reliability: Consistency across different raters. Measured by statistics such as Cohen's kappa.
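As a sketch of how Cohen's kappa corrects raw agreement for chance agreement (the two raters and their codings below are invented for illustration), the statistic is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labeled the same
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters coding 10 play sessions as aggressive ("A") or not ("N")
a = ["A", "A", "N", "N", "A", "N", "N", "A", "N", "N"]
b = ["A", "N", "N", "N", "A", "N", "A", "A", "N", "N"]
print(round(cohens_kappa(a, b), 2))  # 0.58 -- moderate inter-rater reliability
```

Here the raters agree on 8 of 10 sessions (p_o = 0.8), but since both label "N" more often than "A", chance alone would produce p_e = 0.52 agreement, so kappa is well below the raw 80%.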

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • A test must be reliable to be valid, but a reliable test can still be invalid.

  • Example: A feline preference scale using a 1-7 Likert scale to measure how much a person likes cats.

Correlational (Non-Experimental) Methods

Examining Relationships Between Variables

Correlational designs assess the strength and direction of relationships between variables without manipulation.

  • Correlation Coefficient: Ranges from -1.0 to +1.0; the sign indicates the direction of the relationship (positive or negative), and larger absolute values indicate stronger relationships.

  • Scatter Plots: Visualize relationships between variables.

  • Correlation vs. Causation: Correlation does not imply causation; other explanations (reverse causality, third variables) are possible.
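A minimal sketch of computing a correlation coefficient (Pearson's r) from its definition; the study-hours and quiz-score data below are invented purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Sum of cross-products of deviations from each mean
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Square roots of the sums of squared deviations
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of study vs. quiz score for 6 students
hours = [1, 2, 3, 4, 5, 6]
score = [55, 60, 58, 70, 72, 80]
print(round(pearson_r(hours, score), 2))  # 0.95 -- strong positive correlation
```

Even with r this close to +1.0, the design is still correlational: the strong association alone cannot tell us whether studying raised scores, stronger students chose to study more, or a third variable (e.g., motivation) drove both.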

Third Variables/Confounds

A third variable is an outside factor that influences both variables, potentially creating a misleading association.

  • Example: "Kids with dogs are happier"—third variables could include family income, parental involvement, etc.

Advantages:

  • Can establish trends across large datasets

  • Good for describing behavior

  • Can predict future behavior

  • Sometimes necessary due to ethical issues

Disadvantages:

  • Cannot infer causal direction

  • Third-variable problem (confounding)

Experimental Methods

Establishing Cause and Effect

Experimental designs manipulate at least one variable and measure its effect on another, using random assignment to control groups.

  • Independent Variable (IV): Manipulated by the researcher.

  • Dependent Variable (DV): Measured outcome affected by the IV.

  • Random Assignment: Ensures groups are equivalent at the start.
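As a sketch of random assignment (the participant IDs and group sizes are hypothetical), shuffling the participant list and splitting it in half gives each person an equal chance of landing in either condition:

```python
import random

participants = [f"p{i:02d}" for i in range(1, 21)]  # 20 hypothetical volunteers

random.seed(7)  # fixed seed so the example assignment is reproducible
random.shuffle(participants)  # order no longer reflects how people signed up

# Split the shuffled list into equal experimental and control groups
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]

print(len(experimental), len(control))  # 10 10
```

With large enough samples, this procedure tends to equalize the groups on every pre-existing characteristic, known or unknown, which is what lets the experimenter attribute differences in the DV to the IV.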

Confounding Variables

Variables other than the IV that may affect the DV, threatening internal validity.

  • Example: Mood induction via music (IV) and tipping percentage (DV); confounds could include prior mood, personality, or context.

Classic Experiment Example: Stanford Marshmallow Experiment

  • Tested delay of gratification in preschoolers.

  • Found links between delay time and later outcomes (SAT scores, BMI).

  • Large-scale replication found only weak correlations, with differences by socioeconomic status.

Experimental Bias and Demand Characteristics

Expectancy Effects

Researchers' expectations can subtly influence participants' behavior or how outcomes are recorded. Double-blind designs, in which neither participants nor experimenters know group assignments, help prevent this.

Demand Characteristics

Participants may guess the study's purpose and alter their behavior. Masking the study's purpose can reduce this risk.

Ethical Guidelines in Psychological Research

Principles for Human Research

  • Informed Consent: Participants must be fully informed about the study and consent voluntarily.

  • Protection from Harm and Discomfort: Researchers must minimize risks.

  • Deception and Debriefing: If deception is necessary, participants must be debriefed afterward.

Historical Example: Tuskegee Syphilis Study

  • Participants were not informed of their diagnosis or provided with treatment after a cure was found.

  • Led to significant harm and highlighted the need for ethical standards in research.
