
Research Methods in Psychology: Foundations, Designs, and Validity

Study Guide - Smart Notes

Tailored notes based on your materials, expanded with key definitions, examples, and context.

Week 2: Research Methods

Introduction to Research in Psychology

Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.

  • Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.

  • Facilitated Communication: Example of why rigorous research is needed to validate interventions and claims.

  • Case Example: "Tell Them You Love Me" illustrates the importance of evidence-based approaches in communication disorders.

Formulating Research Questions

Identifying and Defining Research Questions

Effective research begins with a clear, focused question. This guides the choice of methods and interpretation of results.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding how something works

Sampling and Participants

Populations vs. Samples

Researchers must decide whom to study. The population is the entire group of interest, while the sample is a subset that actually participates in the study.

  • Population: The full group of people relevant to the research question (e.g., all PSYC1010 students at York).

  • Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which is crucial for generalizing findings.

  • Generalizability: The extent to which results apply to the broader population.

  • Importance: Accurate sampling is vital for studies aiming to make broad claims.
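The idea of random selection can be sketched in a few lines of Python. This is an illustrative toy, not part of the course materials: the population size and participant IDs are made up, and `random.sample` stands in for a real recruitment procedure.

```python
import random

# Hypothetical population: 500 PSYC1010 students (IDs are placeholders).
population = [f"student_{i}" for i in range(500)]

# Random selection: every member of the population has an equal chance of
# being chosen. random.sample draws without replacement, so each of the
# 20 selected participants is a distinct student.
sample = random.sample(population, k=20)

# A sample drawn this way supports generalizing results back to the
# population; a convenience sample (e.g., friends of the researcher) does not.
```
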

Operational Definitions

Variables and Operationalization

Operational definitions translate abstract concepts into measurable and observable procedures.

  • Variable: Any characteristic or factor that can vary (e.g., aggression, stress).

  • Operational Definition: Specifies how a variable is measured or manipulated in a study.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress in university students: Self-reported stress scale or cortisol levels.

Overview of Research Designs

The Methods Toolbox

Psychological research employs various designs, each suited to different questions and contexts.

  • Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and applicability of research findings.

  • Internal Validity: The degree to which a study's design rules out alternative explanations for its results, supporting cause-and-effect conclusions.

  • External Validity: How well findings generalize to real-world settings.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural environment without intervention.

  • Advantages: High external validity (generalizable); rich, detailed information; sometimes the only possible option.

  • Disadvantages: Lack of control; time- and resource-consuming; observer bias; cannot draw cause-and-effect conclusions.

  • Example: Observing how often university students use laptops in class for non-class-related reasons.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Advantages: Rich, detailed descriptions; sometimes the only feasible method.

  • Disadvantages: Low external validity; researcher bias.

  • Example: Case study of Russell Williams, examining behavioral escalation and criminal activity.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Issues: Assumes honesty and understanding; subject to careless responding, misunderstanding, response bias, and social desirability.

  • Example: Narcissism scale items (e.g., "I know I am special because everyone keeps telling me so.")

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency across time points.

  • Inter-Rater Reliability: Consistency across different raters.
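Inter-rater reliability can be illustrated with a simple percent-agreement calculation. The scenario and codes below are hypothetical (two observers coding the same ten play sessions for whether an aggressive act occurred), made up to show the idea rather than drawn from the notes.

```python
# Two raters independently code 10 play sessions:
# 1 = aggressive act observed, 0 = none. Made-up illustrative data.
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Count sessions where the raters gave the same code.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a) * 100

# Here the raters agree on 9 of 10 sessions (90% agreement), which
# suggests the coding scheme yields consistent results across raters.
```
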

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • High Validity: The measure accurately reflects the intended construct.

  • Example: Feline preference scale (Likert scale 1-7) for liking cats.

Correlational/Non-Experimental Methods

Correlation and Causation

Correlational designs examine the strength and direction of relationships between variables, but do not establish causality.

  • Correlation Coefficient: Ranges from -1.0 to +1.0.

  • Scatter Plots: Visualize relationships between variables.

  • Third Variables/Confounds: Outside factors that may create misleading associations.

  • Example: Relationship between texting speed and relationship drama; video games and aggression.

  • Advantages: Can establish trends across large data sets; good for describing and predicting behavior; sometimes necessary due to ethical issues.

  • Disadvantages: Cannot infer causal direction; third-variable problem (confounding variables).
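The correlation coefficient described above can be computed directly from its definition. The data here are invented for illustration (hours of video-game play paired with aggression scores), and the helper function is a hand-rolled sketch of Pearson's r, not a course-provided tool.

```python
# Hypothetical data: weekly hours of video-game play and aggression scores.
games = [2, 5, 1, 8, 4, 7, 3, 6]
aggro = [10, 14, 9, 18, 13, 16, 11, 15]

def pearson_r(xs, ys):
    """Pearson r: strength and direction of a linear relationship, -1 to +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(games, aggro)
# r is close to +1.0 for these numbers: the variables move together.
# But even a strong correlation cannot show that gaming *causes*
# aggression -- a third variable could drive both (confounding).
```
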

Experimental Methods

Experimental Design

Experiments test causal relationships by manipulating one variable (IV) and measuring its effect on another (DV), with random assignment to groups.

  • Independent Variable (IV): Manipulated by researcher.

  • Dependent Variable (DV): Measured outcome.

  • Random Assignment: Ensures groups are equivalent at the start.

  • Confounding Variables: Factors other than the IV that may affect the DV.

  • Example: Mood induction via music (IV) and tipping percentage (DV).
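Random assignment can be sketched as a shuffle-and-split. The participant IDs and condition names below are hypothetical, chosen to mirror the mood-induction example; the point is that chance, not the researcher, decides who lands in which group.

```python
import random

# Hypothetical pool of 40 participants.
participants = [f"p{i}" for i in range(40)]

# Random assignment: shuffle, then split into two equal groups so that
# pre-existing differences (mood, personality, etc.) spread evenly by chance.
random.shuffle(participants)
happy_music = participants[:20]  # IV level 1: upbeat music
sad_music = participants[20:]    # IV level 2: somber music

# Each group then experiences its level of the IV, and the DV
# (e.g., tipping percentage) is measured and compared across groups.
```
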

Internal Validity and Confounds

High internal validity means the study design rules out alternative explanations for observed effects.

  • Confounds: Threaten internal validity by providing alternative explanations.

  • Example: Stanford Marshmallow Experiment—delay of gratification and later outcomes.

Experimental Bias and Demand Characteristics

Biases can affect the validity of experimental findings.

  • Expectancy Effect: Researchers' expectations subtly influence participants' behavior or how results are recorded.

  • Double-Blind Designs: Neither participants nor experimenters know group assignments, preventing expectancy effects.

  • Demand Characteristics: Participants guess study purpose and alter behavior.

Ethical Guidelines in Psychological Research

Ethics and Human Participants

Ethical standards protect participants and ensure integrity in research.

  • Informed Consent: Participants must be informed about the study and consent to participate.

  • Protection from Harm: Researchers must minimize risks and discomfort.

  • Deception and Debriefing: If deception is used, participants must be debriefed afterward.

  • Historical Example: Tuskegee Syphilis Study—participants were deceived, denied informed consent, and left untreated, causing serious harm.
