
Research Methods in Psychology: Foundations, Designs, and Validity

Study Guide - Smart Notes

Tailored notes based on your materials, expanded with key definitions, examples, and context.

Week 2: Research Methods

Introduction to Research in Psychology

Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.

  • Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.

  • Facilitated Communication: Example of why rigorous research is needed to validate interventions and claims.

  • Example: The documentary "Tell Them You Love Me" highlights the importance of research in evaluating communication methods for individuals with disabilities.

Formulating Research Questions

Identifying What to Study

Developing a research question is the first step in the scientific process. It involves identifying areas of interest and translating them into testable questions.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding how something works

Sampling in Psychological Research

Populations vs. Samples

Researchers must decide whom to study. The population is the entire group of interest, while a sample is a subset of that population who actually participate in the study.

  • Population: All individuals of interest (e.g., all PSYC1010 students at York University).

  • Sample: A smaller group drawn from the population (e.g., 20 students who participate in a study).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which is crucial for generalizing findings.

  • Random Selection: Increases representativeness and generalizability of results.

  • Importance: Especially critical in experimental studies aiming for broad applicability.
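As a concrete illustration, simple random sampling can be sketched in a few lines of Python. The population and sample sizes below are invented for illustration, not taken from the course materials:

```python
import random

# Simple random sampling: every member of the population has an equal
# chance of being chosen. Population and sample sizes are invented.
population = list(range(1, 501))        # e.g., IDs for 500 students
sample = random.sample(population, 20)  # draw 20 IDs without replacement

print(len(sample))                      # 20 participants
print(len(set(sample)) == 20)           # True: no one is sampled twice
```

Because `random.sample` draws without replacement, every ID is distinct, and because each ID is equally likely to be drawn, the sample is representative in expectation.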

Operational Definitions

Defining Variables for Measurement

An operational definition translates abstract concepts into measurable and observable procedures.

  • Variable: Any characteristic or factor that can vary.

  • Operational Definition: Specifies how a variable is measured or manipulated in a study.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress levels in university students: Scores on a standardized stress questionnaire.

Overview of Research Designs

The Methods Toolbox

Psychological research employs various designs, each suited to different questions and contexts.

  • Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships.

Validity in Research

Internal vs. External Validity

Validity refers to the accuracy and applicability of research findings.

  • Internal Validity: How well a study is conducted; the degree to which it rules out alternative explanations.

  • External Validity: The extent to which findings generalize beyond the study to other people, settings, and situations.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural environment without intervention.

  • Advantages: High external validity (generalizable); rich, detailed information; sometimes the only possible option.

  • Disadvantages: Lack of control; time- and resource-consuming; observer bias; cannot draw cause-and-effect conclusions.

  • Example: Observing how often university students use laptops in class for non-class-related reasons.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Advantages: Rich, detailed descriptions; useful for rare cases.

  • Disadvantages: Low external validity; researcher bias.

  • Example: Case study of Russell Williams, a former Canadian colonel involved in criminal activity.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Issues: Careless responding, misunderstanding questions, response bias, social desirability bias.

  • Example: Narcissism scale items (e.g., "I know I am special because everyone keeps telling me so.")

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency across time points.

  • Inter-Rater Reliability: Consistency across different raters.

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • High Validity: The measure accurately reflects the intended construct.

  • Example: Feline preference scale (Likert scale 1-7) to measure how much a person likes cats.

Correlational (Non-Experimental) Methods

Examining Relationships Between Variables

Correlational designs assess the strength and direction of relationships between variables without manipulation.

  • Correlation Coefficient (r): Ranges from -1.0 to +1.0.

  • Positive Correlation: Both variables increase together.

  • Negative Correlation: One variable increases as the other decreases.

  • Zero Correlation: No linear relationship between the variables.

  • Example: Relationship between texting speed and relationship drama.
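The coefficient itself can be computed directly from its definition (covariance divided by the product of the standard deviations). A minimal Python sketch, with invented data values:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of
    their standard deviations. Assumes equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: as one variable rises, the other rises -> positive r
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))   # 1.0
# As one rises, the other falls -> negative r
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 2))   # -1.0
```

The sign of r gives the direction of the relationship, and its distance from zero gives the strength; the two perfectly linear data sets above sit at the extremes of the -1.0 to +1.0 range.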

Correlation vs. Causation

Correlation does not imply causation. Multiple explanations are possible for observed relationships.

  • Possible Directions: A → B, B → A, or a third variable (confound) influences both.

  • Third Variable Problem: An outside factor creates a false association.

  • Example: "Kids with dogs are happier"—other factors may explain the relationship.

  • Advantages: Can establish trends across large data sets; good for describing and predicting behavior; useful when experiments are unethical.

  • Disadvantages: Cannot infer causality; third-variable/confounding issues.

Experimental Methods

Establishing Cause and Effect

Experimental designs manipulate one or more variables to determine causal effects.

  • Independent Variable (IV): Manipulated by the researcher.

  • Dependent Variable (DV): Measured outcome affected by the IV.

  • Random Assignment: Participants are randomly assigned to experimental or control groups.

  • Example: Does mood induction via music affect tipping behavior?
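Random assignment (distinct from the random selection discussed earlier) can be sketched the same way. The participant IDs and group labels below are hypothetical:

```python
import random

def random_assignment(participants, seed=None):
    """Shuffle participants, then split them into experimental and
    control groups so each person has an equal chance of either
    condition. A fixed seed makes the split reproducible."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return {"experimental": pool[:half], "control": pool[half:]}

groups = random_assignment(range(1, 21), seed=7)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

Because each participant is equally likely to land in either group, pre-existing differences (mood, personality, SES) are spread evenly across conditions in expectation, which is what lets the experimenter attribute differences in the DV to the IV.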

Confounding Variables

Confounds are variables other than the IV that may affect the DV, threatening internal validity.

  • Example: In the Stanford Marshmallow experiment, delay of gratification was linked to later outcomes, but socioeconomic status (SES) was a confound in replication studies.

Experimental Bias and Demand Characteristics

  • Expectancy Effect: Changes in participant behavior due to researcher expectations.

  • Demand Characteristics: Participants guess the study's purpose and alter behavior.

  • Solution: Use double-blind designs and conceal study purpose.

Ethical Guidelines in Psychological Research

Protecting Participants

Ethical standards ensure the safety and rights of research participants.

  • Informed Consent: Participants must be fully informed about the study.

  • Protection from Harm: Minimize risks and discomfort.

  • Deception and Debriefing: If deception is used, participants must be debriefed afterward.

  • Historical Example: The Tuskegee Syphilis Study violated ethical principles by withholding treatment and information from participants.

