
Research Methods in Psychology: Foundations and Applications


Research Methods in Psychology

Introduction to Research Methods

Research methods are essential in psychology to systematically investigate questions about behavior, cognition, and emotion. They help distinguish between common sense assumptions and scientifically validated knowledge.

  • Purpose of Research: To test assumptions, solve real-world problems, and understand how psychological processes work.

  • Example: Facilitated communication was once believed to help nonverbal individuals communicate, but controlled studies revealed that the responses reflected the facilitators' unintentional influence rather than the individuals' own communication.

Formulating Research Questions

Developing a Research Question

Effective research begins with a clear, focused question. This question often arises from observations, common sense, or the need to solve practical problems.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding how something works

Sampling in Psychological Research

Populations and Samples

Researchers rarely study entire populations. Instead, they select samples that represent the population of interest.

  • Population: The entire group of people relevant to the research question (e.g., all PSYC1010 students at York University).

  • Sample: A smaller group drawn from the population who actually participate in the study (e.g., 20 students from the class).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which increases the generalizability of findings.

  • Importance: Helps ensure the sample accurately represents the population, which is crucial for studies aiming for generalizability (a brief sketch of the idea follows below).
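
A minimal sketch of simple random selection in Python; the population size, sample size, and student labels are invented purely for illustration and are not from the course materials:

```python
import random

# Invented example: a population of 800 students, from which 20 are sampled.
population = [f"student_{i}" for i in range(1, 801)]

# random.sample gives every student an equal chance of being chosen,
# which is the defining property of random selection.
sample = random.sample(population, k=20)

print(sample)
```

Because every member is equally likely to be drawn, a sufficiently large random sample should resemble the population as a whole, which is what supports generalizing the findings.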

Operational Definitions

Defining Variables for Measurement

Operational definitions specify how abstract concepts are measured or manipulated in a study, making research questions testable and observable.

  • Variable: Any characteristic or factor that can vary.

  • Operational Definition: The specific procedures used to measure or manipulate a variable.

  • Example:

    • Studying aggression in children: Number of aggressive acts observed during playtime.

    • Measuring stress in university students: Self-reported stress scores on a standardized questionnaire.

The Methods Toolbox

Overview of Research Designs

Psychological research employs various methods, each suited to different types of questions.

  • Descriptive Methods: Naturalistic observation, case studies, self-report measures and surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships by manipulating variables.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and generalizability of research findings.

  • Internal Validity: The extent to which a study rules out alternative explanations for its results (e.g., by controlling confounding variables), allowing cause-and-effect conclusions.

  • External Validity: The extent to which findings generalize beyond the study to other people, settings, and situations, including the real world.

Descriptive Research Methods

Naturalistic Observation

Observing behavior in its natural context without intervention.

  • Advantages: High external validity (generalizable); rich, detailed information; sometimes the only possible option.

  • Disadvantages: Lack of control; time- and resource-consuming; observer bias; cannot draw cause-and-effect conclusions.

  • Example: Observing how often university students use laptops in class for non-class-related reasons.

Case Studies

In-depth analysis of a single individual or setting, often used for rare or unusual phenomena.

  • Advantages: Rich, detailed descriptions; sometimes the only method available.

  • Disadvantages: Low external validity; potential for researcher bias.

  • Example: Studying the behavior and history of a person with a rare brain injury.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Advantages: Efficient for gathering large amounts of data.

  • Disadvantages: Susceptible to response bias, social desirability, and misunderstanding of questions.

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency of scores over time. Measured by the correlation between the same participants' scores at two time points.

  • Inter-Rater Reliability: Consistency between different observers or raters. Measured by agreement statistics such as Cohen's kappa (a short computational sketch follows this list).
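
A minimal computational sketch of both reliability checks, assuming SciPy and scikit-learn are available; all scores and ratings below are invented for illustration:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Test-retest reliability: correlate the same participants' scores at two time points.
time1 = np.array([12, 18, 9, 22, 15, 20, 11, 17])
time2 = np.array([14, 17, 10, 21, 16, 19, 12, 18])
r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability (r): {r:.2f}")

# Inter-rater reliability: agreement between two observers coding the same behavior.
rater_a = ["aggressive", "calm", "calm", "aggressive", "calm", "aggressive"]
rater_b = ["aggressive", "calm", "aggressive", "aggressive", "calm", "aggressive"]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Inter-rater reliability (Cohen's kappa): {kappa:.2f}")
```

Higher values of r and kappa indicate more consistent measurement; how high is "high enough" depends on the research context.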

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • Note: A test must be reliable to be valid, but a reliable test is not necessarily valid.

  • Example: A scale measuring preference for cats should include items directly related to liking cats.

Correlational (Non-Experimental) Methods

Examining Relationships Between Variables

Correlational research assesses the strength and direction of relationships between variables without manipulation.

  • Correlation Coefficient (r): Ranges from -1.0 (perfect negative) to +1.0 (perfect positive); 0 indicates no linear relationship (a brief computational sketch follows this list).

  • Scatter Plots: Visual representation of the relationship between two variables.

  • Example: Relationship between texting speed and relationship drama.
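A minimal sketch of computing r, assuming SciPy is available; the variable names and numbers below are invented for illustration rather than taken from the examples above:

```python
import numpy as np
from scipy.stats import pearsonr

# Two invented variables measured on the same eight participants.
hours_studied = np.array([2, 5, 1, 8, 4, 7, 3, 6])
exam_score = np.array([55, 70, 50, 88, 66, 81, 60, 75])

r, p_value = pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}")      # near +1.0: strong positive relationship
print(f"p = {p_value:.3f}")

# A scatter plot of hours_studied (x-axis) against exam_score (y-axis) would show
# the same pattern visually; note that even a strong r says nothing about causation.
```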

Correlation vs. Causation

  • Correlation does not imply causation. Possible explanations include:

    • A causes B

    • B causes A

    • A and B are both caused by a third variable (confound)

  • Example: Kids with dogs are happier, but a third variable (e.g., family environment) may explain the association.

  • Advantages: Can establish trends across large datasets; good for describing and predicting behavior; useful when experiments would be unethical or impractical.

  • Disadvantages: Cannot infer causality; susceptible to third-variable (confounding) problems.

Experimental Methods

Establishing Causality

Experiments manipulate one or more variables to determine their effect on other variables, allowing for causal inferences.

  • Independent Variable (IV): Manipulated by the researcher (e.g., mood induction via music).

  • Dependent Variable (DV): Measured outcome (e.g., tipping percentage).

  • Random Assignment: Participants are randomly assigned to experimental or control groups so that potential confounds are spread evenly across conditions by chance (a brief sketch follows this list).
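
A minimal sketch of random assignment in Python; the participant IDs, group sizes, and music conditions are invented for illustration:

```python
import random

# Invented IDs for 20 participants.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split: each participant is equally likely to end up in either group,
# so pre-existing differences are distributed across conditions by chance.
random.shuffle(participants)
experimental_group = participants[:10]  # e.g., hear upbeat music (mood induction IV)
control_group = participants[10:]       # e.g., hear neutral music

print("Experimental:", experimental_group)
print("Control:", control_group)
```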

Internal Validity and Confounds

  • Confounding Variable: An extraneous variable that varies with the IV and may provide an alternative explanation for results.

  • Example: In a study on mood and generosity, background music volume could be a confound if not controlled.

Classic Example: Stanford Marshmallow Experiment

  • Tested delay of gratification in preschoolers; delay time was related to later outcomes like SAT scores and BMI.

  • A later large-scale replication found only weak correlations once socioeconomic status was taken into account.

Experimental Bias and Demand Characteristics

  • Expectancy Effect: Researchers' expectations unintentionally influence participants' behavior or how results are recorded. Controlled by double-blind designs, in which neither participants nor experimenters know who is in which condition.

  • Demand Characteristics: Cues in a study that lead participants to guess its purpose and alter their behavior. Reduced by disguising the study's true purpose.

Ethical Guidelines in Human Research

Principles and Historical Context

Ethical guidelines protect participants from harm and ensure informed consent, confidentiality, and debriefing.

  • Informed Consent: Participants must be fully informed about the study and consent to participate.

  • Protection from Harm: Researchers must minimize physical and psychological risks.

  • Deception and Debriefing: If deception is necessary, participants must be debriefed afterward.

  • Historical Example: The Tuskegee Syphilis Study violated ethical principles by withholding treatment and information from participants.
