
Research Methods in Psychology: Foundations, Designs, and Validity

Study Guide - Smart Notes


Week 2: Research Methods

Introduction to Research in Psychology

Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.

  • Facilitated Communication: Claims about interventions, such as facilitated communication for individuals with autism, must be evaluated through research to determine their validity and effectiveness.

  • Example: Controlled studies of facilitated communication have shown that the messages typically originate with the facilitator rather than the individual, highlighting the importance of rigorous research.

Formulating Research Questions

Identifying What to Study

Developing a research question is the first step in the scientific process. It involves identifying areas of interest and gaps in knowledge.

  • Sources of Research Questions:

    • Common sense assumptions

    • Observations in the real world

    • Solving real-world problems

    • Understanding how something works

Sampling in Psychological Research

Populations vs. Samples

Researchers must define the group they wish to study (population) and select a manageable subset (sample) for their research.

  • Population: The entire group of interest (e.g., all PSYC1010 students at York University).

  • Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).

Random Selection and Generalizability

Random selection ensures every member of the population has an equal chance of being chosen, which is crucial for generalizing findings.

  • Importance: Increases the representativeness of the sample and the external validity of the study.
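
Random selection is easy to picture in code. Below is a minimal Python sketch, assuming a hypothetical roster of 500 students; the IDs, roster size, and sample size are illustrative, not from the course materials:

```python
import random

# Hypothetical roster for a PSYC1010 section (illustrative IDs only).
population = [f"student_{i}" for i in range(1, 501)]

# Simple random selection: every student has an equal chance of being
# drawn, which makes the sample representative of the population on average.
sample = random.sample(population, k=20)
print(sample)
```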

Operational Definitions

Defining Variables for Measurement

Operational definitions translate abstract concepts into measurable and observable procedures.

  • Variable: Any characteristic or factor that can vary.

  • Operational Definition: Specifies how a variable is measured or manipulated in a study.

  • Examples:

    • Studying aggression in children: Number of aggressive acts observed during play.

    • Measuring stress levels in university students: Self-reported stress scores on a validated questionnaire.
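
As a minimal sketch of the second example, the snippet below operationalizes "stress" as the sum of ten self-report ratings; the instrument, item count, and scoring rule are hypothetical assumptions, not an actual validated scale:

```python
# Hypothetical operational definition of "stress": the total of ten
# self-report items, each rated 1 (not at all) to 5 (very much).
# The instrument and scoring rule are illustrative, not a real scale.
def stress_score(item_ratings):
    assert len(item_ratings) == 10, "scale has exactly ten items"
    assert all(1 <= r <= 5 for r in item_ratings), "ratings must be 1-5"
    return sum(item_ratings)  # possible scores: 10 (low) to 50 (high)

print(stress_score([3, 4, 2, 5, 3, 3, 4, 2, 1, 3]))  # -> 30
```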

Overview of Research Methods

The Methods Toolbox

Psychological research employs various methods, each suited to different types of questions.

  • Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys.

  • Correlational Designs: Examine relationships between variables.

  • Experimental Designs: Test cause-and-effect relationships.

Validity in Research

Internal and External Validity

Validity refers to the accuracy and generalizability of research findings.

  • Internal Validity: The degree to which a study is well conducted and free of confounding variables, so that observed effects can be attributed to the variables under study.

  • External Validity: The extent to which findings generalize beyond the study to other people, settings, and situations.

Descriptive Methods

Naturalistic Observation

Observing behavior in its natural environment without intervention.

  • Advantages: High external validity (generalizable); rich, detailed information; sometimes the only feasible method.

  • Disadvantages: Lack of control; time- and resource-intensive; observer bias; cannot draw cause-and-effect conclusions.

  • Example: Observing how often university students use laptops in class for non-class-related reasons.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Advantages: Rich, detailed descriptions; useful for rare cases.

  • Disadvantages: Low external validity; researcher bias.

  • Example: Case study of Russell Williams, examining behavioral escalation and criminal activity.

Self-Report/Survey Methods

Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.

  • Issues: Careless responding, misunderstanding questions, response bias, social desirability bias.

  • Example: Narcissism scale items (e.g., "I know I am special because everyone keeps telling me so.")

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measure.

  • Test-Retest Reliability: Consistency of scores across time points.

  • Inter-Rater Reliability: Consistency of scores across different raters; Cohen's kappa is a common statistic for quantifying their agreement (see the sketch below).
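
Both forms of reliability have standard quantitative expressions: test-retest reliability is usually a correlation between two administrations, and inter-rater reliability can be summarized with Cohen's kappa. Here is a minimal Python sketch using invented scores and codings, purely for illustration:

```python
import numpy as np

# Test-retest: correlate scores from two administrations of the same
# measure. Hypothetical stress scores for 8 students (invented data).
time1 = np.array([12, 18, 9, 22, 15, 30, 11, 25])
time2 = np.array([14, 17, 10, 21, 16, 28, 12, 24])
print(f"test-retest r = {np.corrcoef(time1, time2)[0, 1]:.2f}")

# Inter-rater: Cohen's kappa corrects raw agreement for chance agreement.
# Hypothetical codings of 10 play episodes by two raters.
rater_a = ["agg", "not", "agg", "agg", "not", "not", "agg", "not", "agg", "not"]
rater_b = ["agg", "not", "agg", "not", "not", "not", "agg", "not", "agg", "agg"]

def cohens_kappa(r1, r2):
    n = len(r1)
    labels = set(r1) | set(r2)
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each rater's marginal proportions.
    p_chance = sum((r1.count(lab) / n) * (r2.count(lab) / n) for lab in labels)
    return (p_observed - p_chance) / (1 - p_chance)

print(f"Cohen's kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # -> 0.60
```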

Validity

Validity is the extent to which a measure assesses what it claims to measure.

  • Example: Feline preference scale (Likert scale 1-7) measuring attitudes toward cats.

  • A test must be reliable to be valid, but a reliable test can still be invalid.

Correlational/Non-Experimental Methods

Examining Relationships Between Variables

Correlational designs assess the strength and direction of relationships between variables without manipulation.

  • Correlation Coefficient: Ranges from -1.0 to +1.0; the sign gives the direction of the relationship and the absolute value its strength (see the sketch after this list).

  • Examples: Relationship between texting speed and relationship drama; video games and aggression.
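
The coefficient itself is Pearson's r: the covariance of the two variables scaled by both standard deviations. A minimal sketch with invented numbers (not data from any study):

```python
import numpy as np

# Hypothetical data for 8 children: weekly gaming hours and aggressive
# acts observed during play (invented numbers for illustration).
x = np.array([1, 3, 5, 2, 8, 6, 4, 7], dtype=float)
y = np.array([0, 2, 3, 1, 5, 4, 2, 4], dtype=float)

# Pearson's r: covariance divided by the product of the standard
# deviations, which forces the result into the range -1.0 to +1.0.
r = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())
print(f"r = {r:.2f}")  # positive sign: the variables rise together
```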

Correlation vs. Causation

Correlation does not imply causation. Multiple explanations are possible:

  • A causes B

  • B causes A

  • A and B are both influenced by a third variable

Third Variables/Confounds

Confounding variables can create misleading associations between variables.

  • Example: "Kids with dogs are happier"—other factors (e.g., family income) may influence both variables.

  • Advantages: Can establish trends across large data sets; good for describing and predicting behavior; useful when experiments are unethical.

  • Disadvantages: Cannot infer causality; third-variable problem (confounding).

Experimental Methods

Establishing Cause and Effect

Experimental designs manipulate one or more variables to determine causal relationships.

  • Independent Variable (IV): Manipulated by the researcher.

  • Dependent Variable (DV): Measured outcome affected by the IV.

  • Random Assignment: Participants are randomly placed into experimental or control groups, which balances individual differences across conditions on average (see the sketch below).
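
A minimal sketch of random assignment (participant IDs and group sizes are illustrative):

```python
import random

# Hypothetical pool of 20 volunteers (IDs are illustrative).
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split. Chance alone decides each
# participant's condition, balancing individual differences on average.
random.shuffle(participants)
experimental_group = participants[:10]  # receives the manipulation (the IV)
control_group = participants[10:]       # comparison baseline

print(experimental_group)
print(control_group)
```

Note that random assignment (who gets which condition) is distinct from random selection (who gets into the study at all): the former protects internal validity, the latter external validity.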

Internal Validity and Confounds

Confounding variables threaten internal validity by providing alternative explanations for results.

  • Example: Mood induction via music (IV) and tipping percentage (DV); confounds must be controlled.

Classic Experiment Example: Stanford Marshmallow Experiment

Examined delay of gratification in preschoolers and its relation to later outcomes (e.g., SAT scores, BMI).

  • Replication studies found weaker correlations and highlighted socioeconomic status as a confound.

Experimental Bias and Demand Characteristics

  • Expectancy Effect: Researchers' expectations unintentionally influence participants' behavior or the recording of results.

  • Demand Characteristics: Participants guess the study's purpose and alter behavior.

  • Solutions: Double-blind designs and disguising the study's purpose.

Ethical Guidelines in Psychological Research

Protecting Participants

Ethical standards ensure the safety and rights of research participants.

  • Informed Consent: Participants must be informed about the study and consent to participate.

  • Protection from Harm: Researchers must minimize risks and discomfort.

  • Deception and Debriefing: If deception is used, participants must be debriefed afterward.

  • Historical Example: Tuskegee Syphilis Study—unethical practices led to the development of modern ethical guidelines.

