Research Methods in Psychology: Foundations, Designs, and Ethics
Introduction to Research Methods
Research methods are essential in psychology to systematically investigate questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense or anecdotal beliefs.
Purpose of Research: To test assumptions, solve real-world problems, and understand how psychological phenomena work.
Example: Facilitated communication was once believed to help nonverbal individuals communicate, but controlled studies showed that the facilitators, not the individuals, were unintentionally authoring the messages.
Formulating Research Questions
Identifying a Research Question
Research begins with a clear, focused question based on curiosity, observation, or the need to address a problem.
Sources of Research Questions:
Common sense assumptions
Observations in the real world
Solving real-world problems
Understanding how something works
Sampling in Psychological Research
Populations and Samples
Researchers rarely study entire populations; instead, they select samples that represent the population of interest.
Population: The entire group of people relevant to the research question (e.g., all PSYC1010 students at York University).
Sample: A smaller group drawn from the population who actually participate in the study (e.g., 20 students from the class).
Random Selection and Generalizability
Random selection ensures every member of the population has an equal chance of being chosen, which increases the generalizability of findings.
Generalizability: The extent to which results from a sample apply to the broader population.
Importance: Especially critical in experimental research aiming for broad applicability.
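To make this concrete, here is a minimal Python sketch of random selection; the roster size and sample size are hypothetical:

```python
import random

# Hypothetical population: a roster of 500 PSYC1010 students.
population = [f"student_{i}" for i in range(1, 501)]

# Random selection: every student has an equal chance of being chosen,
# which supports generalizing from the sample to the population.
sample = random.sample(population, k=20)  # 20 students, drawn without replacement
print(sample)
```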
Operational Definitions
Defining Variables for Measurement
Operational definitions specify how abstract concepts are measured or manipulated in a study.
Variable: Any characteristic or factor that can vary (e.g., aggression, stress).
Operational Definition: The specific procedures used to measure or manipulate a variable.
Example: Aggression in children could be operationalized as the number of times a child hits or yells during a play session.
Example: Stress in university students could be measured using a standardized questionnaire or physiological indicators like cortisol levels.
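As a sketch of how an operational definition turns an abstract concept into a concrete number, the hypothetical example below sums responses to a 10-item stress questionnaire (the items and scale are illustrative, not a validated instrument):

```python
# Hypothetical operational definition of "stress": the total score on a
# 10-item questionnaire, each item rated 1 (never) to 5 (very often).
responses = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]  # one student's illustrative answers

assert all(1 <= r <= 5 for r in responses), "each item must be rated 1-5"

stress_score = sum(responses)  # possible range: 10 (low stress) to 50 (high)
print(f"Operationalized stress score: {stress_score}")
```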
Overview of Research Designs
The Methods Toolbox
Psychological research employs various methods, each suited to different types of questions.
Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys
Correlational Designs: Examine relationships between variables
Experimental Designs: Test cause-and-effect relationships
Validity in Research
Internal and External Validity
Validity refers to the accuracy and applicability of research findings.
Internal Validity: How well a study rules out alternative explanations for its results (e.g., through control of confounding variables).
External Validity: How well findings generalize to other people, settings, and real-world situations beyond the study.
Descriptive Research Methods
Naturalistic Observation
Observing behavior in its natural context without intervention.
| Advantages | Disadvantages |
|---|---|
| High external validity (generalizable) | Lack of control |
| Rich, detailed information | Time- and resource-intensive |
| Sometimes the only possible option | Observer bias |
| | Cannot draw cause-and-effect conclusions |
Example: Observing how often university students use laptops in class for non-class-related reasons.
Case Studies
In-depth analysis of a single individual or setting, often used for rare or unusual cases.
Advantages: Rich, detailed data; useful for rare phenomena.
Disadvantages: Low external validity; potential for researcher bias.
Example: Studying the behavior and history of a person with a rare brain injury.
Self-Report/Survey Methods
Collecting data by asking participants to report on their own behaviors, attitudes, or feelings.
Advantages: Efficient for gathering large amounts of data.
Disadvantages: Susceptible to response bias, social desirability, and misunderstanding of questions.
Example: Using questionnaires to assess stress levels in students.
Reliability and Validity of Measures
Reliability
Reliability refers to the consistency of a measure.
Test-Retest Reliability: Consistency of scores over time.
Inter-Rater Reliability: Agreement between different observers or raters (e.g., Cohen's kappa).
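Cohen's kappa corrects raw agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. A minimal Python sketch, using hypothetical codings of ten play sessions by two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labeled the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 sessions as aggressive ("A") or not ("N").
rater_1 = ["A", "N", "A", "A", "N", "N", "A", "N", "A", "N"]
rater_2 = ["A", "N", "A", "N", "N", "N", "A", "N", "A", "A"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # 0.60 here
```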
Validity
Validity is the extent to which a measure assesses what it claims to measure.
Example: A feline preference scale should accurately measure how much a person likes cats, not just their general attitude toward animals.
Note: A test must be reliable to be valid, but a reliable test is not necessarily valid.
Correlational Research
Correlational/Non-Experimental Methods
These methods examine the relationship between variables without manipulation.
Correlation Coefficient (r): Ranges from -1.0 to +1.0, indicating the strength and direction of a relationship.
Scatter Plots: Visual representations of relationships between variables.
Example: Studying the relationship between texting speed and relationship drama.
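A minimal Python sketch of the Pearson correlation coefficient r, using hypothetical hours-of-study and exam-score data (any paired dataset would work the same way):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of study vs. exam score for 6 students.
hours = [1, 2, 3, 4, 5, 6]
scores = [55, 60, 58, 70, 72, 80]
print(f"r = {pearson_r(hours, scores):+.2f}")  # strong positive correlation
```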
Correlation vs. Causation
Correlation does not imply causation; relationships may be due to third variables or confounds.
Example: Kids with dogs may be happier, but other factors (e.g., family environment) could explain the association.
Advantages and Disadvantages of Correlational Designs
| Advantages | Disadvantages |
|---|---|
| Can establish trends across large data sets | Cannot infer causality |
| Useful for prediction | Third-variable (confounding) problem |
| Sometimes necessary for ethical reasons | |
Experimental Research
Experimental Method
Experiments are designed to test causal relationships by manipulating one variable (independent variable, IV) and measuring its effect on another (dependent variable, DV).
Random Assignment: Participants are randomly assigned to experimental or control groups to control for confounds.
Operationalization: IVs should have at least two levels (e.g., treatment vs. control).
Example: Does listening to music improve test performance? IV: Music exposure; DV: Test scores; Control: No music.
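Here is a minimal sketch of random assignment for this hypothetical music study; the participant labels and group sizes are illustrative:

```python
import random

# Hypothetical participant pool for the music-and-test-performance study.
participants = [f"p{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split evenly between the two IV levels,
# so pre-existing differences are spread by chance across conditions.
random.shuffle(participants)
music_group = participants[:10]    # IV level 1: listens to music
control_group = participants[10:]  # IV level 2: no music

print("Music:", music_group)
print("Control:", control_group)
```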
Internal Validity and Confounds
Internal Validity: The degree to which the study design allows for confident conclusions about causality.
Confounding Variable: An extraneous variable that varies with the IV and could provide an alternative explanation for results.
Example: In a study on mood and generosity, confounds could include participants' prior experiences or expectations.
Classic Example: Stanford Marshmallow Experiment
Tested delay of gratification in children and its relation to later outcomes (e.g., SAT scores, BMI).
Replication studies found weaker correlations and highlighted the influence of socioeconomic status (SES).
Experimental Bias and Demand Characteristics
Expectancy Effect: Changes in participant behavior due to researcher expectations; controlled by double-blind designs.
Demand Characteristics: Cues that inform participants of the study's purpose, potentially altering their behavior.
Ethical Guidelines in Psychological Research
Ethical Principles
Informed Consent: Participants must be informed about the study and consent to participate.
Protection from Harm: Researchers must minimize physical and psychological risks.
Deception and Debriefing: Deception is sometimes necessary but must be justified and followed by full debriefing.
Special Populations: Additional protections for minors and vulnerable groups (e.g., assent from children).
Historical Example: Tuskegee Syphilis Study
Participants were not informed of their diagnosis and were denied treatment even after penicillin became the established cure, causing serious and lasting harm.
This unethical study led to the development of stricter ethical guidelines in research.