Research Methods in Psychology: Foundations, Designs, and Validity
Study Guide - Smart Notes
Week 2: Research Methods
Introduction to Research in Psychology
Research methods are essential in psychology for systematically investigating questions about behavior, cognition, and emotion. They help distinguish scientific findings from common sense assumptions and anecdotal observations.
Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.
Facilitated Communication: Example of why rigorous research is needed to validate interventions and avoid misleading conclusions.
Case Example: "Tell Them You Love Me" highlights the importance of evidence-based practices in communication disorders.
Formulating Research Questions
Identifying What to Study
Research begins with a clear question, often inspired by observations, common sense, or the need to solve practical problems.
Sources of Research Questions:
Common sense assumptions
Observations in the real world
Solving real-world problems
Understanding mechanisms of behavior
Sampling in Psychological Research
Populations vs. Samples
Researchers must define who will participate in their studies. The distinction between populations and samples is crucial for generalizability.
Population: The entire group of interest (e.g., all PSYC1010 students at York).
Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).
Random Selection and Generalizability
Random selection ensures every member of the population has an equal chance of being chosen, which is vital for making findings generalizable.
Random Selection: Reduces bias and increases the representativeness of the sample.
Generalizability: The extent to which findings apply to the broader population.
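A minimal sketch of random selection using Python's standard library; the population size, student IDs, and sample size are invented for illustration, not taken from the course materials:

```python
import random

# Hypothetical population: every student enrolled in PSYC1010 (IDs are invented)
population = [f"student_{i:03d}" for i in range(1, 501)]  # 500 students

random.seed(42)  # fixed seed only so the example is reproducible
# Random selection: every member of the population has an equal chance of being chosen
sample = random.sample(population, k=20)

print(sample[:5])  # a few of the 20 randomly selected participants
```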
Operational Definitions
Variables and Measurement
Operational definitions translate abstract concepts into measurable and observable procedures.
Variable: Any factor or characteristic that can vary.
Operational Definition: Specifies how a variable is measured or manipulated in a study.
Examples:
Studying aggression in children: Number of aggressive acts observed during play.
Measuring stress levels in university students: Scores on a standardized stress questionnaire.
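As a sketch of how an operational definition turns an abstract construct into something measurable, the function below scores a hypothetical 10-item stress questionnaire. The number of items, rating scale, and scoring rule are assumptions for illustration, not a real standardized instrument:

```python
def stress_score(responses):
    """Operationalize 'stress' as the sum of 10 items rated 1 (never) to 5 (very often).

    `responses` is a list of 10 integer ratings; higher totals mean higher reported stress.
    (Hypothetical scoring rule, for illustration only.)
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

# One participant's made-up ratings on the 10 items:
print(stress_score([3, 4, 2, 5, 3, 3, 4, 2, 1, 4]))  # -> 31
```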
Overview of Research Designs
The Methods Toolbox
Psychological research employs various designs, each suited to different questions and levels of control.
Descriptive Methods: Naturalistic observation, case studies, self-report measures/surveys.
Correlational Designs: Examine relationships between variables.
Experimental Designs: Test cause-and-effect relationships.
Validity in Research
Internal vs. External Validity
Validity refers to the accuracy and applicability of research findings.
Internal Validity: The extent to which a study rules out confounds and measures variables accurately, so observed effects can be attributed to the factors under study.
External Validity: The extent to which findings generalize to people and settings beyond the study.
Descriptive Research Methods
Naturalistic Observation
Observing behavior in its natural context without intervention.
| Advantages | Disadvantages |
|---|---|
| High external validity (generalizable); rich, detailed information; sometimes the only possible option | Lack of control; time- and resource-consuming; observer bias; cannot draw cause-and-effect conclusions |
Example: Observing how often university students use laptops in class for non-class-related reasons.
Case Studies
In-depth analysis of a single individual or setting, often used for rare or unusual phenomena.
Advantages: Rich, detailed descriptions; useful for rare cases.
Disadvantages: Low external validity; researcher bias.
Example: Studying the behavior and history of a person with a rare neurological disorder.
Self-Report/Survey Methods
Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.
Advantages: Efficient for large samples; can assess subjective experiences.
Disadvantages: Response biases (e.g., social desirability), misunderstanding of questions.
Example: Using questionnaires to measure stress levels among students.
Evaluating Measures: Reliability and Validity
Reliability
Reliability refers to the consistency of a measure.
Test-Retest Reliability: Consistency of scores across time points.
Inter-Rater Reliability: Consistency across different observers.
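A minimal sketch of checking inter-rater reliability: two observers independently count a child's aggressive acts across made-up play sessions, and we compute the proportion of sessions on which their codes agree exactly (simple percent agreement; real studies often use stricter statistics such as Cohen's kappa):

```python
# Made-up counts of aggressive acts coded by two independent observers
rater_a = [2, 0, 3, 1, 4, 2, 0, 5]
rater_b = [2, 0, 3, 2, 4, 2, 1, 5]

# Percent agreement: how often the two observers gave the same code
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Exact agreement: {agreement:.0%}")  # 6 of 8 sessions -> 75%
```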
Validity
Validity is the extent to which a measure assesses what it claims to measure.
High Validity: The measure accurately reflects the intended construct.
Example: A feline preference scale should include items that genuinely reflect liking cats.
Correlational (Non-Experimental) Methods
Examining Relationships Between Variables
Correlational designs assess the strength and direction of relationships between variables without manipulation.
Correlation Coefficient: Ranges from -1.0 to +1.0.
Positive Correlation: Both variables increase together.
Negative Correlation: One variable increases as the other decreases.
Zero Correlation: No relationship.
Example: Relationship between texting speed and relationship drama.
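A minimal sketch of computing a correlation coefficient by hand in Python, using made-up data on study hours and exam scores (the variables and values are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: ranges from -1.0 to +1.0."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

hours_studied = [2, 4, 5, 7, 9]
exam_scores   = [55, 62, 70, 74, 85]
print(round(pearson_r(hours_studied, exam_scores), 2))  # close to +1: a strong positive correlation
```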
Correlation vs. Causation
Correlation does not imply causation. Multiple explanations are possible, including third variables (confounds).
Third Variable Problem: An outside factor influences both variables, creating a misleading association.
Example: Kids with dogs may be happier due to family environment, not dog ownership itself.
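A small simulation of the third-variable problem, with all names and numbers invented for illustration: a "family environment" score influences both dog ownership and child happiness, so the two end up correlated even though neither directly causes the other. `statistics.correlation` (Python 3.10+) computes Pearson's r.

```python
import random
import statistics

random.seed(0)

family_env = [random.gauss(0, 1) for _ in range(500)]       # hypothetical third variable
dog_owning = [f + random.gauss(0, 1) for f in family_env]   # partly driven by family environment
happiness  = [f + random.gauss(0, 1) for f in family_env]   # also partly driven by family environment

# Dog ownership never appears in the happiness equation, yet the two correlate
# because family environment influences both (the third-variable problem).
print(round(statistics.correlation(dog_owning, happiness), 2))  # well above zero
```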
| Advantages | Disadvantages |
|---|---|
| Can establish trends; good for describing behavior; can predict future behavior; useful when experiments would be unethical | Cannot infer causality; third-variable/confounding issues |
Experimental Methods
Establishing Cause and Effect
Experimental designs manipulate one variable (independent variable, IV) and measure its effect on another (dependent variable, DV), with random assignment to conditions.
Independent Variable (IV): Manipulated by researcher (e.g., mood induction via music).
Dependent Variable (DV): Measured outcome (e.g., tipping percentage).
Control Condition: Does not receive the manipulation; serves as a baseline for comparison.
Random Assignment: Ensures groups are equivalent at the start.
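A minimal sketch of random assignment with hypothetical participant IDs and condition names, shuffling participants so each has an equal chance of ending up in the experimental or control group:

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(7)  # fixed seed only so the example is reproducible
random.shuffle(participants)

# First half hears the mood-induction music (IV present); second half is the control condition
experimental_group = participants[:10]
control_group = participants[10:]

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```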
Confounding Variables
Confounds are variables other than the IV that may affect the DV, threatening internal validity.
Example: In a study on mood and generosity, time of day or participant personality could be confounds.
Classic Experiment Example: Stanford Marshmallow Experiment
Delay of Gratification
Preschoolers' ability to delay gratification (waiting to receive a second marshmallow) was linked to later outcomes such as SAT scores and BMI. Later replications found much weaker correlations, with socioeconomic status accounting for much of the association.
Key Point: Experimental design allows for testing causal hypotheses, but results must be interpreted in context.
Experimental Bias and Ethics
Expectancy Effects and Demand Characteristics
Biases can arise when researchers or participants alter behavior based on expectations or perceived study purpose.
Expectancy Effect: Researcher expectations influence participant behavior.
Demand Characteristics: Participants guess study purpose and change behavior.
Solution: Use double-blind designs and conceal study purpose.
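One common way a double-blind procedure is implemented is to replace condition labels with opaque codes before data collection, so neither the experimenter running sessions nor the participant knows who is in which condition until the code key is revealed at analysis. A rough sketch under those assumptions (condition names and participant IDs are invented):

```python
import random

# The key linking codes to conditions is held by someone not running the sessions
conditions = {"A": "mood-induction music", "B": "no music (control)"}

participants = [f"P{i:02d}" for i in range(1, 11)]
random.seed(3)

# Each participant gets a condition code; the experimenter's run sheet shows only
# the codes, so neither experimenter nor participant knows which condition is which.
assignment_key = {p: random.choice(list(conditions)) for p in participants}

print([(p, assignment_key[p]) for p in participants])
```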
Ethical Guidelines in Human Research
Ethics are central to psychological research, ensuring participant safety and informed consent.
Informed Consent: Participants must be fully informed about the study.
Protection from Harm: Minimize physical and psychological risks.
Deception and Debriefing: Deception is allowed only when necessary and must be followed by full debriefing.
Special Populations: Extra protections for minors and vulnerable groups.
Historical Example: Tuskegee Syphilis Study illustrates the consequences of unethical research practices.
Additional info: Ethical standards have evolved to protect participants and ensure scientific integrity.