Research Methods in Psychology: Foundations, Designs, and Validity
Study Guide - Smart Notes
Week 2: Research Methods
Introduction to Research in Psychology
Research methods are essential in psychology for systematically investigating questions about human behavior, cognition, and emotion. Rigorous research helps distinguish scientific findings from common sense assumptions and anecdotal observations.
Purpose of Research: To test assumptions, solve real-world problems, and understand psychological phenomena.
Facilitated Communication: Example of why research is needed—claims must be empirically tested to avoid misleading practices.
Formulating a Research Question
Every research project begins with a clear, focused question. This guides the choice of methods and interpretation of results.
Sources of Research Questions:
Common sense assumptions
Observations in the real world
Solving real-world problems
Understanding how something works
Sampling and Participants
Populations vs. Samples
Researchers must define who will participate in their studies. The distinction between populations and samples is crucial for generalizability.
Population: The entire group of interest (e.g., all PSYC1010 students at York).
Sample: A smaller group drawn from the population (e.g., 20 students who participate in the study).
Random Selection and Generalizability
Random selection ensures that every member of the population has an equal chance of being included in the sample, which is vital for making generalizable conclusions.
Importance:
Accurately represents the population
Reduces selection bias
Essential for experiments seeking generalizability
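To make random selection concrete, here is a minimal Python sketch (the population size and student IDs are hypothetical, not from the course materials):
```python
import random

# Hypothetical population: every PSYC1010 student (size assumed for illustration).
population = [f"student_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the example is reproducible
# random.sample gives every student an equal chance of being chosen.
sample = random.sample(population, k=20)

print(sample[:5])  # a few of the 20 randomly selected participants
```
Because every student has the same probability of selection, the sample's characteristics should, on average, mirror those of the full class.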
Operational Definitions
Variables and Operationalization
Operational definitions translate abstract concepts into measurable and observable procedures.
Variable: Any characteristic or factor that can vary (e.g., aggression, stress).
Operational Definition: Specifies how a variable is measured or manipulated in a study.
Examples:
Studying aggression in children: Number of aggressive acts observed during play.
Measuring stress levels in university students: Self-reported stress scale or physiological measures (e.g., cortisol levels).
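As a minimal sketch of operationalization (the coding scheme and rating items below are hypothetical), each abstract variable becomes an explicit measurement rule:
```python
# Operationalizing "aggression" as a count of coded acts in an
# observation log (the coding scheme is hypothetical).
AGGRESSIVE_ACTS = {"hit", "push", "grab_toy", "yell"}

def aggression_score(observation_log: list[str]) -> int:
    return sum(1 for act in observation_log if act in AGGRESSIVE_ACTS)

# Operationalizing "stress" as the mean of self-report items rated 1-5
# (the items and scale are hypothetical).
def stress_score(item_ratings: list[int]) -> float:
    return sum(item_ratings) / len(item_ratings)

print(aggression_score(["hit", "share", "push", "laugh"]))  # -> 2
print(stress_score([4, 5, 3, 4]))                           # -> 4.0
```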
Overview of Research Designs
The Methods Toolbox
Psychological research employs various methods, each suited to different types of questions.
Descriptive Methods:
Naturalistic observation
Case study
Self-report measures and surveys
Correlational Designs: Examine relationships between variables.
Experimental Designs: Test cause and effect by manipulating variables.
Validity in Research
Internal and External Validity
Validity refers to the accuracy and applicability of research findings.
Internal Validity: The degree to which a study rules out confounds and alternative explanations, so that a cause-and-effect conclusion is trustworthy.
External Validity: The extent to which findings generalize to real-world settings.
Descriptive Methods
Naturalistic Observation
Observing behavior in its natural environment without intervention.
| Advantages | Disadvantages |
|---|---|
| High external validity (generalizable) | Lack of control |
| Rich, detailed information | Time- and resource-intensive |
| Sometimes the only possible option | Observer bias |
| | Cannot draw cause-and-effect conclusions |
Example:
Studying how often university students use laptops in class for non-class-related reasons.
Case Studies
In-depth analysis of a single person or setting, often used for rare or unusual phenomena.
Advantages: Rich, detailed descriptions; useful for rare cases.
Disadvantages: Low external validity; researcher bias.
Example: Case study of a person with a rare brain injury.
Self-Report/Survey Methods
Collecting data by asking participants to describe their own behaviors, attitudes, or perceptions.
Issues:
Careless or random responding
Misunderstanding questions
Response bias (e.g., social desirability)
Evaluating Measures: Reliability and Validity
Reliability
Reliability refers to the consistency of a measure.
Test-Retest Reliability: Consistency across time points.
Inter-Rater Reliability: Consistency across different raters.
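Test-retest reliability is commonly quantified as the correlation between scores from two administrations of the same measure. A minimal sketch with made-up scores (requires Python 3.10+ for statistics.correlation):
```python
from statistics import correlation  # Python 3.10+

# Made-up stress-scale scores for the same six people, two weeks apart.
time1 = [12, 18, 9, 22, 15, 20]
time2 = [13, 17, 10, 21, 14, 19]

# Test-retest reliability: the closer r is to +1.0, the more
# consistent the measure is across time points.
r = correlation(time1, time2)
print(f"test-retest r = {r:.2f}")
```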
Validity
Validity is the extent to which a measure assesses what it claims to measure.
High Validity: The measure accurately reflects the intended construct.
Example: A feline preference scale should include items that genuinely reflect liking cats.
Correlational/Non-Experimental Methods
Correlation Coefficient
Correlational studies examine the strength and direction of relationships between variables.
Correlation Coefficient (r): Ranges from -1.0 to +1.0; the sign indicates the direction of the relationship and the magnitude indicates its strength.
Scatter Plots: Visualize relationships between variables.
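As a worked sketch (the sleep and quiz-score data are invented for illustration), Pearson's r can be computed directly from its definition: the covariance of the two variables divided by the product of their standard deviations.
```python
import math

# Invented data: hours of sleep and quiz scores for eight students.
sleep = [5, 6, 6, 7, 7, 8, 8, 9]
score = [60, 65, 70, 68, 75, 80, 78, 85]

def pearson_r(x: list[float], y: list[float]) -> float:
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(sleep, score):.2f}")  # near +1.0: strong positive relationship
```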
Correlation vs. Causation
Correlation does not imply causation. Multiple explanations are possible for observed relationships.
A may cause B
B may cause A
A and B may be related due to a third variable
Third Variables/Confounds
A third variable is an outside factor that influences both variables, potentially creating a misleading association.
Example: Kids with dogs may be happier due to family environment, not dog ownership itself.
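A small simulation can show how a third variable manufactures a correlation. In this hypothetical model, a single hidden factor ("family warmth") drives both dog ownership and happiness, and neither one causes the other:
```python
import random

random.seed(0)

dog_owners, non_owners = [], []
for _ in range(1000):
    warmth = random.random()                   # hidden third variable
    has_dog = warmth + random.gauss(0, 0.2) > 0.5
    happiness = warmth + random.gauss(0, 0.2)
    (dog_owners if has_dog else non_owners).append(happiness)

print(f"mean happiness, dog:    {sum(dog_owners) / len(dog_owners):.2f}")
print(f"mean happiness, no dog: {sum(non_owners) / len(non_owners):.2f}")
# Dog owners look happier, even though dogs have no causal role here.
```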
| Advantages | Disadvantages |
|---|---|
| Can establish trends across large data sets | Cannot infer causal direction |
| Good for describing behavior | Third-variable problem (confounding variables) |
| Can predict future behavior | |
| Sometimes necessary due to ethical issues | |
Experimental Methods
Experimental Design
Experiments test causal relationships by manipulating one variable and measuring its effect on another.
Independent Variable (IV): Manipulated by the researcher.
Dependent Variable (DV): Measured outcome affected by the IV.
Random Assignment: Participants are randomly assigned to experimental or control groups.
Operationalization: IV should have at least two levels (e.g., treatment vs. control).
Example:
Does listening to music improve test performance? IV: Music exposure (yes/no); DV: Test scores; Control: No music group.
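A minimal sketch of random assignment for this example (participant IDs and scores are simulated placeholders, not real findings):
```python
import random

random.seed(1)
participants = [f"p{i}" for i in range(1, 41)]  # 40 hypothetical participants

# Random assignment: shuffle, then split into the two levels of the IV.
random.shuffle(participants)
music_group, control_group = participants[:20], participants[20:]

# Simulated test scores (DV); random placeholders, not real results.
scores = {p: random.gauss(70, 10) for p in participants}

def group_mean(group: list[str]) -> float:
    return sum(scores[p] for p in group) / len(group)

print(f"music:   {group_mean(music_group):.1f}")
print(f"control: {group_mean(control_group):.1f}")
```
Because assignment is random, pre-existing differences between participants (ability, motivation, sleep) are spread evenly across the two groups on average, so a reliable difference in the DV can be attributed to the IV.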
Confounding Variables
Confounds are variables other than the IV that may affect the DV, threatening internal validity.
Example: In a mood-induction study using music, factors other than the music itself (e.g., time of day) could also influence the measured outcome, such as generosity.
Experimental Bias and Expectancy Effects
Biases can arise from researchers' or participants' expectations.
Expectancy Effect: Changes in participant behavior due to researcher expectations.
Demand Characteristics: Participants guess the study's purpose and alter their behavior.
Solution: Use double-blind designs and conceal study purpose.
Ethical Guidelines in Psychological Research
Key Principles
Informed Consent: Participants must be informed about the study and consent to participate.
Protection from Harm: Researchers must minimize risks and discomfort.
Deception and Debriefing: Deception is allowed only when necessary and must be followed by thorough debriefing.
Special Populations: Extra protections for minors and vulnerable groups (e.g., assent required).
Historical Example: Tuskegee Syphilis Study
Ethical guidelines have evolved in response to past abuses. The Tuskegee Syphilis Study is a notorious example in which participants were not informed of their diagnosis and were denied effective treatment, leading to significant harm.
Lesson: Ethical standards are essential to protect participants and maintain public trust in research.
Summary Table: Research Methods Comparison
| Method | Main Purpose | Advantages | Disadvantages |
|---|---|---|---|
| Naturalistic Observation | Describing behavior in real-world settings | High external validity, rich data | Lack of control, observer bias |
| Case Study | In-depth analysis of individuals/settings | Rich detail, useful for rare cases | Low generalizability, researcher bias |
| Self-Report/Survey | Collecting subjective data | Efficient, scalable | Response bias, social desirability |
| Correlational Design | Examining relationships | Trends, prediction | No causation, confounds |
| Experimental Design | Testing cause and effect | Causal inference, control | May lack external validity, ethical limits |