Research Methods in Psychology: Key Concepts and Applications

Study Guide - Smart Notes

Research Questions and Participants

Formulating Research Questions

Developing a research question is the foundational step in psychological research. Good questions often arise from real-world observations, common sense assumptions, and the need to solve practical problems.

  • Common sense assumptions: Everyday beliefs that can be tested scientifically.

  • Observations in the real world: Noticing patterns or phenomena that require explanation.

  • Solving real-world problems: Addressing issues that impact individuals or society.

  • Understanding how something works: Exploring mechanisms behind behaviors or mental processes.

Population vs. Sample

Researchers must distinguish between the population of interest and the sample used in a study.

  • Population: The entire group of people of interest (e.g., all PSYCH1010 students at York University).

  • Sample: A smaller group selected from the population who actually participate in the study (e.g., 20 students from the class).

Random Selection and Generalizability

Random selection is crucial for ensuring that study results can be generalized to the broader population.

  • Every person in the population has an equal chance of being selected.

  • Helps ensure the sample accurately represents the population.

  • Important for studies seeking generalizability.

  • Example: Putting names in a hat and picking 20 participants (fair selection).
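The names-in-a-hat procedure can be sketched with Python's standard library. This is a minimal illustration, not a study protocol; the roster size and names are hypothetical.

```python
import random

# Hypothetical population: a full class roster of 500 students.
population = [f"Student {i}" for i in range(1, 501)]

# Random selection: every student has an equal chance of being chosen,
# and sampling is without replacement (no one is drawn twice).
sample = random.sample(population, k=20)

print(len(sample))       # 20 participants
print(len(set(sample)))  # 20 — no repeats
```

Because `random.sample` gives every roster entry the same probability of selection, conclusions drawn from the 20 participants are more likely to generalize to the full class.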

Operational Definitions

Operational definitions translate abstract concepts into specific, testable procedures.

  • Variable: Any factor that can change or be measured.

  • Operational definition: Specifies how a variable is measured or manipulated in a study.

  • Example concepts: Studying aggression in children, measuring stress levels in university students.
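One way to see what an operational definition adds is to turn an abstract concept into a concrete scoring rule. The rule below is a hypothetical operationalization of "aggression in children" (aggressive acts per hour of observation), invented for illustration only.

```python
def aggression_score(hits, pushes, minutes_observed):
    """Hypothetical operational definition of aggression:
    observed aggressive acts (hits + pushes) per hour of observation."""
    hours = minutes_observed / 60
    return (hits + pushes) / hours

# A 30-minute observation with 3 hits and 2 pushes:
print(aggression_score(3, 2, 30))  # 10.0 aggressive acts per hour
```

The abstract variable ("aggression") is now something two researchers can measure the same way, which is exactly what an operational definition is for.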

The Methods Toolbox

Psychological research employs a variety of methods to collect and analyze data. These include descriptive, correlational, and experimental approaches.

  • Descriptive: Observing and describing behavior.

  • Correlational: Examining relationships between variables.

  • Experimental: Manipulating variables to determine cause and effect.

Internal and External Validity

Validity refers to the accuracy and generalizability of a study's findings.

  • Internal validity: How well a study controls confounding variables, allowing confident cause-and-effect conclusions.

  • External validity: The extent to which results generalize to the real world.

Observational and Case Study Methods

Naturalistic Observation

Researchers observe behavior in its natural environment without intervention.

  • High external validity (generalizable).

  • Rich, detailed information.

  • Sometimes the only possible option.

  • Advantages: High external validity; rich, detailed information.

  • Disadvantages: Lack of control; time-consuming; observer bias; cannot draw cause-and-effect conclusions.

Case Studies

In-depth analysis of a single person or setting, often used for rare or unusual phenomena.

  • Provides qualitative data.

  • Useful for studying rare conditions (e.g., brain injuries, rare diagnoses).

  • Advantages: Rich descriptions; sometimes the only possible method.

  • Disadvantages: Low external validity; researcher bias.

Surveys and Questionnaires

Issues in Survey Research

Surveys rely on self-report, which can be affected by careless responding and social desirability bias.

  • Careless/random responding.

  • Misunderstanding questions.

  • Social desirability: Tendency to respond in a way that presents oneself positively.

Short Dark Triad (SD3)

An example of a psychological scale measuring socially aversive traits.

  • Items assess traits like narcissism, Machiavellianism, and psychopathy.

  • Example item: "I know I am special because everyone keeps telling me so."

Evaluating Measures: Reliability and Validity

Reliability

Reliability refers to the consistency of a measurement.

  • Test-retest reliability: Consistency across time points.

  • Inter-rater reliability: Agreement between different raters.

  • Cohen's kappa: Statistic for inter-rater agreement that corrects for agreement expected by chance.
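Cohen's kappa can be computed directly from its definition: observed agreement minus chance agreement, divided by the maximum possible improvement over chance. The sketch below uses hypothetical codings of ten observed behaviors by two raters.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    # Observed agreement: proportion of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's label proportions, summed.
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 behaviors ("A" = aggressive, "N" = not aggressive).
r1 = ["A", "A", "N", "N", "A", "N", "A", "N", "N", "A"]
r2 = ["A", "N", "N", "N", "A", "N", "A", "N", "A", "A"]
print(round(cohens_kappa(r1, r2), 2))  # 0.6
```

Here the raters agree on 8 of 10 items (0.8 observed), but 0.5 agreement was expected by chance alone, so kappa is (0.8 - 0.5) / (1 - 0.5) = 0.6, a more honest index of inter-rater reliability than raw agreement.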

Validity

Validity refers to whether a measure assesses what it claims to measure.

  • A test must be reliable to be valid, but a reliable test is not always valid.

Validity Example

Feline preference scale: Measures how much a person likes cats.

  • Scale items: "I watch videos of cats often", "I enjoy being around cats", etc.

  • Response: 1 (strongly disagree) to 7 (strongly agree).
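Scoring a Likert scale like this one usually means averaging the item responses into one score. A minimal sketch, assuming simple averaging with no reverse-scored items; the respondent's answers are made up.

```python
def score_scale(responses):
    """Average Likert responses (1-7) into a single scale score."""
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("Responses must be on the 1-7 scale")
    return sum(responses) / len(responses)

# Hypothetical respondent answering four cat-liking items.
answers = [7, 6, 7, 5]
print(score_scale(answers))  # 6.25 — closer to 7 means stronger liking
```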

Correlation and Causation

Correlation: Non-experimental Method

Correlation examines the strength and direction of relationships between variables.

  • Variables are observed, not manipulated.

  • Correlation coefficient ranges from -1.0 to +1.0.

  • Values closer to -1.0 or +1.0 indicate stronger relationships; the sign indicates direction.
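The correlation coefficient described above (Pearson's r) can be computed from its formula: the covariance of the two variables divided by the product of their standard deviations. The study-time data below is hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: strength and direction, -1.0 to +1.0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score for 5 students.
hours = [1, 2, 3, 4, 5]
score = [55, 60, 66, 70, 79]
print(round(pearson_r(hours, score), 2))  # close to +1.0: strong positive
```

Plotting these pairs would give an upward-sloping scatter plot; a coefficient near -1.0 would slope downward, and one near 0 would show no pattern.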

Scatter Plots

Scatter plots visually represent the relationship between two variables: each point is one participant's pair of scores.

Correlation vs. Causation

Correlation does not imply causation. Many possible explanations exist for why two variables are related.

  • A → B: A causes B.

  • B → A: B causes A (reverse causation).

  • A ←→ B: A and B influence each other.

  • C → A and C → B: A third variable causes both (the third-variable problem).

Determining causation requires experimental manipulation.

Correlational and Experimental Designs

Correlational Designs

  • Can establish trends across large amounts of data.

  • Good for describing behavior.

  • Cannot infer causal direction.

  • Third-variable problem (confounding variables).

Experimental Method

Experiments manipulate variables to determine cause and effect.

  • Independent variable (IV): Manipulated by researcher.

  • Dependent variable (DV): Measured by researcher.

  • Random assignment to experimental or control group.
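Random assignment can be sketched in the same spirit as the earlier selection example: chance, not the researcher, decides who lands in each condition. The group names and participant labels here are hypothetical.

```python
import random

def random_assignment(participants):
    """Randomly split participants into experimental and control groups."""
    shuffled = participants[:]         # copy so the original list is untouched
    random.shuffle(shuffled)           # chance decides each person's placement
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Hypothetical sample of 20 participants.
sample = [f"P{i}" for i in range(1, 21)]
groups = random_assignment(sample)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

Because pre-existing differences are scattered randomly across the two groups, any difference on the DV can be attributed to the IV, which is what gives experiments their internal validity.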

Experimental Validity

  • Internal validity: Certainty that IV caused changes in DV.

  • Confound: Variable that could provide an alternative explanation.

Pitfalls in Experiments

Experimental Bias

  • Expectancy effect: Changes in participant behavior due to researcher expectations.

  • Double-blind designs: Neither participants nor researchers know group assignments, preventing expectancy effects.

Demand Characteristics

  • Participants guess the purpose of the study and alter their behavior.

  • Disguising the study's purpose can reduce this.

Ethical Guidelines for Human Research

Informed Consent

  • Participants must be informed about the study.

  • Protection from harm and discomfort.

  • Deception and debriefing must be handled ethically.

Special Populations

  • Minors require parental consent; the minor also gives assent (agreement to participate).

  • Special care for vulnerable groups.

Deception and Debriefing

  • If deception is used, participants should be informed as soon as possible.

  • Participants should not be deceived about procedures that may cause harm.

  • After the study, participants should be fully debriefed.

Historical Example: Tuskegee Syphilis Study

The Tuskegee Syphilis Study is a notorious example of unethical research.

  • Participants were not informed about the true nature of the study.

  • Treatment was withheld even after penicillin was discovered.

  • Many participants unknowingly spread the disease and died.
