
Reading and Evaluating Scientific Research in Psychology

Study Guide - Smart Notes


Introduction to Scientific Research in Psychology

This chapter introduces the foundational principles and methods used to conduct, evaluate, and interpret scientific research in psychology. Understanding these concepts is essential for critically assessing psychological studies and applying evidence-based practices.

Learning Objectives

  • Define and apply the scientific method to psychology.

  • Compare and contrast descriptive, experimental, and correlational research.

  • Understand the five characteristics of quality scientific research.

  • Apply ethical principles to research examples.

  • Define basic elements of a statistical investigation.

Characteristics of Quality Scientific Research

High-quality scientific research in psychology adheres to several key criteria to ensure validity, reliability, and ethical integrity.

  • Objective, Valid, and Reliable Measurements: Research must be based on measurements that are free from personal bias and consistently reflect what they intend to measure.

  • Objectivity: Facts should be observable and measurable, independent of the researcher's personal beliefs or expectations.

  • Generalizability: Findings should apply to broader populations beyond the specific sample studied.

  • Bias Reduction: Employing techniques that minimize both researcher and participant bias is essential.

  • Transparency and Replicability: Research should be made public for peer review and be replicable by other researchers.

Scientific Measurement in Psychology

Objectivity

Objective measurements are the foundation of scientific methodology. They ensure consistency across different instruments and observers.

  • Variables: In psychology, variables are objects, concepts, or behaviors that can be measured (e.g., stress, memory, reaction time).

  • Measurement Tools: These include behavioral observations, neuroscience methods (such as fMRI), biological samples, and self-report questionnaires.

Operational Definitions

Operational definitions specify the exact procedures used to measure a variable, ensuring clarity and reproducibility.

  • Example: The variable intoxication can be operationally defined by blood alcohol level (physiological), number of missteps on a walking test (behavioral), or scores on a self-report scale (subjective).

Reliability and Validity

Reliable and valid measurements are crucial for trustworthy research findings.

  • Reliability: The consistency of a measure across time, observers, and instruments.

    • Inter-rater reliability: Agreement among multiple observers.

    • Test-retest reliability: Stability of scores over time.

    • Alternate-forms reliability: Consistency across different versions of a test.

  • Validity: The extent to which a measure accurately reflects the concept it is intended to assess.

  • Note: A measure can be reliable without being valid (e.g., shoe size is a reliable but invalid measure of intelligence).

Generalizability of Results

Generalizability refers to the extent to which research findings can be applied to settings, people, or situations beyond the original study.

  • Sample vs. Population: Researchers study a sample to make inferences about a larger population.

  • Random Sample: Every individual in the population has an equal chance of being selected, increasing generalizability.

  • Convenience Sample: Participants are selected based on availability, which may limit generalizability.

  • Ecological Validity: The degree to which study results can be applied to real-world settings.
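The sampling distinction above can be made concrete with a short Python sketch. The population of student IDs is hypothetical, chosen only for illustration: a random sample gives every member an equal chance of selection, while a convenience sample takes whoever is easiest to reach.

```python
import random

# Hypothetical population of 1,000 student IDs (illustrative only).
population = list(range(1, 1001))

# Random sample: every member has an equal chance of being selected,
# which supports generalizing from the sample to the population.
random_sample = random.sample(population, 50)

# Convenience sample: e.g., the first 50 students who show up --
# systematically excludes most of the population.
convenience_sample = population[:50]

print(len(random_sample), len(convenience_sample))
```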

Sources of Bias in Psychological Research

Bias can distort research findings and reduce their accuracy. It may originate from researchers or participants.

  • Researcher Bias: Expectations or preferences of the researcher influence the outcome.

  • Subject Bias: Participants alter their behavior because they know they are being observed (Hawthorne Effect) or wish to present themselves favorably (Social Desirability).

  • Demand Characteristics: Unintentional cues from researchers or the environment that influence participant behavior.

  • Pygmalion Effect: Researcher expectations can influence outcomes, as seen in studies where teachers' expectations affected student performance.

  • Placebo Effect: Participants' expectations alone can produce changes in outcomes.

Techniques to Reduce Bias

  • Anonymity: Responses are not linked to participant identities.

  • Confidentiality: Only researchers have access to participant data.

  • Single-Blind Study: Participants do not know which group they are in.

  • Double-Blind Study: Neither participants nor researchers know group assignments, reducing both subject and researcher bias.

Publishing and Replicating Research

Publishing results in academic journals allows for peer review and transparency. Replication—repeating studies to see if results hold—strengthens scientific confidence. Psychology faces a replication crisis, emphasizing the need for transparency and repeated testing.

Characteristics of Poor Research

  • Unfalsifiable Hypotheses: Claims that cannot be tested or disproven.

  • Anecdotal Evidence: Using personal stories as proof.

  • Biased Data Selection: Only presenting data that supports a claim.

  • Appeals to Authority: Accepting claims solely because an expert said so.

  • Appeals to Common Sense: Relying on intuition instead of evidence.

Scientific Research Designs

Research design is the plan for testing hypotheses and answering research questions. The main types are descriptive, correlational, and experimental designs.

Descriptive Research

Descriptive research observes and records behavior without manipulating variables. It answers "what" is happening, not "how" or "why."

  • Case Studies: In-depth reports on individuals with unique traits (e.g., Phineas Gage).

  • Naturalistic Observation: Observing behavior in real-world settings.

  • Self-Report: Participants describe their own thoughts, feelings, or behaviors (e.g., surveys, interviews).

Limitations: Results may not generalize to others; observer bias can influence findings.

Comparison of Descriptive Research Methods

Method | Description | Strengths | Limitations
Case Study | In-depth report on one individual | Rich detail; useful for rare cases | May not generalize
Naturalistic Observation | Observing behavior in real-world settings | High ecological validity | Little control; observer bias
Self-Report (Surveys) | Participants report own thoughts/feelings | Efficient for large groups | Subject to bias

Correlational Research

Correlational research measures the relationship between two or more variables without manipulating them. It identifies associations but cannot establish causation.

  • Positive Correlation: Both variables increase together (e.g., education and income).

  • Negative Correlation: One variable increases as the other decreases (e.g., stress and sleep quality).

  • Correlation Coefficient: Ranges from -1.0 to +1.0; values closer to ±1.0 indicate stronger relationships.

  • Scatterplots: Visual representations of correlations; tightly clustered dots indicate strong correlations.

Note: Correlation does not imply causation. Some correlations are illusory or based on stereotypes.
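The correlation coefficient can be computed directly from its definition. The sketch below uses hypothetical sleep and stress data (not from the chapter) to illustrate a strong negative correlation:

```python
import math
import statistics

# Hypothetical data: nightly sleep (hours) and self-reported stress (1-10).
sleep = [8, 7, 6, 5, 7, 4, 6, 8]
stress = [2, 3, 5, 7, 4, 8, 5, 3]

mean_x, mean_y = statistics.mean(sleep), statistics.mean(stress)

# Pearson's r: the sum of cross-products of deviations, divided by the
# square root of the product of the sums of squared deviations.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sleep, stress))
den = math.sqrt(sum((x - mean_x) ** 2 for x in sleep)
                * sum((y - mean_y) ** 2 for y in stress))
r = num / den

print(round(r, 2))  # close to -1.0: a strong negative correlation
```

On a scatterplot, these points would cluster tightly along a downward-sloping line, which is what a coefficient near -1.0 means visually.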

Experimental Research

Experimental designs allow researchers to infer cause-and-effect relationships by manipulating variables and using random assignment.

  • Independent Variable (IV): The variable manipulated by the researcher.

  • Dependent Variable (DV): The outcome measured.

  • Experimental Group: Receives the treatment or intervention.

  • Control Group: Does not receive the treatment; serves as a baseline.

  • Between-Subjects Design: Different participants in each group.

  • Within-Subjects Design: Same participants experience all conditions.

  • Random Assignment: Ensures groups are comparable and controls for confounding variables.
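Random assignment in a between-subjects design is straightforward to sketch in code. Assuming a hypothetical pool of 20 participants, shuffling and then splitting spreads pre-existing differences evenly across conditions:

```python
import random

# Hypothetical participant pool (IDs are placeholders).
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle the pool, then split it in half, so any
# confounding traits are distributed evenly across the two groups.
random.shuffle(participants)
experimental_group = participants[:10]  # receives the treatment
control_group = participants[10:]       # baseline, no treatment

print(len(experimental_group), len(control_group))
```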

Quasi-Experimental Research

Quasi-experimental designs compare groups based on pre-existing traits (e.g., gender) without random assignment. They can show relationships but not causation.

Converging Operations

Using multiple research methods and measures increases confidence in findings. Consistent results across methods strengthen theories.

Ethics in Psychological Research

Ethical principles protect the rights and welfare of research participants. Research Ethics Boards (REBs) review studies to ensure ethical standards are met.

  • Informed Consent: Participants must be informed about the study's purpose, procedures, and risks, and must agree voluntarily.

  • Right to Withdraw: Participants can decline or withdraw at any time.

  • Anonymity and Confidentiality: Data should not be linked to individuals; if not possible, confidentiality must be maintained.

  • Debriefing: Participants are informed about the study's true purpose after participation, especially if deception was used.

  • Animal Research: Animals must be housed and cared for humanely; harm must be justified and minimized.

Data Analysis and Descriptive Statistics

After data collection, researchers use statistics to summarize and interpret results.

  • Descriptive Statistics: Organize and summarize data to reveal trends.

  • Frequency: The number of observations in each category or range.

  • Central Tendency: The center of a distribution, captured by the mean (average), median (middle value), and mode (most frequent value).

  • Variability: The degree to which scores are dispersed; measured by standard deviation.

Central Tendency and Skewed Distributions

  • Symmetrical Distributions: Mean, median, and mode are equal or similar.

  • Skewed Distributions: One tail is longer; mean may not represent the typical value.

Standard Deviation

Standard deviation quantifies variability around the mean. Higher standard deviation indicates more spread in the data.
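These descriptive statistics, and the way skew pulls the mean away from the median, can be illustrated with Python's standard statistics module on a hypothetical set of reaction times:

```python
import statistics

# Hypothetical reaction times (ms), with one slow outlier creating skew.
times = [240, 250, 250, 260, 270, 280, 520]

mean = statistics.mean(times)      # pulled upward by the 520 ms outlier
median = statistics.median(times)  # middle value; robust to the outlier
mode = statistics.mode(times)      # most frequent value
sd = statistics.stdev(times)       # sample standard deviation (spread)

print(mean, median, mode, round(sd, 1))
```

Because the distribution is skewed by the outlier, the mean exceeds the median here, which is why the mean may not represent the typical value in skewed data.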

Hypothesis Testing and Statistical Significance

Hypothesis testing evaluates whether observed differences are likely due to chance.

  • Null Hypothesis (H0): Assumes no effect or difference; any observed difference is due to chance.

  • Experimental Hypothesis (H1): Assumes the manipulation caused a difference.

  • p-value: The probability of obtaining results at least this extreme if the null hypothesis is true. The standard cutoff is p < .05; a more stringent criterion is p < .01.

  • Effect Size: Indicates the magnitude of the difference, not just statistical significance.

Concerns: Multiple comparisons can increase false positives; large samples can make trivial effects significant. Both p-values and effect sizes should be reported.
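The logic of testing against a null hypothesis can be sketched with a simple permutation test (hypothetical scores, not a method described in the chapter): if group labels are arbitrary under H0, shuffling them should only rarely reproduce a difference as large as the one observed.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical memory scores for treatment and control groups.
treatment = [14, 15, 16, 17, 18, 19]
control = [10, 11, 12, 13, 14, 15]

observed = statistics.mean(treatment) - statistics.mean(control)

# Under H0 the group labels are arbitrary: shuffle them many times and
# count how often chance alone yields a difference at least this large.
pooled = treatment + control
n = len(treatment)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(p_value)  # small p: the observed difference is unlikely under H0
```

A small p-value leads researchers to reject H0 in favor of H1, though as noted above, the effect size should be reported alongside it.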

Psychology as a Foundation for Nursing Practice

Understanding research methods and statistics equips nurses to make informed, ethical, and evidence-based clinical decisions. Psychological principles guide compassionate, science-based care and critical thinking in healthcare settings.
