Reading and Evaluating Scientific Research
Study Guide
Principles of Scientific Research
Objectivity in Scientific Research
Scientific research in psychology strives for objectivity, meaning that facts about the world can be observed and tested independently of the observer. However, subjectivity can influence research when prior beliefs, expectations, and experiences shape interpretation.
Objectivity: Assumes facts are independent of the observer.
Subjectivity: Knowledge is influenced by personal experiences and biases.
Characteristics of Quality Scientific Research
High-quality scientific research is defined by several key characteristics:
Based on measurements that are objective, valid, and reliable
Can be generalized
Uses techniques that reduce bias
Is made public
Can be replicated
Scientific Measurement: Objectivity
Objective measurement ensures that the value of a quality or behavior is consistent regardless of who is measuring or the tool used. However, some margin of error is inevitable.
Example: Your weight may differ slightly depending on the scale used (bathroom vs. gym), illustrating measurement error.
Why Do We Need Research?
Facilitates communication
Allows for public scrutiny and replication
What is a Research Question?
A research question guides the investigation and is based on:
Common sense assumptions
Observation in the real world
Solving real-world problems
Understanding how something works
Research Participants and Sampling
Who Are Research Participants?
Sample: A selected subgroup of the population that is actually studied
Population: The entire group of people of interest
Random sample: Technique in which every individual has an equal chance of being included
Convenience samples: Those who are readily available, typically psychology students
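The sampling ideas above can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical population of 10,000 numbered individuals; the sample sizes are arbitrary.

```python
import random

# Hypothetical population of 10,000 numbered individuals
population = list(range(10_000))

# Random sample: every individual has an equal chance of being included
random_sample = random.sample(population, k=100)

# Convenience sample: whoever is easiest to reach
# (here, simply the first 100 individuals on the list)
convenience_sample = population[:100]

print(len(random_sample), len(convenience_sample))
```

The convenience sample is systematically limited to the start of the list, while the random sample can include anyone, which is what makes it representative.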
Random Selection and Generalizability
Random selection is crucial for generalizability, which refers to the degree to which results can be applied to other populations or settings.
Ensures sample accurately represents population
Especially important for studies whose conclusions must generalize; note that random selection (who gets sampled) differs from random assignment (which condition each participant receives in an experiment)
Example: Drawing a large random sample makes the results more accurate and reliable reflections of the population
Variables and Measurement
Variable vs. Operational Definition
Translating research questions into specific, testable procedures is essential.
Variable: The object, concept, or event being controlled, manipulated, or measured
Operational definition: Statements that describe the operations and specific measures used to observe variables
Example: Measuring intoxication (variable) via blood alcohol level (operational definition)
Internal and External Validity
Internal validity: How the study is conducted (control of variables)
External validity: How applicable the findings are in real-world settings
Methods Used in Studies
Naturalistic observation: Watching behavior in natural settings
Self-report measures and surveys: Collecting data by asking participants to describe their own behaviors, attitudes, views, and perceptions
Experimental designs: Manipulating variables to determine cause and effect
Evaluating Measures: Validity and Reliability
Validity
Validity: The degree to which an instrument or procedure actually measures what it claims to measure
High internal validity: High certainty that IV caused changes to DV
Confound: A variable not of interest that varies along with the independent variable
Reliability
Reliability: When a tool provides consistent and stable answers across multiple observations or points in time
Test-retest reliability: Consistency of measure across test sessions
Inter-rater reliability: Consistency across different raters
Alternate-forms reliability: Consistency across different versions of the same test
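As a quick illustration of inter-rater reliability, a simple percent-agreement index can be computed. The two raters' behavior codings below are hypothetical, and percent agreement is only a rough index (more formal statistics such as Cohen's kappa correct for chance agreement).

```python
# Hypothetical behavior codings from two independent raters
rater_a = ["aggressive", "calm", "calm", "aggressive", "calm"]
rater_b = ["aggressive", "calm", "aggressive", "aggressive", "calm"]

# Percent agreement: fraction of observations both raters coded the same way
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)  # 0.8 -> the raters agreed on 4 of 5 observations
```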
Ecological Validity
Ecological validity: The degree to which results of a laboratory study can be applied to or repeated in the natural environment
Correlation and Experimental Methods
Correlation/Non-Experimental Method
Examines the strength of relationship between variables without manipulating them.
Measures what is happening, does not manipulate the variable
Correlation coefficient ranges from -1.0 to 1.0 (positive, negative, or zero)
A larger absolute value (closer to -1.0 or +1.0) means a stronger relationship
Scatter plot illustrates the relationship
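The correlation coefficient described above can be computed directly. The sketch below implements Pearson's r in plain Python, using hypothetical traffic-and-happiness data that mirror the negative relationship mentioned in these notes.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient (ranges from -1.0 to +1.0)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: happiness drops as hours in traffic rise
hours_in_traffic = [1, 2, 3, 4, 5]
happiness = [9, 7, 6, 4, 2]

print(round(pearson_r(hours_in_traffic, happiness), 2))  # -0.99, a strong negative correlation
```

A value near -1.0 or +1.0 indicates a strong linear relationship; a value near 0 indicates little or no linear relationship.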
Examples of Correlation
The relationship between texting speed & relationship drama
Video games & aggression
Time spent in traffic & happiness (negative)
Number of tennis balls & dog (curvilinear relationship)
Music coverage & Ivy immortality (nonlinear)
Correlation vs. Causation
Correlation does not imply causation
Third variable problem: A third variable may influence both variables, creating a false or misleading association
Example: Kids with dogs may be happier, but it could be due to spending more time outside
Advantages and Disadvantages of Correlational Designs
Advantages: Can establish relationships, good for describing behavior, can predict future behavior
Disadvantages: Cannot infer causal direction, third-variable problem
Experimental Method
Research design that focuses on determining causal influences between variables.
One variable is manipulated, and the other is measured or observed
Random assignment of participants to experimental or control group
Independent variable: What is being manipulated
Dependent variable: What is being measured
Control condition: Basis for comparison
Example: Number of cups of coffee consumed (IV) and note-taking speed (DV)
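Random assignment, as described above, can be sketched in a few lines of Python. The participant IDs and the coffee scenario (borrowed from the example in these notes) are hypothetical.

```python
import random

# Hypothetical participants, randomly assigned to one of two conditions
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
random.shuffle(participants)

half = len(participants) // 2
experimental = participants[:half]  # e.g., drinks coffee (IV is manipulated)
control = participants[half:]       # no coffee; the basis for comparison

print(experimental, control)
```

Because assignment is random, pre-existing differences between people are spread evenly across the two groups, which is what lets the design support causal conclusions.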
Experimental Research: The Independent Variable
Must be operationalized and have at least two levels/conditions
Ex: Treatment vs. placebo, High IV vs. Low IV
Experimental Bias
Expectancy effect: Changes in participants' behavior caused by expectations of researcher/observer
Double-blind designs: Neither participants nor researchers know who is in which condition, preventing expectancy effects
Demand characteristics/participant bias: Cues in research setting that lead participants to guess the purpose of the study and adjust their behavior
Researcher bias: The researcher's own expectations influence how participants are treated or how data are interpreted, rather than the experimental procedure alone
Act of observation: When participants realize they are being observed, they may produce biased results (Hawthorne effect)
Social desirability: Participants respond with what they think is expected
Placebo effect: Measurable and experienced improvement in health or behavior that cannot be attributed to a medication or treatment
Ethical Guidelines for Human Research
Must have informed consent
Protect from harm and discomfort
Deception and debriefing: If deception is used, participants must be debriefed (told what the study was actually about) after it ends
When is it Okay to Not Fully Inform?
Research is purely observational
Special populations
Required when knowing the purpose would change participants' behavior
In such cases, participants are either not told the purpose of the study or are misled with a false purpose
The Tuskegee Syphilis Study
Ethical guidelines did not always exist
In 1932, Black men were recruited into an experiment studying the course of syphilis
Those who tested positive were not informed that they had the disease
Although no cure existed when the study began, a cure (penicillin) became available in 1947, yet participants were still left untreated
Summary Table: Types of Validity and Reliability
| Type | Definition | Example |
|---|---|---|
| Internal Validity | Degree to which the study controls for confounding variables | Random assignment in experiments |
| External Validity | Degree to which findings generalize to real-world settings | Naturalistic observation |
| Reliability | Consistency and stability of measurement | Test-retest reliability |
| Ecological Validity | Applicability of lab results to natural environments | Field studies |
Key Equations
Correlation coefficient (r): ranges from -1.0 to +1.0