Introduction to Psychological Research Methods: Key Concepts and Approaches
Study Guide - Smart Notes
Tailored notes based on your materials, expanded with key definitions, examples, and context.
Why Do We Need Research?
Facilitated Communication
Research in psychology is essential for understanding human behavior, testing assumptions, and developing evidence-based practices. It helps distinguish between anecdotal beliefs and scientifically validated knowledge.
Facilitated communication is an example of a practice that was widely adopted before controlled research showed that the facilitators, not the clients, were unknowingly authoring the messages.
Research questions often arise from common assumptions or observations, such as whether laptops are distracting in classrooms.
What is Your Research Question?
Formulating Research Questions
Research begins with identifying a clear, testable question. This involves specifying what you want to study and why it matters.
Assumptions: What do you believe or expect? (e.g., "Laptops can be distracting in classrooms.")
Observations: What have you noticed? (e.g., students using laptops for non-class-related activities.)
Understanding: What do you want to learn? (e.g., how something works or affects behavior.)
Who are Your Research Participants?
Populations vs. Samples
Defining your participants is crucial for research validity and generalizability.
Population: The entire group of people of interest (e.g., all first-year students at a university).
Sample: A smaller group selected from the population to participate in the study (e.g., a group of 20 students).
Random Selection
Random selection is a key ingredient for generalizability, ensuring every person in the population has an equal chance of being chosen.
Helps to create a sample that accurately represents the population.
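A minimal sketch of random selection, assuming a hypothetical population of 500 student ID numbers (the numbers are illustrative, not from the source material):

```python
import random

# Hypothetical population: student ID numbers 1..500 (illustrative only)
population = list(range(1, 501))

# Random selection: every member has an equal chance of being chosen.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=20)  # sampling without replacement

print(len(sample))       # 20 participants
print(len(set(sample)))  # 20 — no one is selected twice
```

Because `random.sample` draws without replacement, each population member can appear at most once, which matches how participants are recruited into a single sample.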
Variables and Operational Definitions
Variables are characteristics or conditions that can change or be measured in research. Operational definitions specify how variables are measured or manipulated.
Variable: Any factor that can vary (e.g., aggression, stress levels).
Operational Definition: A specific, testable procedure for measuring a variable (e.g., using biometric data to measure stress).
The Method Toolbox
Psychological research uses various methods, each with strengths and limitations. These methods can be descriptive, correlational, or experimental.
Naturalistic Observation: Observing behavior in its natural context.
Case Study: In-depth analysis of a single person or setting.
Self-Report Measures/Surveys: Collecting data by asking participants to describe their own behaviors or attitudes.
Correlational Research: Examining relationships between variables.
Experimental Research: Manipulating variables to determine cause and effect.
Internal and External Validity
Internal Validity: The degree to which a study supports cause-and-effect conclusions (control of variables, elimination of confounds).
External Validity: How applicable the findings are to the real world (generalizability).
Naturalistic Observation
Naturalistic observation involves watching subjects in their natural environment without interference.
Advantages:
High external validity
Rich, detailed information
Sometimes the only feasible option
Disadvantages:
Lack of control over variables
Time- and resource-intensive
Observer bias
Cannot draw cause-and-effect conclusions
Example: Observing how often university students use laptops in class for non-class-related reasons.
Case Studies
Case studies provide in-depth analysis of a single person or setting, often used for rare or unusual phenomena.
Advantages:
Rich, detailed descriptions and data
Disadvantages:
Low external validity (findings may not generalize)
Example: Studying a patient with a rare neurological disorder.
Self-Report/Survey Methods
Self-report methods involve collecting data by asking participants to describe their own behaviors, attitudes, or experiences.
Can be used in both experimental and correlational research.
Assumptions: Participants answer honestly and provide meaningful responses.
Limitations:
Careless or random responding
Misunderstanding questions
Response bias (e.g., social desirability bias: tendency to present oneself in a positive light)
Example: Feline Preference Scale – a scale to determine how much a person likes cats.
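A sketch of how responses to a self-report scale might be scored, assuming a hypothetical 5-item version of the Feline Preference Scale with one reverse-keyed (negatively worded) item; the items, responses, and keying below are invented for illustration:

```python
# Hypothetical responses on a 1-5 Likert scale
# (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 2, 5, 3]

# Item 3 is assumed to be negatively worded (e.g., "I dislike cats"),
# so it must be reverse-coded before summing.
reverse_keyed = [False, False, True, False, False]

# Reverse-coding on a 1-5 scale maps 1<->5 and 2<->4 via (6 - response)
scored = [6 - r if rev else r for r, rev in zip(responses, reverse_keyed)]

total = sum(scored)
print(total)  # higher total = stronger feline preference
```

Reverse-keyed items are one common check against careless or random responding, since inattentive participants tend to answer them inconsistently.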
Reliability and Validity
Reliability
Reliability refers to the consistency of a measure.
Test-Retest Reliability: Consistency of results over time (correlation between scores at Time 1 and Time 2).
Inter-Rater Reliability: Agreement between different raters (e.g., Cohen's Kappa coefficient).
Example: Feline Preference behaviors – consistency in how different observers rate cat-related behaviors.
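Test-retest reliability can be sketched as the Pearson correlation between scores on the same measure at two time points; the scores below are invented for illustration:

```python
import math

# Hypothetical scores for six participants at Time 1 and Time 2
time1 = [12, 18, 9, 15, 20, 11]
time2 = [13, 17, 10, 14, 21, 12]

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of the
    standard deviations (computed here from sums of squared deviations)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(time1, time2)
print(round(r, 2))  # close to 1.0 -> scores are consistent over time
```

A correlation near 1.0 indicates the measure produces consistent rank orderings across administrations; values near 0 would indicate poor test-retest reliability.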
Validity
Validity refers to how well a measure assesses what it claims to measure.
A test must be reliable to be valid, but a reliable test can still be invalid.
High validity: The measure actually assesses the intended variable.
Example: Feline Preference Scale – does the scale truly measure how much a person likes cats?
Summary Table: Research Methods Comparison
| Method | Advantages | Disadvantages | Example |
|---|---|---|---|
| Naturalistic Observation | High external validity, rich data | Lack of control, time-consuming | Observing laptop use in class |
| Case Study | Detailed descriptions | Low generalizability | Rare neurological disorder |
| Self-Report/Survey | Easy to administer, large samples | Response bias, honesty issues | Feline Preference Scale |
Key Terms and Definitions
Population: The entire group a researcher is interested in studying.
Sample: A subset of the population selected for the study.
Variable: Any characteristic or factor that can vary.
Operational Definition: A precise description of how a variable is measured or manipulated.
Reliability: Consistency of a measure.
Validity: Accuracy of a measure in assessing what it is intended to measure.
Social Desirability Bias: Tendency of participants to answer questions in a manner that will be viewed favorably by others.
Formulas and Equations
Test-Retest Reliability (Correlation Coefficient):
r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √[Σ(xᵢ − x̄)² · Σ(yᵢ − ȳ)²]
Cohen's Kappa (Inter-Rater Reliability):
κ = (Pₒ − Pₑ) / (1 − Pₑ)
where Pₒ is the observed agreement and Pₑ is the expected agreement by chance.
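The kappa formula can be worked through directly for two raters coding the same set of behaviors; the ratings below are invented for illustration:

```python
# Hypothetical codes from two raters for eight observed behaviors
rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
n = len(rater1)

# Observed agreement: proportion of items both raters coded the same
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected chance agreement: for each category, multiply the two raters'
# marginal proportions, then sum across categories
categories = set(rater1) | set(rater2)
p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)

kappa = (p_o - p_e) / (1 - p_e)
print(kappa)  # 0 = chance-level agreement, 1 = perfect agreement
```

Here the raters agree on 6 of 8 items (Pₒ = 0.75), and with both raters splitting codes 50/50, chance agreement is Pₑ = 0.5, giving κ = 0.5 — moderate agreement beyond chance.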
Applications and Examples
Operationalizing Variables: Measuring aggression in children using teacher reports or biometric data.
Survey Example: Feline Preference Scale to determine how much a person likes cats.