Impact Evaluation: How We Conduct Valid & Reliable Research on Our Products

Efficacy is not about conducting and passing a single type of study. Nor does a more rigorous study necessarily mean a product is more likely to be effective; it means the claims we are able to draw from the study may be stronger.

Efficacy is therefore a journey: designing research and gathering the evidence necessary to understand the impact products have on learners. To that end, we are careful to distinguish clearly between different types of evidence and what can be learned from each.

Below are some of the methods we use to gather evidence of impact on outcomes, listed roughly in order of increasing rigor; methods further down the list generate more reliable evidence.

  • Efficacy Trials are short-cycle randomized controlled trials designed to examine the effect of a product feature on a specific learner outcome. These are lab-based studies conducted while products are in development. Efficacy trials are critical for ensuring that the product has the desired impact on learner outcomes before it is released for use with learners.
  • In-Class Pilots vary in duration from two weeks to a full semester of use of a trial version of the product within classrooms. When feasible, an in-class pilot may include an embedded randomized controlled trial in which the product randomly presents a feature or capability to some students in order to measure the impact on student behavior and on their achievement on assessments that are also embedded within the platform (a minimal analysis sketch appears after this list). In this case, in-class pilots provide information about efficacy in addition to information about instructor and student use, satisfaction, and perceptions of the product.
  • Implementation Studies are early-stage exploratory studies conducted when the product is used in a regular course for the full duration of the course. These are designed to document, in a systematic fashion, how the product is implemented across different instructional contexts (e.g., blended instruction vs. lab-based settings) and to gather preliminary information about the impact of the product on factors that may influence learner outcomes.
  • Correlational Studies are designed to gain insight into the variables that product and learning design teams believe may predict positive learner achievement and progression outcomes. These studies apply analytical approaches (e.g., structural equation modeling, Bayesian analysis, and hierarchical linear models) to data collected from learners and within the product platform (a sketch of one such model appears after this list). They can uncover patterns of student learning and test the hypotheses we have about how learner behaviors, motivations, and attitudes may predict achievement and progression. The results of correlational studies do not support causal claims, and the studies must be completed in accordance with academic standards for correlational research (see Thompson et al., 2005).
  • Causal Studies employ experimental or quasi-experimental research designs to determine whether and how Pearson products cause positive learner outcomes when implemented in real classrooms. The goal of these studies is to isolate the impact of Pearson products once factors related to differences in implementation, school and classroom characteristics, and the characteristics of instructors and students are taken into account. These studies also compare students who use Pearson products with similar students who do not, in order to make valid and reliable claims about the impact of Pearson products on learner outcomes (a sketch of one common quasi-experimental approach appears after this list). The results generated from these studies must be derived using the strongest research standards defined by the academic research community.
  • Meta-analyses use statistical techniques to combine findings from individual quasi-experimental and experimental studies to produce more precise estimates of the impact of Pearson products. A meta-analysis of the available studies for a product is used to resolve conflicting findings across studies by clarifying whether the variation is due to the mode in which the Pearson product was implemented, the location of implementation (e.g., country or type of institution), the background of the instructors or students who used the product, or the designs used in the individual studies. A meta-analysis can help explain when a product works, for which populations, and under what conditions (a sketch of the pooling calculation appears after this list). At Pearson, meta-analyses are conducted following the academic guidelines provided by Cooper and Hedges (2009) and review guides such as those provided by the US Department of Education's What Works Clearinghouse.
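
To make the embedded randomized controlled trial in an in-class pilot concrete, the sketch below shows how such a comparison might be analyzed. It is a minimal illustration with simulated data: the sample size, score scale, and effect size are invented for the example, and the analysis (random assignment followed by a two-sample t-test) is a simplified stand-in for the analyses actually run on platform data.

```python
# Minimal sketch of an embedded randomized controlled trial: students are
# randomly assigned to see a feature or not, and performance on an embedded
# assessment is compared between groups. All data here are simulated and
# purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_students = 200
# Random assignment: half the students see the new feature (treatment = 1).
treatment = rng.permutation(np.repeat([0, 1], n_students // 2))

# Simulated assessment scores; the treated group gets a small boost
# purely for illustration.
scores = rng.normal(loc=70, scale=10, size=n_students) + 3 * treatment

# Compare mean assessment scores between the two randomly assigned groups.
t_stat, p_value = stats.ttest_ind(scores[treatment == 1], scores[treatment == 0])
effect = scores[treatment == 1].mean() - scores[treatment == 0].mean()
print(f"Estimated effect: {effect:.2f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```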
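
Correlational studies of the kind described above often use hierarchical linear models to respect the nesting of students within course sections. The sketch below fits a random-intercept model with the statsmodels library to simulated data; the variable names (homework_completed, score, section_id) are hypothetical placeholders for the behavioral and outcome measures a real study would draw from the platform.

```python
# Minimal sketch of a correlational analysis using a hierarchical linear model:
# student outcomes nested within course sections. Variable names and data are
# hypothetical illustrations only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_sections, n_per_section = 20, 30
section_id = np.repeat(np.arange(n_sections), n_per_section)
section_effect = rng.normal(0, 5, n_sections)[section_id]  # section-level variation
homework_completed = rng.uniform(0, 1, n_sections * n_per_section)

# Simulated outcome: more homework completion is associated with higher scores.
score = (60 + 20 * homework_completed + section_effect
         + rng.normal(0, 8, n_sections * n_per_section))

df = pd.DataFrame({"score": score,
                   "homework_completed": homework_completed,
                   "section_id": section_id})

# A random intercept for each section accounts for clustering of students.
model = smf.mixedlm("score ~ homework_completed", df, groups=df["section_id"])
result = model.fit()
print(result.summary())
# The coefficient on homework_completed describes an association,
# not a causal effect.
```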
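
Causal studies that cannot randomize often rely on quasi-experimental techniques; one common choice is propensity score matching, in which product users are compared with non-users who have similar background characteristics. The sketch below illustrates the idea with simulated data. The covariates (prior_gpa, credits) are hypothetical, and a real study would use richer covariates, diagnostics, and sensitivity checks.

```python
# Minimal sketch of propensity score matching: estimate each student's
# propensity to use the product from background covariates, match users to
# similar non-users, then compare outcomes. Data are simulated and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 1000
prior_gpa = rng.normal(3.0, 0.5, n)
credits = rng.integers(6, 18, n)
# Product use is more likely for students with higher prior GPA (selection bias).
used_product = (rng.random(n) < 1 / (1 + np.exp(-(prior_gpa - 3.0)))).astype(int)
outcome = 70 + 5 * prior_gpa + 2 * used_product + rng.normal(0, 5, n)

X = np.column_stack([prior_gpa, credits])

# Estimate each student's propensity to use the product from covariates.
propensity = LogisticRegression().fit(X, used_product).predict_proba(X)[:, 1]

# Match each user to the non-user with the closest propensity score.
users = np.where(used_product == 1)[0]
non_users = np.where(used_product == 0)[0]
matches = non_users[np.abs(propensity[non_users][None, :]
                           - propensity[users][:, None]).argmin(axis=1)]

att = (outcome[users] - outcome[matches]).mean()
print(f"Estimated effect among users after matching: {att:.2f} points")
```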
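
Finally, the pooling step of a random-effects meta-analysis can be expressed in a few lines. The sketch below applies the DerSimonian-Laird estimator to a set of invented effect sizes and variances; the numbers are illustrative only and do not represent results for any product.

```python
# Minimal sketch of a random-effects meta-analysis (DerSimonian-Laird),
# pooling hypothetical effect sizes from several individual studies.
import numpy as np

# Standardized mean differences and their variances from individual studies
# (illustrative values only).
effects = np.array([0.30, 0.12, 0.45, 0.20, 0.05])
variances = np.array([0.02, 0.05, 0.04, 0.03, 0.06])

# Fixed-effect weights and Q statistic for between-study heterogeneity.
w = 1 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_mean) ** 2)
k = len(effects)

# DerSimonian-Laird estimate of between-study variance (tau^2).
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled estimate, and its standard error.
w_re = 1 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```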

In addition to the methodologies above, we also work with educators who use Pearson products to capture and support their implementations through Educator Studies. Our Educator Studies (previously known as implementation and results case studies) give voice to our customers and to what they believe is the impact of our products.

  • Educator Studies are completed by instructors who teach with Pearson products. The studies describe why instructors decided to use the product, how they used it in their classrooms (or online courses), and how they believe use of the product affected their teaching and student learning. They allow educators to share blueprints of best practice with other educators seeking new ways to increase student success and to continuously improve integration of the products into their courses. The studies include data from the educator about course achievement and student outcomes, often comparing achievement before and after implementation of the product. In some of these studies, educators also provide information gleaned from surveys of their students. Because these studies do not use experimental or quasi-experimental designs, the outcomes they report are used to provide insights that can be investigated further in more rigorous studies in order to generate more robust claims of impact.