Our Approach
With input from the Efficacy Academic Network, we have developed an eight-stage process for preparing reports on the impact that using our products has on learner outcomes.
The process includes controls designed to make sure our reporting is rigorous and accurate. We refer to the process, controls, documentation, reviews, and audits collectively as our Efficacy Reporting Framework.
We also submit our research studies for peer review by education research organizations such as SRI, and present them at conferences such as those held by the American Educational Research Association.
We hope that sharing details of our approach will encourage others to provide feedback about how we can further improve, replicate, and build on what we have done.
The efficacy reporting process
Stage 1: Determining learner outcomes
In partnership with our customers and learners, we determine what the product is aiming to achieve — its intended learner outcomes.
These act as benchmarks; from this point, the efficacy of the product is measured by its impact on these outcomes.
Stage 2: Designing research studies
We plan ways to investigate the learner outcomes identified in stage 1 — including searching for existing peer-reviewed evidence, and planning new research studies about how using the product affects the outcomes.
This stage ensures our research is relevant to the outcomes that matter most to customers and learners.
Stage 3: Commissioning research studies
We commission in-house and/or third-party researchers to carry out the research studies planned in stage 2.
This stage ensures we commission researchers whose capabilities and experience are appropriate for the design of the research studies.
Stage 4: Implementing research studies
The researchers conduct the research as planned in stage 2. We make sure the research follows all appropriate laws and regulations for the collection and storage of student data, and that the data is analyzed and written up into the Technical Report accurately and in full.
Stage 5: Finalizing research studies
We review the quality of the research studies related to the product against the original research study design from stage 2. We assess whether the researchers’ methodology, analysis of the data, and conclusions are appropriate for the research study design.
This stage is to determine whether the research study can be used to create statements about the efficacy of the product.
Stage 6: Screening for efficacy reporting
We search for other existing research studies related to the product, whether conducted by our in-house researchers or by third-party researchers, and assess whether they are relevant and robust enough to incorporate into our efficacy reporting.
This stage ensures we do not overlook relevant research studies that are published while our research is underway, so we can be sure we are reporting the full story about the product.
Stage 7: Proposed efficacy statements
We review the body of research assembled for the product — including research studies commissioned by Pearson and those created by others — and use it to develop a series of proposed efficacy statements.
Stage 8: Efficacy reports
We draft the web page and Research Report document for the product. We then assess whether they are aligned to the learner outcomes from stage 1, and whether the three layers of the Efficacy Report (web page, Research Report, and Technical Report/s) are consistent with each other.
This process specifically applies to audited efficacy reporting and the research studies that contribute to it. Pearson makes use of a range of different types of research throughout the lifecycle of a product, some of which are not covered here because they are not submitted for audit.
Audited efficacy reporting
Pearson has commissioned PwC to carry out a public, non-financial, third-party audit of our Efficacy Reporting Framework and the efficacy statements we make about our products.
This commitment demonstrates a level of transparency about our efficacy and research that is unparalleled in the commercial education sector. Our goal is for the audit to build trust and to provide additional assurance to our customers, learners, and investors about how we evaluate the impact of using our products.
PwC’s audit includes evaluating Pearson’s Efficacy Reporting Framework, reviewing research data and the conclusions drawn from it, and assessing the integrity of efficacy statements Pearson plans to make about its products.
PwC’s audit opinion appears in each research report.
The Pearson Efficacy Reporting Framework, dated April 4, 2018
Research studies used in efficacy reporting
The research studies that form part of our audited efficacy reporting include implementation studies, correlational studies, causal studies, and, we hope in time, meta-analyses.
Pearson uses a wider range of research types throughout the product lifecycle, including learning experiments and efficacy trials. But when we prepare efficacy reporting and audited efficacy statements for a product, we use only these four study types.
Implementation studies systematically document how a product is used in a real course.
Correlational studies investigate whether changes in two variables are related (a simple illustration follows these definitions).
Causal studies investigate whether there is a cause and effect relationship between two variables.
Meta-analyses combine findings from multiple studies on the same subject.
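As a concrete illustration of the kind of relationship a correlational study examines, the short sketch below computes a correlation coefficient between product usage and an outcome measure. It is a minimal example in Python with invented numbers; the data, variable names, and tooling are assumptions for illustration only and are not drawn from Pearson’s research or reporting.

```python
# Illustrative only: a hypothetical correlational check between
# product usage (hours) and an outcome measure (test score gain).
from math import sqrt

usage_hours = [2.0, 5.5, 3.0, 8.0, 6.5, 1.0, 7.0, 4.5]   # hypothetical data
score_gains = [1.5, 4.0, 2.5, 6.0, 5.5, 0.5, 5.0, 3.0]   # hypothetical data

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

r = pearson_r(usage_hours, score_gains)
print(f"correlation between usage and score gain: r = {r:.2f}")
```

A correlation of this kind only shows that two variables move together; determining whether using the product actually causes the change in outcomes is the role of the causal designs described above.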
Why focus on these methods for efficacy reporting?
In the social sciences, the question of what counts as good evidence of impact is somewhat contentious. Our view is that what counts as high-quality evidence in a given context depends on what we want to know, for what purposes, and in what context we envisage that evidence being used.
In the context of our audited efficacy reporting, we envisage using the evidence to demonstrate impact to our customers, improve existing products, and develop new ones.
This means focusing on building a body of evidence over the lifecycle of a product; collecting appropriate evidence; using tried, tested, and respected methods of analysis; and emphasizing the usefulness of the evidence in understanding learning and improving the product.
Our selection of research types supports this:
- As products go to market, implementation studies explore the range of ways customers use the product.
- As use becomes established, correlational studies reveal relationships between product features, usage patterns, and learner outcomes.
- Causal designs investigate these relationships further, testing whether they can be replicated, and which outcomes may be a result of using the product.
- Meta-analyses combine findings from other studies to give more precise results and resolve any conflicting findings, helping to explain when a product works and for whom.
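To make the meta-analysis step concrete, the sketch below pools effect sizes from several hypothetical studies using inverse-variance (fixed-effect) weighting, one standard way findings can be combined. The figures, the Python code, and the choice of a fixed-effect model are illustrative assumptions, not methods or results taken from Pearson’s efficacy reports.

```python
# Illustrative only: pooling hypothetical effect sizes from three studies
# with inverse-variance (fixed-effect) weights.
from math import sqrt

# Each tuple is (effect size, standard error) for one hypothetical study.
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08)]

weights = [1 / se ** 2 for _, se in studies]             # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))                        # standard error of the pooled effect

print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```

A random-effects model is a common alternative when the underlying studies are expected to differ; which model is appropriate depends on the body of evidence being combined.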
By placing value on these particular methods and applying them to agreed standards, we hope to foster innovation in efficacy research.
Efficacy reporting: The journey so far
Year | Action
2013 | Pearson commits to start publishing annual reports about the efficacy of its products, and to have those reports externally audited by 2018. Pearson publishes ‘The Incomplete Guide to Delivering Learning Outcomes’ with the Shared Value Initiative. Pearson publishes ‘Good intentions to real impact’, an initial approach to efficacy research and reporting developed in partnership with Nesta.
2014–2016 | Pearson refines its approach with input from organizations like the American Educational Research Association, What Works Clearinghouse, and the Efficacy Academic Network.
2015 | The Efficacy Academic Network is formed and Pearson releases an update on progress with efficacy in ‘On the Road to Delivering Learner Outcomes.’ The first dry run: Pearson publishes reports on five of its products in March.
2016 | The second dry run: Pearson goes deeper and broader with efficacy reporting, looking at more products and detailing more rigorous research.
2017 | The third dry run: Pearson publishes five reports and this time also subjects them to a mock audit. Pearson examines its approach in more detail with SRI, with the results published as ‘Understand, implement, evaluate.’ Pearson sponsors and contributes to the EdTech Efficacy Research Academic Symposium, fostering and contributing to discussion and debate around efficacy.
2018 | In April, Pearson publishes its Efficacy Reporting Framework and transparent, rigorous reports about the efficacy of its most widely used digital products, audited by PricewaterhouseCoopers LLP.
The Efficacy Academic Network
The Efficacy Academic Network was formed in 2015. Its remit is to bring its members’ collective academic and research expertise to bear on Pearson’s plans for measuring and reporting efficacy, including how those reports are audited.
The Network’s contributions include both constructive comments on the proposed approach to efficacy reporting, and advice about ways to make it even stronger.
The Efficacy Academic Network is made up of four leading academics from the US, UK, and Australia.