Content Analytics

Summary

Between the idea and the reality falls the shadow: how different levels of efficacy analytics can help product teams make better decisions about content development, and how they can provide insight into the ways learners achieve their goals.

For Efficacy and Research to take shape at Pearson, we need to make sure that they are an integral part of the way we conceive, design and develop our products and services. The vehicle that enables this is the Product Lifecycle. In essence, the Product Lifecycle is a global set of practices and tools that will help Pearson develop market-leading products and underpin all product investment, allowing the company to make more strategic portfolio and investment decisions.

Where we are

In the past, we’ve gathered information about how learners benefit from Pearson products using a mix of methods, predominantly direct interactions and communications with users: interviews with students and instructors; user surveys; students’ participation and performance in their classrooms; and instructors’ reports about student performance. From the information we have collected, we have also observed how instructors select content within products to support their course designs.
That process allowed us to see how our products were being used and how we could improve them. Now, with the effective use of data, we can build on and unite these existing processes, gathering far more information and analysing it at a much deeper level.

As an example, this year in Higher Education alone we’ve taken significant steps forward in defining our needs and gaining access to data that enriches our understanding of product use, in support of efficacy analytics and studies. We have defined a learner data footprint for our products. The platform data extracted against this footprint allowed us to explore the activity of about half a million unique student users in 2015.

First, a bit of context about our Higher Education products. The content is structured following the traditional textbook hierarchy of chapter, section, and problem/assessment.

Unlike traditional textbooks, the content is flexible and can be customized, so that instructors can create their own learning solutions for their students by selectively choosing particular problems, homework assignments, or assessments (quizzes or tests), depending on their course goals and needs.
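To make that structure concrete, here is a minimal sketch, in Python, of the chapter/section/problem hierarchy and an instructor's customised selection from it. The class and field names are hypothetical illustrations, not the actual platform schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Problem:
    problem_id: str
    kind: str  # e.g. "homework", "quiz", "test"

@dataclass
class Section:
    section_id: str
    problems: List[Problem] = field(default_factory=list)

@dataclass
class Chapter:
    chapter_id: str
    sections: List[Section] = field(default_factory=list)

@dataclass
class CustomCourse:
    """An instructor's course: a selective subset of one title's problems."""
    course_id: str
    assigned_problem_ids: List[str] = field(default_factory=list)
```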

Both Pearson and the instructors who customize our products have desired learning paths in mind, whether we are designing those products or they are customizing them. However, how instructors want their students to use the products may differ from how the students actually use them, and what instructors expect their students to learn from their course content and design may differ from what their students actually learn. Being able to infer how well the instructors’ plans worked, by measuring the discrepancy or similarity between instructor expectations and their students’ actual learning, is a critical component of exploring the efficacy of a product.

What are we learning?

Set against this backdrop, one of our efficacy goals is to measure the discrepancy and similarity between the desired plans and the actual learning achieved when instructors and students use Pearson products.
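As a very simple illustration of one way such a comparison could be quantified (not necessarily the measure used in practice), the set of problems an instructor assigned can be compared with the set their students actually completed using a Jaccard-style overlap score:

```python
def overlap_score(assigned: set, completed: set) -> float:
    """Jaccard overlap between the problems an instructor assigned and the
    problems students actually completed. 1.0 means students worked exactly
    the assigned material; lower values indicate a gap between plan and use."""
    if not assigned and not completed:
        return 1.0
    return len(assigned & completed) / len(assigned | completed)

# Hypothetical example: three of four assigned problems were completed,
# plus one unassigned problem attempted on the students' own initiative.
print(overlap_score({"p1", "p2", "p3", "p4"}, {"p1", "p2", "p3", "p5"}))  # 0.6
```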

Data collected during platform use is the raw material we need access to in order to conduct our analysis. One way we can analyze this large platform data set is through content usage analytics. In simple terms, this means analyzing all available courses associated with a given book title. In a single academic year, one book title can generate thousands of courses, used by institutions ranging from small community colleges to large research universities. Analyzing all available data for a single book title tells us how the content (by chapter, by section, by problem, and so on) is actually used by learners and instructors. By undertaking this analysis, we are able to give learning designers, editors, and authors insight into how well their designs and content were aligned with learners’ actual usage.
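As a rough sketch of what this aggregation might look like in code, assuming a flat table of usage events with hypothetical column names rather than the actual platform export format:

```python
import pandas as pd

# Hypothetical platform export: one row per problem within a title,
# with how often it was assigned and how many students solved it.
events = pd.DataFrame({
    "chapter":         [1, 1, 2, 2, 3],
    "section":         [1, 2, 1, 2, 1],
    "problem_id":      ["p1", "p2", "p3", "p4", "p5"],
    "times_assigned":  [120, 95, 60, 40, 12],        # courses assigning the problem
    "students_solved": [2100, 1800, 900, 600, 150],  # distinct students solving it
})

# Roll usage up to chapter/section level across every course for the title.
usage = (events
         .groupby(["chapter", "section"], as_index=False)
         .agg(problems_assigned=("times_assigned", "sum"),
              students_solved=("students_solved", "sum")))
print(usage)
```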

As an example, see below a visual representation of the content analytics we created using the platform data associated with one of the popular MyLab products (a sketch of how such a chart might be drawn follows the list).

  • It shows how many problems were assigned (instructor usage, represented by the height of each dot) and solved (student usage, represented by circle size) within each chapter and section.
  • More specifically, instructors assigned fewer problems in the later chapters, as shown by the lower dot heights towards the right side of the chart.
  • Also, those problems were solved by fewer students, as shown by the smaller circle sizes.
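Here is a minimal sketch of how a chart of this kind might be drawn with matplotlib, using made-up per-section numbers in place of the real MyLab data:

```python
import matplotlib.pyplot as plt

# Hypothetical per-section aggregates (same shape as the usage table above).
sections = ["1.1", "1.2", "2.1", "2.2", "3.1"]
problems_assigned = [120, 95, 60, 40, 12]      # dot height: instructor usage
students_solved = [2100, 1800, 900, 600, 150]  # circle size: student usage

x = range(len(sections))
plt.scatter(x, problems_assigned,
            s=[n / 10 for n in students_solved],  # scale counts to point sizes
            alpha=0.5)
plt.xticks(x, sections)
plt.xlabel("Chapter.Section")
plt.ylabel("Problems assigned")
plt.title("Content usage: assigned (height) vs. solved (circle size)")
plt.show()
```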


Here is another example, from a Mastering product. The visualization below gives us information about content difficulty by chapter and section. It shows 10 distinctly difficult sections in which fewer problems were assigned (10 dots with lower height and the smallest circle size).

Looking across the title in this graph, there are clear areas where instructors assigned fewer items, and areas where students solved fewer of those items.

The instructors and content designers can now look at these specific pieces of content to see what might have caused the lower usage. For example, was the item too difficult or was the item not considered to be relevant to the course?
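As an illustrative sketch, low-usage sections could be flagged automatically with a simple percentile cut-off over the aggregated counts; the threshold and figures below are hypothetical, not the criteria we actually apply:

```python
import pandas as pd

usage = pd.DataFrame({
    "section": ["1.1", "1.2", "2.1", "2.2", "3.1"],
    "problems_assigned": [120, 95, 60, 40, 12],
    "students_solved": [2100, 1800, 900, 600, 150],
})

# Flag sections in the bottom quartile for both instructor and student usage;
# these become candidates for review (too hard? not relevant to the course?).
low_assigned = usage["problems_assigned"] <= usage["problems_assigned"].quantile(0.25)
low_solved = usage["students_solved"] <= usage["students_solved"].quantile(0.25)
print(usage[low_assigned & low_solved])
```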

By identifying these areas, we can analyse, improve, or remove the items in question. This allows us to deliver improved outcomes and a better product.

In addition to content analytics, we also utilise learning analytics to support learner outcomes, exploring correlations between product usage patterns and the outcomes that students actually achieve.
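As a minimal illustration of what such a correlation looks like, here is a sketch using hypothetical per-student figures (and, of course, correlation alone does not establish that usage caused the outcome):

```python
from scipy.stats import pearsonr

# Hypothetical per-student data: problems attempted vs. final course score.
problems_attempted = [5, 12, 20, 28, 35, 40, 50, 55]
final_scores = [52, 58, 63, 70, 74, 78, 85, 88]

r, p_value = pearsonr(problems_attempted, final_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```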

Why this matters

  • To make a meaningful improvement in student outcomes, we believe that our products must work as intended. One way we can check this is through the use of content analytics, which allows us to respond to the experience of teachers and students. We can better identify and correct areas for improvement, and better track whether our products are being used as intended and, alongside other research efforts, whether they are having an impact.
  • We have a responsibility to increasingly deliver products that are delightful to use and have a measurable impact on learners. Analytics is part of that responsibility, and we are on a journey, as with efficacy, to scale this approach across all of our work. We are working to ensure that we use the data we collect wisely. If it can be utilised to improve our products, and therefore the outcomes of learners, we have a responsibility to do exactly that.