Voices of Innovation: A Q&A Series on Generative AI - Part 3

Using technology to improve teaching and learning is in Pearson’s DNA. As the first major higher education publisher to integrate generative AI study tools into its proprietary academic content, Pearson is excited to be harnessing the power of AI to drive transformative outcomes for learners. We are focused on creating tools that combine the power of AI with trusted Pearson content to provide students with a simplified study experience that delivers on-demand and personalized support whenever and wherever they need it.

In this multi-part blog series, you’ll have a chance to hear about AI innovations from Pearson team members, faculty, and students who have been involved with the development and rollout of Pearson’s AI-powered study tools.

Part III takes a deep dive into early learnings from beta testing of the AI-powered study tools in MyLab, Mastering, and eTextbook. Emily Lai, VP of Learning Impact Measurement, and Jessica Yarbro, Principal Research Scientist, team up to share how learning science and research are shaping development of the AI study tool.

Can you tell us more about the work your team has done - and is doing - as part of the beta period and launch for the AI-powered study tool?

We followed the launch of ChatGPT and the uptake of Gen AI with a huge amount of curiosity and interest, so we were excited to collaborate on the design and development of the AI-powered study tool. First, we worked with the product team to explore likely learner needs that would drive students to use the tool. Then we identified the elements of learning (what we call “learning design principles”) that would help ensure that the tool would meet those learner needs. Next came an interesting challenge: helping the team incorporate these learning design principles into the GPT prompts, so that we were telling the AI what to do and how to respond in a way that best supports learning.

For example, we found that when you simply ask ChatGPT to explain something, it sometimes gives the information without structuring it in a particularly thoughtful way. But we know that explanations have more learning value when they help learners situate the topic in a broader context and understand how it relates to other topics. So we added instructions to that effect to the prompt behind the “Explain” feature. We also know feedback is most effective when it is personalized to the student’s response and explains why that response is right or wrong, rather than offering generic statements that simply describe the correct answer. That’s why we ensured that, within the multiple-choice “Practice” feature, there is an individualized feedback message for each answer choice.
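
As a concrete illustration, here is a minimal, hypothetical sketch of how learning design principles like these might be encoded as system-prompt instructions for a GPT-style model. The prompt wording, function name, and model are illustrative assumptions, not Pearson’s actual prompts or implementation.

```python
# Hypothetical sketch: encoding learning design principles as prompt instructions.
# The prompt wording and model name are illustrative, not Pearson's actual prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXPLAIN_SYSTEM_PROMPT = (
    "You are a study assistant. When asked to explain a topic:\n"
    "1. Briefly situate the topic within the broader subject area.\n"
    "2. Explain the core idea in plain language.\n"
    "3. Relate it to one or two topics the student has likely already covered.\n"
    "Do not give away answers to graded questions."
)

def explain(topic: str) -> str:
    """Ask the model for an explanation structured by the learning design principles."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": EXPLAIN_SYSTEM_PROMPT},
            {"role": "user", "content": f"Explain: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain("Le Chatelier's principle"))
```

Per-choice feedback for a “Practice” feature could be handled in the same spirit, by including each answer option in the prompt and asking the model to explain specifically why that option is right or wrong.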

Before the tool was launched, we designed protocols to gather and analyze ratings and feedback from subject matter experts who reviewed thousands of outputs from the bot during iterative waves of testing to ensure responses were safe, accurate, and reliable. Once the AI study tool was launched, we gathered feedback from students at a few points during the semester via short surveys. We also closely monitored the use of the AI study tool in four Intro Chemistry classrooms, gathering information about the courses themselves, how students were using the tools to complete assignments, and perceptions of the tools from both students and instructors.

ChatGPT and AI can be polarizing topics for educators and students on Higher Ed campuses. What did you find to be some of the common preconceptions among faculty and students?

The instructors we spoke to in the fall generally felt that their students had little prior exposure to generative AI before that point. Some perceived a generally negative attitude toward using AI for learning among both their colleagues and their students. At least one instructor speculated that use of generative AI could compound cheating.

However, once we were able to walk these instructors through the AI-powered study tool and its features, they could see how it might help their students. In terms of the eTextbook AI study tool, instructors felt they could trust the output and predicted the bot would help students focus on the basics and stay on track with course content, while also stretching high performers. Regarding the MyLab/Mastering AI study tool, instructors compared it to a tutor that can help students when they’re stuck without giving away the answer or overwhelming them with too much detail.

Their students reported relatively positive views about using AI for learning, with 74% of survey respondents saying they were either “optimistic” or “enthusiastic” about the prospects of AI supporting their learning. Even students who only used the tool once or twice in the semester were open to using AI for learning, with nearly 70% reporting they either “would consider” or “were very likely to” use AI for their study.

What are the most significant findings or insights from the beta test that have the potential to shape the product's development or strategy moving forward?

One of the most interesting findings relates to the eTextbook AI study tool Practice Questions feature. This feature saw less usage than either the Summary or Explanation feature, but students who answered more practice questions seemed to have a more positive experience and had higher end-of-course grade expectations compared to students who answered fewer questions. This makes sense from a learning science perspective, as self-testing one’s learning is an effective way to improve knowledge retention. So why weren’t more students taking advantage of this feature? We’re currently exploring how to surface practice questions, in the form of “Comprehension Checks”, after explanations and summaries to make students more aware of the Practice Questions feature. This has the added learning benefit of incorporating practice and self-testing into more AI study tool features.

We also recognized that students may not know the most effective ways to use these tools. For example, learning science tells us that active study (e.g., taking notes while reading, self-testing) is more beneficial for learning than passive study (e.g., re-reading portions of the text), and that spacing out study sessions can help reduce fatigue and boost long-term retention. So, we designed a series of Study Tips to encourage students to use the tool in particularly active ways. For example, students can use the Explain feature in a more active way by first trying to explain a topic to themselves, then using the tool’s explanation to check their understanding. We’re excited to see whether these Study Tips nudge more active and efficient usage of the AI study tools.

The learning impact measurement team has also been focused on the student perspective and understanding the impact these tools could have on improving learning and independent study. Can you share early learnings from students using the AI-powered study tools?

We consistently heard positive feedback from students about the AI study tool. Students using the eTextbook tool overwhelmingly reported positive experiences, both during midterms and around finals. Those participating in the classroom beta test also generally liked the eTextbook bot, seeing its primary use cases as prepping for class and saving time on homework/studying. More than two-thirds of these students reported that:

  • Summaries were easy to understand, at the right level of detail, and helpful for focusing on essential ideas
  • Explanations were accurate and helpful for clarifying complex concepts
  • Practice questions covered the most important topics and encouraged self-reflection

Nearly 80% of these students said they would be likely or very likely to use the tool again in another course. Similarly, students who used the MyLab/Mastering AI study tool liked it, seeing its primary use cases as making studying more enjoyable, prepping for class, and fitting learning around their schedule. Around 90% perceived the tool as easy to use and a reliable source of help, and said it supported their confidence to tackle complex topics or problems. A similar percentage said they would be likely or very likely to use the tool again in another course. One interesting takeaway from the fall is that there may be synergies from using both AI study tools together (in eTextbook and MyLab/Mastering) – students who used both had more positive views and were more likely to say they’d use the tools again compared to students who only used one or the other.

It’s tricky to determine whether the AI study tool is helping students be more successful in their course because assigning final course grades is at the discretion of the instructor and happens outside of our products. However, we have looked at whether using the AI study tool influenced learner behavior in their eTextbook. In particular, we were interested in how actively or passively students were studying and how well they were spacing out their study sessions.

In order to understand whether usage of the tool was influencing behavior (rather than the other way around), we restricted our analysis to returning students – students who were P+ eTextbook users in the spring and came back in the fall. This approach allowed us to focus on whether their behavior changed after using the tool. We saw that AI study tool users were more than three times as likely to start or continue using those more active and effective study behaviors within the eTextbook compared to non-users (when controlling for spring behavior and title). Although this research is correlational and not causal, it provides support for the hypothesis that using the AI study tool can lead students to engage more deeply in their eTextbook.
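
For readers curious what an analysis like this can look like in practice, here is a minimal, hypothetical sketch of a logistic regression relating AI study tool use to active study behavior in the fall while controlling for prior (spring) behavior and title. The column names and synthetic data are illustrative assumptions, not Pearson’s actual dataset or model.

```python
# Hypothetical sketch of a correlational analysis like the one described above:
# a logistic regression of active fall study behavior on AI study tool use,
# controlling for spring behavior and eTextbook title.
# Column names and synthetic data are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ai_tool_user": rng.integers(0, 2, n),                  # used the AI study tool in fall
    "active_spring": rng.integers(0, 2, n),                 # active study behaviors in spring
    "title": rng.choice(["chem_intro", "bio_intro"], n),    # eTextbook title
})

# Synthetic outcome: active study behaviors in fall, loosely tied to the predictors.
logit_p = -1.0 + 1.1 * df["ai_tool_user"] + 0.8 * df["active_spring"]
df["active_fall"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Fit the logistic regression; C(title) adds a fixed effect for each title.
model = smf.logit("active_fall ~ ai_tool_user + active_spring + C(title)", data=df).fit()
print(model.summary())

# Exponentiated coefficients are odds ratios; an odds ratio well above 1 for
# ai_tool_user would be consistent with tool users being more likely to start
# or continue active study behaviors, holding spring behavior and title fixed.
print(np.exp(model.params))
```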

How do the actual results and findings from the beta test compare with the team's initial expectations and hypotheses? In what ways have the outcomes influenced the team's understanding of the product's potential and its market fit?

We’ve been really pleased at how students and faculty have embraced the tool. Given how disruptive Gen AI has been to college campuses, and given concerns around AI short-circuiting the learning process, it’s been encouraging to see how useful this tool has been for students and how easily they integrated it into their learning practice. Even more exciting is the evidence that seems to suggest that the AI study tool is pushing students to be more active and effective learners!

In case you missed it!

  • Part I - We sat down with our SVP of Product Management, David Kokorowski, to capture his insights on AI in higher education and the path forward at Pearson. View the Part I blog
  • Part II - Hear from Chris Hess and Emily Ockay from the Higher Ed Product Management team, who are working diligently to bring the power of AI thoughtfully into Pearson learning platforms. View the Part II blog