Teaching and Learning blog

Explore insights, trends, and research that impact teaching, learning, and leading.


  • A man sits in his home office, taking notes while working on his laptop.

    Getting to the heart of great courseware

    By Pearson

    For instructors and students alike, the path to success has become far more challenging. Students are arriving with different life and learning priorities, and varying levels of preparation. Everyone’s working harder, in the face of greater obstacles and deeper uncertainty. Instructors and students both need more effective support, in an era where resources are scarce. Courseware has always been a key resource, but today it needs to deliver more than ever. This makes your courseware decisions even more crucial. 

    Great courseware doesn’t just happen: everything about it is intentional. In this blog post, we’ll discuss how we're delivering on three of Pearson’s core priorities for building courseware that helps instructors and learners thrive – outcomes, equity, and accessibility.  

    Achieve the outcomes that matter

    The most important outcomes are those that learners and instructors want, to help them realize the lives they imagine. Our outcome-based design processes help us understand and identify those upfront, as a “north star” to keep all of us aligned and on track. 

    When we say “all of us,” we’re talking about a wide array of world-class, cross-disciplinary experts all working together, including: 

    • Learning scientists who ensure our products reflect the latest, best evidence on what helps students learn, helps instructors teach, helps people effectively use technology, and helps promote career progress 
    • User experience and content professionals who build and evolve engaging and personalized digital learning platforms, maximize relevance, and present material in powerfully compelling ways
    • Assessment experts who embed opportunities for continual student progress assessment, and identify opportunities to improve our products
    • 6,000+ trusted authors who bring their unique voices and cutting-edge knowledge, so students never forget they’re learning from other remarkable human beings.

    All this expertise translates into real effectiveness and strong outcomes. Take, for example, the experience of the University of Maryland Global Campus (UMGC), which serves 305,000 online students worldwide, many of them non-traditional or not fully prepared for college-level work.

    Responding to a goal of reducing developmental prerequisites in college-level math and statistics, UMGC faculty assessed Pearson’s MyLab® and an OER alternative through a 2.5-year pilot encompassing 12 instructors and 6,500 students. Based on the pilot’s remarkable results, UMGC has rolled out MyLab widely. That has translated into dramatic improvements: student success rose from 60% to 80% in statistics and from 50% to 80% in algebra compared with the OER alternative.

    Faculty evaluations have improved, too. Freed from grading, instructors had more time to guide individual students, and they also had richer data to tailor courses around their needs. 

    UMGC’s experience is just one example of how Pearson’s outcome-based design, rooted in superior learning science, is helping real learners. Outcomes like these thrill us – they’re why we do what we do.

    Extend great learning to everyone

    At Pearson, the words “diversity, equity, and inclusion” aren’t cliches or trendy buzzwords. They’re a way of life deeply grounded in beliefs we’ve held for generations: Every individual can benefit from learning, and learning is a powerful force for positive change. Everyone should be welcomed into learning. Everyone should have a fair opportunity to learn, and learning should work for all students.

    What matters more than our beliefs is what we do about them. We’ve built, and we enforce, comprehensive policies for making sure we authentically, inclusively, and respectfully represent people of all kinds. We are committed to minimizing bias. Our content celebrates diverse identities and lived experiences (see some complimentary examples here). We draw on many best practices and frameworks to provide high-quality inclusive content. We offer practical ways to report and dialogue about potential bias in our products. We do all of this so that our products are more inclusive, more relevant, and more accurate. Our DE&I approach to content development results in better products that center learners and increase student engagement.

    Finally, we understand that effectively embedding diversity, equity, and inclusion in our work is a journey. We honor and promote DE&I internally, to ensure that our offerings are created by teams who reflect those we serve. We’re proud to have earned the Human Rights Campaign’s “Best Place to Work for LGBTQ Equality” award, inclusion in Bloomberg’s Gender Equality Index, and a top grade in the Disability Equality Index, the most comprehensive benchmark for disability inclusion.

    By doing all this, we’re serving learners’ demands. Our 2021 Global Learner Survey found that 80% of learners were trying to educate themselves about issues related to social justice, diversity or gender equality, rising to 84% among millennials and 85% among Gen Z.

    Ensure accessibility to meet everyone’s potential

    For too long, people were excluded from full access to education based on disabilities that were irrelevant to their potential. We’re determined to overcome that, one individual at a time. Our commitment is woven into our learning materials, development processes, innovation efforts, employee culture, and partnerships.

    More specifically: We follow the Web Content Accessibility Guidelines (WCAG) 2.1 and Section 508 of the Rehabilitation Act for products copyrighted 2022 or later. We’ve established comprehensive accessibility standards for creating products that are perceivable, operable, understandable, and robust. We’ve built a roadmap for addressing accessibility issues in our existing MyLab and Mastering courses, and we’re doing extensive audits to remove barriers elsewhere. Our teams participate in rigorous, ongoing accessibility training. As of this writing, we offer nearly 900 accessible eTextbooks, and we’re working with T-Base Communications to accelerate delivery of top-selling Pearson titles in braille and reflowed large print.

    Finally, to make sure we truly understand what learners need, we work closely on an ongoing basis with key members of the disability and advocacy community, and with organizations such as W3C, DIAGRAM Center, DAISY Consortium, Benetech, and the National Federation of the Blind.

    Get what your learners deserve

    Delivering on these commitments to outcomes, equity, and accessibility requires extensive resources, skills, and commitment. Not all of the world’s courseware reflects these values. But we think today’s learners should expect no less – and neither should you.

    Explore new ways to help your students succeed.

  • College student reading digital content on a laptop

    Digital reading strategies to improve student success

    By Dr. Rachel Hopman-Droste

    As a learning scientist and former instructor, I’ve been watching the topic of digital content develop for a while now. In the past, it’s been regarded as a poor substitute for the printed text when it comes to student comprehension. However, new research shows we’ve reached a turning point in digital reading. My colleagues Dr. Clint Johns, Julia Ridley, and I reviewed 40 peer-reviewed research studies from the last five years, focused mostly on higher education learners in the US.¹ Based on our review, most research shows that well-designed digital content can be understood as effectively as print and offers added benefits for readers.

  • blog image alt text

    New report: Demand-driven education

    By Caroline Leary, Manager, Pearson

    A new report responds to The Future of Skills by exploring its implications for education systems and offers up practical solutions for higher education to more closely align with what the workforce needs.

    We are excited to share a new report by Jobs for the Future (JFF) and Pearson that explores the changing world of work and provides recommendations for shifting from the traditional route to employment to a network of pathways that is flexible, dynamic, and ultimately serves more learners.

    Released at the Horizons conference in June, Demand-Driven Education: Merging work and learning to develop the human skills that matter looks at what is required for transitioning to the third wave in postsecondary education reform – demand-driven education.

    The first wave – access – was focused on getting more people to enter higher education. The second wave was focused on improving achievement – getting more students to earn degrees and certificates.

    In this third wave, the worlds of education and work will converge, producing programs that ensure students are job-ready and primed for lifelong career success.

    Adapting to the needs of both the learner and the employer, “demand-driven education takes account of the emerging global economy — technology-infused, gig-oriented, industry-driven — while also striving to ensure that new graduates and lifelong learners alike have the skills required to flourish.”

    The report states, “as the future of work unfolds, what makes us human is what will make us employable.”

    While technological literacy is critical, learners need educational experiences that cultivate skills, including fluency of ideas, originality, judgment, decision-making, and active learning, all supported by collaborative academic and career paths.

    Higher education and employers are making headway in this arena with innovative programs like University of North Texas’s Career Connect and Brinker International’s Best You EDU.

    In a recent interview, Joe Deegan, co-author of the report and senior program manager at JFF, said, “Although technology such as digital assessment might enable educators to make programs faster and more adaptive, the most significant change is one of mindset.”

    The future is bright. And there’s a lot of good work to do through active collaboration and partnership to create rewarding postsecondary learning experiences that are responsive to our changing world and inclusive of all learners.


  • blog image alt text

    What do Generation Z and millennials expect from technology in education?

    By Pearson

    Pearson study reveals Generation Z and millennials’ learning preferences

    Young people are the first to admit they can easily spend hours a day on the internet—whether it’s via a desktop computer, tablet, or smartphone. While they may be tech-savvy by nature, this innate connectivity poses the question of technology’s place as it relates to how Generation Z and millennials learn.

    In a recent survey of 2,558 14- to 40-year-olds in the US, Pearson explored attitudes, preferences, and behaviors around technology in education, identifying some key similarities and differences between Gen Z and millennials.

    While 39% of Gen Z prefer learning with a teacher leading the instruction, YouTube is also their #1 preferred learning method. And 47% of them spend three hours or more a day on the video platform. On the other hand, millennials need more flexibility—they are more likely to prefer self-directed learning supported by online courses with video lectures. And while they are known for being the “plugged in” generation, it’s apparent that plenty of millennials still prefer a good old-fashioned book to learn.

    Regardless of their differences, the vast majority of both Gen Z and millennials are positive about the future of technology in education. 59% of Gen Z and 66% of millennials believe technology can transform the way college students learn in the future.

    See the infographic below, “Meeting the Expectations of Gen Z in Higher Ed,” for additional insights into Generation Z and millennials’ learning preferences.

  • blog image alt text

    Grade Increase: Tracking Distance Education in the U.S. [Infographic]

    By Caroline Leary, Manager, Pearson

    In 2016, distance education enrollment continued to grow for the 14th straight year.

    This is the headline coming out of Grade Increase: Tracking Distance Education in the United States – a recent report released by Babson Survey Research Group (BSRG).

    As stated in BSRG’s press release: “The growth of distance enrollments has been relentless,” said study co-author Julia E. Seaman, research director of the Babson Survey Research Group. “They have gone up when the economy was expanding, when the economy was shrinking, when overall enrollments were growing, and now when overall enrollments are shrinking.”

    Explore the key findings from Grade Increase in our infographic below and download the full report to dive in deeper.

     

  • blog image alt text

    Teaching collaboration skills from cradle to career

    By Emily Lai, Ph.D., Kristen DiCerbo, Ph.D., and Peter Foltz, Ph.D.

    We’ve heard from Emily Lai, Ph.D., twice before. Last year, she shared the story of her work in Jordan to improve learning opportunities for the children of Syrian refugees. More recently, she offered her tips for parents and teachers on helping students improve their information literacy.

    The Components of Collaboration

    “Most of us know what collaboration is, at least in its most basic sense,” says Emily Lai, Ph.D.

    “It means working with others to achieve a common goal.”

    Emily is Director of Formative Assessment and Feedback for Pearson. Her work is focused on improving the ways we assess learners’ knowledge and skills, and ensuring results support further learning and development.

    “We’ve been reviewing the research, trying to figure out what we know about collaboration and how to support it. For example, we know that collaboration skills have an impact on how successful somebody is in all kinds of group situations—at school, on the job, and even working with others within a community to address social issues.”

    Teaching Collaboration in the Classroom

    Teaching collaboration skills in the classroom can be harder than expected, Emily says.

    “When a teacher assigns a group project, oftentimes students will divide up the task into smaller pieces, work independently, and then just shove their parts together at the very end.”

    “In that case, the teacher likely had good intentions to help develop collaboration skills in students. But it didn’t happen.”

    Checking all the Boxes

    “Tasks that are truly supportive of collaboration are not easy to create,” Emily says.

    Digging deeper, Emily says there are three sub-components of successful collaboration:

    Interpersonal communication – how you communicate verbally and non-verbally with your teammates.

    Conflict resolution – your ability to acknowledge and resolve disagreements in a manner consistent with the best interest of the team.

    Task management – your ability to set goals, organize tasks, track team progress against goals, and adjust the process along the way as needed.

    Emily says she understands how difficult it can be for educators to check all three boxes.

    Before beginning an assignment, Emily suggests teachers talk to students explicitly about collaboration: what makes a good team member versus what makes a difficult one, as well as strategies for working with others, sharing the load responsibly, and overcoming disagreements.

    During group work, she says, observe students’ verbal and non-verbal behavior carefully and provide real-time feedback.

    “Talk with them about how they’re making decisions as a group, sharing responsibility, and dealing with obstacles,” Emily says.

    “In the classroom, it’s all about the combination of teaching collaboration skills explicitly, giving students opportunities to practice those skills, and providing feedback along the way so those skills continue to develop.”

    “The research shows that students who develop strong collaboration skills get more out of those cooperative learning situations at school.”

    Teaching Collaboration at Home

    Emily is a mother of two daughters, 4 and 8.

    At home, she says, there’s one part of collaboration that is especially valuable: conflict resolution.

    “Most often, it comes in handy on movie nights.”

    “The 8-year-old tends to gravitate towards movies that are a little too scary for the 4-year-old, and the 4-year-old tends to gravitate towards movies that are a little too babyish for the 8-year-old.”

    “It would be easy to intervene and just pick a movie for them, but my husband and I do our best to stay out of it,” Emily says.

    “We’ve established the procedure that they have to negotiate with each other and agree on a movie, and now they have a collaborative routine in place.”

    “They know they get to watch a movie, and we know they’re learning along the way.”

    “Taking turns in conversation is another big one for the 4-year-old,” Emily says.

    “She doesn’t like to yield the floor, but it’s something we’re working on.”

    “I know from the research that if my daughters learn these collaboration skills, they are more likely to be successful in their future careers.”

    Sharing the Latest Research

    This week, Emily and two of her colleagues are releasing a research paper entitled “Skills for Today: What We Know about Teaching and Assessing Collaboration.”

    The paper will be jointly released by Pearson and The Partnership for 21st Century Learning (P21), a Washington, DC-based coalition that includes leaders from the business, education, and government sectors.

    “We teamed up on this paper because we both believe collaboration is too important for college, career, and life to leave to chance,” Emily says.

    It is the first in a four-part series on what is known about teaching and assessing “the Four Cs”: collaboration, critical thinking, creativity, and communication.

    “P21 is the perfect partner for this effort,” Emily says.

    “Our partnership signifies a joint commitment to helping stakeholders—educators, parents, policy-makers, and employers—understand what skills are needed to be successful today, and how to teach them effectively at any age.”


    To download the full version of “Skills for Today: What We Know about Teaching and Assessing Collaboration,” click here.

    Three executive summaries of the paper are also available.

    Pearson LearnEd originally published this article on April 24th, 2017, and it was re-posted here with permission.

     
  • blog image alt text

    Is ed tech really working? 5 core tenets to rethink how we buy, use, and measure new tools

    By Todd Bloom, David Deschryver, Pam Moran, Chrisandra Richardson, Joseph South, Katrina Stevens

    This is the fifth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, and fourth pieces.

    Education technology plays an essential role in our schools today. Whether the technology supports instructional intervention, personalized learning, or school administration, the successful application of that technology can dramatically improve productivity and student learning.

    That said, too many school leaders lack the support they need to ensure that educational technology investment and related activities, strategies, or interventions are evidence-based and effective. This gap between opportunity and capacity is undermining the ability of school leaders to move the needle on educational equity and to execute on the goals of today’s K-16 policies. The education community needs to clearly understand this gap and take some immediate steps to close it.

    The time is ripe

    The new federal K-12 law, the Every Student Succeeds Act, elevates the importance of evidence-based practices in school purchasing and implementation practices. The use of the state’s allocation for school support and improvement illustrates the point. Schools that receive these funds must invest only in activities, strategies, or interventions that demonstrate a statistically significant effect on improving student outcomes or other relevant outcomes.

    That determination must rely on research that is well designed and well implemented, as defined in the law. And once implementation begins, the U.S. Department of Education asks schools to focus on continuous improvement by collecting information about the implementation and making necessary changes to advance the goals of equity and educational opportunity for at-risk students. The law, in short, links compliance with evidence-based procurement and implementation that is guided by continuous improvement.

    New instructional models in higher education rely on evidence-based practices if they are to take root. School leaders are under intense pressure to find ways to make programs more affordable, student-centered, and valuable to a rapidly changing labor market. Competency-based education (the unbundling of certificates and degrees into discrete skills and competencies) is one of the better-known responses to the challenge, but the model will likely stay experimental until there is more evidence of success.

    “We are still just beginning to understand CBE,” Southern New Hampshire University President Paul LeBlanc said. “Project-based learning, authentic learning, well-done assessment rubrics — those are all good efforts, but do we have the evidence to pass muster with a real assessment expert? Almost none of higher ed would.”

    It is easy to forget that the abundance of educational technology is a relatively new thing for schools and higher ed institutions. Back in the early 2000s, the question was how to make new educational technologies viable instructional and management tools. Education data was largely just a lagging measure used for school accountability and reporting.

    Today, the data can provide strong, real-time signals that advance productivity through, for example, predictive analytics, personalized learning, curriculum curating and delivery, and enabling the direct investigation into educational practices that work in specific contexts. The challenge is how to control and channel the deluge of bytes and information streaming from the estimated $25.4 billion K-16 education technology industry.

    “It’s [now] too easy to go to a conference and load up at the buffet of innovations. That’s something we try hard not to do,” said Chad Ratliff, director of instructional programs for Virginia’s Albemarle County Schools. The information has to be filtered and vetted, which takes time and expertise.

    Improving educational equity is the focus of ESSA, the Higher Education Act, and a key reason many school leaders chose to work in education. Moving the needle increasingly relies on evidence-based practices. As the Aspen Institute and Council of Chief State School Officers point out in a recent report, equity means — at the very least — that “every student has access to the resources and educational rigor they need at the right moment in their education despite race, gender, ethnicity, language, disability, family background, or family income.”

    Embedded in this is the presumption that the activities, strategies, or interventions actually work for the populations they intend to benefit.

    Educators cannot afford to invest in ineffective activities. At the federal K-12 level, President Donald Trump is proposing that, next year, Congress cut spending for the Education Department and eliminate many programs, including $2.3 billion for professional development programs, $1.2 billion for after-school funds, and the new Title IV grant that explicitly supports evidence-based and effective technology practices in our schools.

    Higher education is also in a tight spot. The president seeks to cut spending in half for Federal Work-Study programs, eliminate Supplemental Educational Opportunity grants, and take nearly $4 billion from the Pell Grant surplus for other government spending. At the same time, Education Secretary Betsy DeVos is reviewing all programs to explore which can be eliminated, reduced, consolidated, or privatized.

    These proposed cuts and reductions increase the urgency for school leaders to tell better stories about the ways they use the funds to improve educational opportunities and learning outcomes. And these stories are more compelling (and protected from budget politics) when they are built upon evidence.

    Too few resources

    While this is a critical time for evidence-based and effective program practices, here is the rub: The education sector is just beginning to build out this body of knowledge, so school leaders are often forging ahead without the kind of guidance and research they need to succeed.

    The challenges are significant and evident throughout the education technology life cycle. For example, it is clear that evidence should influence procurement standards, but that is rarely the case. The issue of “procurement standards” is linked to cost thresholds and related competitive and transparent bidding requirements. It is seldom connected with measures of prior success and research related to implementation and program efficacy. Those types of standards are foreign to most state and local educational agencies, left to “innovative” educational agencies and organizations, like Digital Promise’s League of Innovative Schools, to explore.

    Once the trials of implementation begin, school leaders and their vendors typically act without clear models of success and in isolation. There just are not good data on efficacy for most products and implementation practices, which means that leaders cannot avail themselves of models of success and networks of practical experience. Some schools and institutions with the financial wherewithal, like Virginia’s Albemarle and Fairfax County Public Schools, have created their own research process to produce their own evidence.

    In Albemarle, for example, learning technology staff test-bed solutions to instructional and enterprise needs. Staff spend time observing students and staff using new devices and cloud-based services. They seek feedback and performance data from both teachers and students in response to questions about the efficacy of the solution. They will begin with questions like “If a service is designed to support literacy development, what variable are we attempting to affect? What information do we need to validate significant impact?” Yet, like the “innovators” of procurement standards, these are the exceptions to the rule.

    And as schools make headway and immerse themselves in new technologies and services, the bytes of data and useful information multiply, but the time and capacity necessary to make them useful remains scarce. Most schools are not like Fairfax and Albemarle counties. They do not have the staff and experts required to parse the data and uncover meaningful insights into what’s working and what’s not. That kind of work and expertise isn’t something that can be simply layered onto existing responsibilities without overloading and possibly burning out staff.

    “Many schools will have clear goals, a well-defined action plan that includes professional learning opportunities, mentoring, and a monitoring timeline,” said Chrisandra Richardson, a former associate superintendent for Montgomery County Public Schools in Maryland. “But too few schools know how to exercise a continuous improvement mindset, how to continuously ask: ‘Are we doing what we said we would do — and how do we course-correct if we are not?’ ”

    Immediate next steps

    So what needs to be done? Here are five specific issues that the education community (philanthropies, universities, vendors, and agencies) should rally around.

    • Set common standards for procurement. If every leader must reinvent the wheel when it comes to identifying key elements of the technology evaluation rubric, we will ensure we make little progress — and do so slowly. The sector should collectively secure consensus on the baseline procurement standards for evidence-based and research practices and provide them to leaders through free or open-source evaluative rubrics or “look fors” they can easily access and employ.
    • Make evidence-based practice a core skill for school leadership. Every few years, leaders in the field try to pin down exactly what core competencies every school leader should possess (or endeavor to develop). If we are to achieve a field in which leaders know what evidence-based decision-making looks like, we must incorporate it into professional standards and include it among our evaluative criteria.
    • Find and elevate exemplars. As Charles Duhigg points out in his recent best seller Smarter Faster Better, productive and effective people do their work with clear and frequently rehearsed mental models of how something should work. Without them, decision-making can become unmoored, wasteful, and sometimes even dangerous. Our school leaders need to know what successful evidence-based practices look like. We cannot anticipate that leader or educator training will incorporate good decision-making strategies around education technologies in the immediate future, so we should find alternative ways of showcasing these models.
    • Define “best practice” in technology evaluation and adoption. Rather than force every school leader to develop and struggle to find funds to support their own processes, we can develop models that can alleviate the need for schools to develop and invest in their own research and evidence departments. Not all school districts enjoy resources to investigate their own tools, but different contexts demand differing considerations. Best practices help leaders navigate variation within the confines of their resources. The Ed Tech RCE Coach is one example of a set of free, open-source tools available to help schools embed best practices in their decision-making.
    • Promote continuous evaluation and improvement. Decisions, even the best ones, have a shelf life. They may seem appropriate until evidence proves otherwise. But without a process to gather information and assess decision-making efficacy, it’s difficult to learn from any decisions (good or bad). Together, we should promote school practices that embrace continuous research and improvement practices within and across financial and program divisions to increase the likelihood of finding and keeping the best technologies.

    The urgency to learn about and apply evidence to buying, using, and measuring success with ed tech is pressing, but the resources and protocols school leaders need to make it happen are scarce. These are conditions that position our school leaders for failure — unless the education community and its stakeholders get together to take some immediate actions.

    This series is produced in partnership with Pearson. The 74 originally published this article on September 11th, 2017, and it was re-posted here with permission.

  • blog image alt text

    Communicate often and better: How to make education research more meaningful

    By Jay Lynch, PhD and Nathan Martin, Pearson

    Question: What do we learn from a study that shows a technique or technology likely has affected an educational outcome?

    Answer: Not nearly enough.

    Despite widespread criticism, the field of education research continues to emphasize statistical significance—rejecting the conclusion that chance is a plausible explanation for an observed effect—while largely neglecting questions of precision and practical importance. Sure, a study may show that an intervention likely has an effect on learning, but so what? Even researchers’ recent efforts to estimate the size of an effect don’t answer key questions. What is the real-world impact on learners? How precisely is the effect estimated? Is the effect credible and reliable?
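    The gap between statistical and practical significance is easy to demonstrate. Here is a minimal sketch, using simulated data and invented numbers purely for illustration: with enough students, even a half-point bump on a 100-point test clears the conventional significance bar while remaining practically negligible.

```python
import math
import random
import statistics

random.seed(0)
# Two simulated classrooms: the "intervention" group scores a mere
# 0.5 points higher on a 100-point test -- a practically trivial effect.
control = [random.gauss(70.0, 10.0) for _ in range(20000)]
treated = [random.gauss(70.5, 10.0) for _ in range(20000)]

def welch_t(a, b):
    """Welch's t-statistic for the difference in group means."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t = welch_t(control, treated)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
pooled_sd = math.sqrt((statistics.variance(control) + statistics.variance(treated)) / 2)
d = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# With 20,000 students per group, |t| easily exceeds the conventional
# "significant at p < .05" threshold of about 2, yet d is only ~0.05 --
# far below any threshold of practical importance.
print(f"t = {t:.1f}, Cohen's d = {d:.3f}")
```

    The point of the sketch: "the intervention worked (p < .05)" and "the intervention matters" are different claims, and only the second answers the questions posed above.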

    Yet it’s the practical significance of research findings that educators, administrators, parents and students really care about when it comes to evaluating educational interventions. This has led to what Russ Whitehurst has called a “mismatch between what education decision makers want from the education research and what the education research community is providing.”

    Unfortunately, education researchers are not expected to interpret the practical significance of their findings or acknowledge the often embarrassingly large degree of uncertainty associated with their observations. So, education research literature is filled with results that are almost always statistically significant but rarely informative.

    Early evidence suggests that many edtech companies are following the same path. But we believe that they have the opportunity to change course and adopt more meaningful ways of interpreting and communicating research that will provide education decision makers with the information they need to help learners succeed.

    Admitting What You Don’t Know

    For educational research to be more meaningful, researchers will have to acknowledge its limits. Although published research often projects a sense of objectivity and certainty about study findings, accepting subjectivity and uncertainty is a critical element of the scientific process.

    On the positive side, some researchers have begun to report standardized effect sizes, a calculation that helps compare outcomes in different groups on a common scale. But researchers rarely interpret the meaning of these figures, and the figures themselves can be confusing. A ‘large’ effect may actually be quite small when compared to available alternatives or when factoring in the length of treatment, and a ‘small’ effect may be highly impactful because it is simple to implement or cumulative in nature.

    Confused? Imagine the plight of a teacher trying to decide what products to use, based on evidence—an issue of increased importance since the Every Student Succeeds Act (ESSA) promotes the use of federal funds for certain programs, based upon evidence of effectiveness. The newly-launched Evidence for ESSA admirably tries to help support that process, complementing the What Works Clearinghouse and pointing to programs that have been deemed “effective.” But when that teacher starts comparing products, say Math in Focus (effect size: +0.18) and Pirate Math (effect size: +0.37), the best choice isn’t readily apparent.

    It’s also important to note that every intervention’s observed “effect” is associated with a quantifiable degree of uncertainty. By glossing over this fact, researchers risk promoting a false sense of precision and making it harder to craft useful data-driven solutions. While acknowledging uncertainty is likely to temper excitement about many research findings, in the end it will support more honest evaluations of an intervention’s likely effectiveness.
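    To make that uncertainty concrete, here is a minimal sketch, again with simulated data and illustrative numbers only: in a small pilot study, reporting a confidence interval rather than a bare point estimate reveals just how wide the plausible range for the "effect" really is.

```python
import math
import random
import statistics

random.seed(1)
# Hypothetical pilot: 40 students per condition on a 100-point assessment,
# with a true gain of 4 points built into the simulation.
control = [random.gauss(70.0, 10.0) for _ in range(40)]
treated = [random.gauss(74.0, 10.0) for _ in range(40)]

gain = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(control) / 40 + statistics.variance(treated) / 40)

# 95% confidence interval for the raw score difference (normal approximation).
low, high = gain - 1.96 * se, gain + 1.96 * se
print(f"estimated gain: {gain:.1f} points, 95% CI [{low:.1f}, {high:.1f}]")
```

    With only 40 students per group, the interval spans several points on the test; a report that says only "significant, p < .05" conceals that entire range of plausible outcomes.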

    Communicate Better, Not Just More

    In addition to faithfully describing the practical significance and uncertainty around a finding, there is also a need to clearly communicate information about research quality in ways that are accessible to non-specialists. The broader educational research community has been notably unwilling to tackle the challenge of distinguishing high-quality research from quackery for educators and other non-specialists. It is long overdue for educational researchers to be forthcoming about the quality and reliability of interventions in ways that educational practitioners can understand and trust.

    Trust is the key. Whatever issues might surround the reporting of research results, educators are suspicious of people who have never been in the classroom. If a result or debunked academic fad (e.g. learning styles) doesn’t match their experience, they will be tempted to dismiss it. As education research becomes more rigorous, relevant, and understandable, we hope that trust will grow. Even simply categorizing research as either “replicated” or “unchallenged” would be a powerful initial filtering technique given the paucity of replication research in education. The alternative is to leave educators and policy-makers intellectually adrift, susceptible to whatever educational fad is popular at the moment.

    At the same time, we have to improve our understanding of how consumers of education research understand research claims. For instance, surveys reveal that even academic researchers commonly misinterpret the meaning of common concepts like statistical significance and confidence intervals. As a result, there is a pressing need to understand how those involved in education interpret (rightly or wrongly) common statistical ideas and decipher research claims.

    A Blueprint For Change

    So, how can the education technology community help address these issues?

    Despite the money and time spent conducting efficacy studies on their products, surveys reveal that research often plays a minor role in edtech consumer purchasing decisions. The opaqueness and perceived irrelevance of edtech research studies, which mirror the reporting conventions typically found in academia, no doubt contribute to this unfortunate fact. Educators and administrators rarely possess the research and statistical literacy to interpret the meaning and implications of research that focuses on claims of statistical significance and measures indirect proxies for learning. This might help explain why even well-meaning educators fall victim to “learning myths.”

    And when nearly every edtech company is amassing troves of research studies, all ostensibly supporting the efficacy of their products (with the quality and reliability of this research varying widely), it is understandable that edtech consumers treat them all with equal incredulity.

    So, if the current edtech emphasis on efficacy is going to amount to more than a passing fad and avoid devolving into a costly marketing scheme, edtech companies might start by taking the following actions:

    • Edtech researchers should interpret the practical significance and uncertainty associated with their study findings. The researchers conducting an experiment are best qualified to answer interpretive questions about the real-world value of their findings, and we should expect them to make an effort to do so.
    • As an industry, edtech needs to work toward adopting standardized ways to communicate the quality and strength of evidence as it relates to efficacy research. The What Works Clearinghouse has made important steps, but it is critical that relevant information is brought to the point of decision for educators. This work could resemble something like food labels for edtech products.
    • Researchers should increasingly use data visualizations to make complex findings more intuitive while making additional efforts to understand how non-specialists interpret and understand frequently reported statistical ideas.
    • Finally, researchers should employ direct measures of learning whenever possible rather than relying on misleading proxies (e.g., grades or student perceptions of learning) to ensure that the findings reflect what educators really care about. This also includes using validated assessments and focusing on long-term learning gains rather than short-term performance improvement.
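    One way to picture the "food label" idea from the list above is a small, machine-readable evidence summary attached to a product. This is only a sketch: the field names are illustrative assumptions, not an established standard, and the tier vocabulary is borrowed from ESSA's evidence levels.

```python
from dataclasses import dataclass

@dataclass
class EvidenceLabel:
    """A hypothetical 'food label' summarizing a product's efficacy evidence."""
    study_design: str   # e.g. "randomized controlled trial"
    essa_tier: str      # "strong" | "moderate" | "promising" | "rationale"
    effect_size: float  # standardized mean difference (Cohen's d)
    ci_low: float       # lower bound of the 95% confidence interval
    ci_high: float      # upper bound of the 95% confidence interval
    n_students: int     # total sample size behind the estimate
    replicated: bool    # has an independent team reproduced the result?

# Illustrative values only -- not drawn from any real product study.
label = EvidenceLabel(
    study_design="randomized controlled trial",
    essa_tier="strong",
    effect_size=0.18,
    ci_low=0.05,
    ci_high=0.31,
    n_students=12000,
    replicated=False,
)
print(f"d = {label.effect_size} [{label.ci_low}, {label.ci_high}], "
      f"tier: {label.essa_tier}, replicated: {label.replicated}")
```

    Even a summary this simple would carry the three things the article argues for: a practical estimate, its uncertainty, and a replication flag, all at the point of decision.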

    This series is produced in partnership with Pearson. EdSurge originally published this article on April 1, 2017, and it was re-posted here with permission.



    Can Edtech support - and even save - educational research?

    By Jay Lynch, PhD and Nathan Martin, Pearson

    There is a crisis engulfing the social sciences. What was thought to be known about psychology—based on published results and research—is being called into question by new findings and the efforts of groups like the Reproducibility Project. Both what we know and how we come to know it are under question. Long-institutionalized practices of scientific inquiry in the social sciences are being actively challenged, and proposals are being put forth for needed reforms.

    While these fields of academia burn with this discussion, education has remained largely untouched. But education is not immune to the problems endemic in fields like psychology and medicine. In fact, there’s a strong case that the problems emerging in other fields are even worse in educational research, where critical scrutiny, external or internal, has been lacking. A recent review of the top 100 education journals found that only 0.13% of published articles were replication studies. Education waits for its own crusading Brian Nosek to disrupt the canon of findings. Winter is coming.

    This should not be breaking news. Education research has long been criticized for its inability to generate a reliable and impactful evidence base. It has been derided for problematic statistical and methodological practices that hinder knowledge accumulation and encourage the adoption of unproven interventions; for its failure to communicate the uncertainty and relevance of research findings, like Value-Added Measures for teachers, in ways that practitioners can understand; and for struggling to influence how we develop, buy, and learn from the best practices and tools (see Mike Petrilli’s summation), at least in the US.

    Unfortunately, decades of withering criticism have done little to change the methods and incentives of educational research in ways necessary to improve the reliability and usefulness of findings. The research community appears to be in no rush to alter its well-trodden path—even if the path is one of continued irrelevance. Something must change if educational research is to meaningfully impact teaching and learning. Yet history suggests the impetus for this change is unlikely to originate from within academia.

    Can edtech improve the quality and usefulness of educational research? We may be biased (as colleagues at a large and scrutinized edtech company), but we aren’t naïve. We know it might sound farcical to suggest technology companies may play a critical role in improving the quality of education research, given almost weekly revelations about corporations engaging in concerted efforts to distort and shape research results to fit their interests. It’s shocking to read of efforts to warp public perception on the effects of sugar on heart disease or the effectiveness of antidepressants. It would be foolish not to view research conducted or paid for by corporations with a healthy degree of skepticism.

    Yet we believe there are signs of promise. The last few years have seen a movement of companies seeking to research and report on the efficacy of educational products. The movement has benefited from the leadership of the Office of Education Technology, the Gates Foundation, the Learning Assembly, Digital Promise, and countless others. Our own company has been on this road since 2013. (It’s not been easy!)

    These efforts represent opportunities to foment long-needed improvements in the practice of education research. A chance to redress education research’s most glaring weakness: its historical inability to appreciably impact the everyday activities of learning and teaching.

    Incentives for edtech companies to adopt better research practices already exist and there is early evidence of openness to change. Edtech companies possess a number of crucial advantages when it comes to conducting the types of research education desperately needs, including:

    • access to growing troves of digital learning data;
    • close partnerships with institutions, faculty, and students;
    • the resources necessary to conduct large and representative intervention studies;
    • in-house expertise in the diverse specialties (e.g., computer scientists, statisticians, research methodologists, educational psychologists, UX researchers, instructional designers, ed policy experts, etc.) that must increasingly collaborate to carry out more informative research;
    • a research audience consisting primarily of educators, students, and other non-specialists.

    The real worry with edtech companies’ nascent efforts to conduct efficacy research is not that they will fail to match the quality and objectivity typical of most educational research, but that they will fall into the same traps that currently plague such efforts. Rather than looking for what would be best for teachers and learners, entrepreneurs may focus on the wrong measures (p-values, for instance) that confuse people rather than enlighten them.

    If this growing edtech movement repeats the follies of the current paradigm of educational research, it will fail to seize the moment to adopt reforms that can significantly aid our efforts to understand how best to help people teach and learn. And we will miss an important opportunity to enact systemic changes in research practice across the edtech industry with the hope that academia follows suit.

    Our goal over the next three articles is to hold up a mirror, highlighting several crucial shortcomings of educational research: institutionalized practices that significantly limit its impact and informativeness.

    We argue that edtech is uniquely incentivized and positioned to realize long-needed research improvements through its efficacy efforts.

    Independent education research is a critical part of the learning world, but it needs improvement. It needs a new role model, its own George Washington Carver, a figure willing to test theories in the field, learn from them, and then communicate them back to practitioners. In particular, we will be focusing on three key ideas:

    Why ‘What Works’ Doesn’t: Education research needs to move beyond simply evaluating whether or not an effect exists; that is, whether an educational intervention ‘works’. The ubiquitous use of null hypothesis significance testing in educational research is an epistemic dead end. Instead, education researchers need to adopt more creative and flexible methods of data analysis, focus on identifying and explaining important variations hidden under mean scores, and devote themselves to developing robust theories capable of generating testable predictions that are refined and improved over time.

    Desperately Seeking Relevance: Education researchers are rarely expected to interpret the practical significance of their findings or report results in ways that are understandable to non-specialists making decisions based on their work. Although there has been progress in encouraging researchers to report standardized mean differences and correlation coefficients (i.e., effect sizes), this is not enough. In addition, researchers need to clearly communicate the importance of study findings within the context of alternative options and in relation to concrete benchmarks, openly acknowledge uncertainty and variation in their results, and refuse to be content measuring misleading proxies for what really matters.

    Embracing the Milieu: For research to meaningfully impact teaching and learning, it will need to expand beyond an emphasis on controlled intervention studies and prioritize the messy, real-life conditions facing teachers and students. More energy must be devoted to the creative and problem-solving work of translating research into useful and practical tools for practitioners, an intermediary function explicitly focused on inventing, exploring, and implementing research-based solutions that are responsive to the needs and constraints of everyday teaching.

    Ultimately education research is about more than just publication. It’s about improving the lives of students and teachers. We don’t claim to have the complete answers but, as we expand these key principles over coming weeks, we want to offer steps edtech companies can take to improve the quality and value of educational research. These are things we’ve learned and things we are still learning.

    This series is produced in partnership with Pearson. EdSurge originally published this article on January 6, 2017, and it was re-posted here with permission.