PreK-12 blog

Join the conversation and stay informed about the latest trends, perspectives, and successes in PreK-12 education.

Explore posts in other areas.

Higher Education | Pearson students | Professional

  • Games-based learning from "content" to "creation" (Episode 8)

    By Dr. Kristen DiCerbo, Vice President of Education Research, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Get caught up with episodes 1-7.  

    What initiatives are supporting teachers and students to co-create games together? In this episode of our Future Tech for Education podcast series, hear from educators, gaming companies, and researchers on the evolution of games-based learning from “content” to “creation”.

    Subscribe to the Future Tech for Education podcast on iTunes.


  • Student, software and teacher in "personalized learning" (Episode 7)

    By Dr. Kristen DiCerbo, Vice President of Education Research, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Get caught up with episodes 1-6.  

    In episode 7 of our Future Tech for Education podcast series, we explore: What is personalized learning? What is it not? Is there an evidence base yet for personalized learning and what does the research evidence show us about the contexts where personalized learning works best? What is the role of student, software and teacher in a personalized learning context? What questions should we be asking?

    Subscribe to the Future Tech for Education podcast on iTunes.


  • Analysis: Why school districts need a 'Consumer Reports' for ed tech

    By Bart Epstein, CEO, Jefferson Education Accelerator

    This is the sixth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, fourth, and fifth pieces.

    Economists define a collective action problem as one in which a collection of people (or organizations) each have an interest in seeing an action happen, but the cost of any one of them independently taking the action is so high that no action is taken — and the problem persists.

    The world of education swirls with collective action problems. But when it comes to understanding the efficacy of education technology products and services, the problem costs schools and districts billions of dollars, countless hours, and (sadly) missed opportunities to improve outcomes for students.

    Collectively, our nation’s K-12 schools and institutions of higher education spend more than $13 billion annually on education technology. And yet we have a dearth of data to inform our understanding of which products (or categories of products) are most likely to “work” within a particular school or classroom. As a result, we purchase products that often turn out to be a poor match for the needs of our schools or students. Badly matched and improperly implemented, too many fall short of their promise of enabling better teaching — and learning.

    It’s not that the field is devoid of research. Quantifying the efficacy of ed tech is a favorite topic for a growing cadre of education researchers and academics. Most major publishers and dozens of educational technology companies conduct research in the form of case studies and, in some cases, randomized controlled trials that showcase the potential outcomes for their products. The What Works Clearinghouse, now entering its 15th year, sets a gold standard for educational research but provides very little context about why the same product “works” in some places but not others. And efficacy is a topic that has now come to the forefront of our policy discourse, as debates at the state and local level center on the proper interpretation of ESSA’s mercurial “evidence” requirements. Set too high a bar, and we’ll artificially contract a market laden with potential. Miss the mark, and we’ll continue to let weak outcomes serve as evidence.

    The problem is that most research addresses only a tiny part of the ed tech efficacy equation. Variability among and between school cultures, priorities, preferences, professional development, and technical factors tends to affect the outcomes associated with education technology. A district leader once put it to me this way: “A bad intervention implemented well can produce far better outcomes than a good intervention implemented poorly.”

    After all, a reading intervention might work well in a lab or school — but if teachers in your school aren’t involved in the decision-making or procurement process, they may very well reject the strategy (sometimes with good reason). The Rubik’s Cube of master scheduling can also create variability in efficacy outcomes: Do your teachers have time to devote to high-quality implementation and troubleshooting, and then to make good use of the data for instructional purposes? At its best, ed tech is about more than tech-driven instruction. It’s about the shift toward the use of more real-time data to inform instructional strategy. In some ways, matching an ed tech product with the unique environment and needs of a school or district is a lot like matching a diet to a person’s habits, lifestyle, and preferences: Implementation rules. Matching matters. We know what “works.” But we know far less about what works where, when, and why.

    Thoughtful efforts are underway to help school and district leaders understand the variables likely to shape the impact of their ed tech investments and strategies. Organizations like LEAP Innovations are doing pioneering work to better understand and document the implementation environment, creating a platform for sharing experiences, matching schools with products, and establishing a common framework to inform practice — with or without technology. Not only are they on the front lines of addressing the ed tech implementation problem, but they are also on the leading edge of a new discipline of “implementation research.”

    Implementation research is rooted in the capture of detailed descriptions of the myriad variables that undergird your school’s success — or failure — with a particular product or approach. It’s about understanding school cultures and user personas. It’s about respecting and valuing the insights and perspectives of educators. And it’s about presenting insights in ways that enable your peers to know whether they should expect similar results in their school.

    Building a body of implementation research will involve hard work on an important problem. And it’s work that no one institution — or even a small group of institutions — can do alone. The good news is that solving this rather serious problem doesn’t require a grand political compromise or major new legislation. We can address it by engaging in collective action to formalize, standardize, and share information that hundreds of thousands of educators are already collecting in informal and non-standard ways.

    The first step in understanding and documenting a multiplicity of variables across a range of implementation environments is creating a common language to describe our schools and classrooms in terms that are relevant to the implementation of education technology. We’ll need to identify the factors that may explain why the same ed tech product can thrive in your school but flop in my school. That doesn’t mean that every educator in the country needs to document their ed tech implementations and impact. It doesn’t require the development of a scary database of student or educator data. We can start small, honing our list of variables and learning, over time, what sorts of factors enable or impede expected outcomes.

    The next step is translating those variables into metadata, and creating a common, interoperable language for incorporating the insights and experiences of individuals and organizations already doing similar work. We know that there is demand for information and insights rooted in the implementation experiences and lessons of peers. If we build an accessible and consistently organized system for understanding, collecting, and sharing information, we can chip away at the collective action problem by making it easier and less expensive to capture — and share — perspectives from across the field.
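
    To make the idea of interoperable implementation metadata more concrete, here is a minimal, purely illustrative sketch of what one shared record type could look like; none of the field names below come from the essay or from any existing standard, and they are assumptions made only for the sake of example:

    // Hypothetical sketch of a shared "implementation profile" record: a common,
    // machine-readable vocabulary for the kinds of context variables described above.
    // Every field name is an illustrative assumption, not an existing schema.
    interface ImplementationProfile {
      productName: string;                                // the ed tech product being described
      gradeBand: "PreK-5" | "6-8" | "9-12";               // where it was used
      deviceAccess: "1:1" | "shared-carts" | "lab-only";  // hardware environment
      teacherPdHours: number;                             // professional development invested
      teachersInvolvedInSelection: boolean;               // were educators part of procurement?
      weeklyMinutesOfUse: number;                         // actual use, not licensed seats
      observedOutcome: "exceeded" | "met" | "fell-short"; // outcome relative to expectations
      notes?: string;                                     // free-text context from educators
    }

    // Example of an entry a district might contribute to a shared repository.
    const exampleEntry: ImplementationProfile = {
      productName: "Example Reading Tool",
      gradeBand: "6-8",
      deviceAccess: "shared-carts",
      teacherPdHours: 6,
      teachersInvolvedInSelection: true,
      weeklyMinutesOfUse: 45,
      observedOutcome: "met",
      notes: "Worked best where the master schedule protected planning time.",
    };

    Even a small, agreed-upon set of fields like these would let schools and districts compare notes on the same product across very different implementation environments, which is the kind of apples-to-apples comparison the final step depends on.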

    The final step is making those shared insights accessible, facilitating a community of connected decision makers who work together both to call upon the system for information and to continue to make contributions to it. Think of it as a Consumer Reports for ed tech. We’ll use the data we’ve collected to hone a shared understanding of the implementation factors that matter — but we’ll also continue to rely upon the lived experiences of users to inform and grow the data set. Over time, we can achieve a shared way of thinking about a complex problem that has the potential to bring decision-making out of the dark and into a well-informed, community-supported environment.

    My work with colleagues at the first-ever EdTech Efficacy Research Symposium found that a growing number of providers, organizations, and associations are already working with educators to crowdsource efficacy data. And educators across the country are already doing this work in informal but valuable ways. Bringing these efforts together and creating a more standard approach to their collection and dissemination is a critical step toward improving decision-making. My observation from both research and discussion with the field is that the effort is not only deeply needed — it also already enjoys great support. If we take collective action, we can develop a democratic approach to improving the fit between ed tech tools and the educators who use them.

    This series is produced in partnership with Pearson. This article was originally published by The 74 on January 2, 2018, and is re-posted here with permission.


  • Imagine (a world of assessment without tests) (Episode 6)

    By Dr. Kristen DiCerbo, Vice President of Education Research, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Get caught up with episodes 1-5.

    How do we get beyond the tick-box or bubble filling exercise of exams and tests, whilst also measuring ‘progress’? In episode 6, we review ideas around ‘invisible assessment’ and question who benefits from ‘traditional’ and re-imagined forms of assessment, including games-based assessment. Can ‘tests’ be fun and should they be? How do we measure collaboration?

    Subscribe to the Future Tech for Education podcast on iTunes.


  • What can VR, AR & Simulation offer teaching & learning? Plus, strategies to avoid the technopanic (Episode 5)

    By Denis Hurley, Director of Future Technologies, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Get caught up here with episode 1, episode 2, episode 3, and episode 4.

    In the latest episode of our Future Tech for Education podcast series, we dip into the world of VR and mixed reality to uncover what high-cost, high-risk learning opportunities are being made more accessible to all by this technology.

    Plus, we wrap our co-curated mini series with practical suggestions for educators: be mindfully skeptical, resist fear, understand that you can start small and grow, and avoid technology for technology’s sake. This last one is harder than it sounds. Many new technologies wow us but do not have useful application to education. Learn how to make the most of technology.

    Subscribe to the Future Tech for Education podcast on iTunes.


  • Language learning as the test-bunny for educational future tech (Episode 4)

    By Denis Hurley, Director of Future Technologies, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Watch episode 1, episode 2, and episode 3.

    Technological change is exponential, which means it will affect our lives ever more quickly. Among the aspects of our lives undergoing change, language usage is one of those being altered most drastically. New technologies also create new opportunities for learning. How must we adjust, and what can we take advantage of?

    Subscribe to the Future Tech for Education podcast on iTunes.


  • Developing responsible and calm digital citizenship (Episode 3)

    By Denis Hurley, Director of Future Technologies, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia.

    Technology is a part of almost every aspect of our lives: buildings can be 3D printed, cars can drive themselves, and algorithms can direct our education.

    In the third episode of this series (catch episode 1 and episode 2), we explore how we react to, interact with, and create with the tools of technology. It’s essential that we understand how these tools function and what their implications are.

    We also look into the changing world of work and how we can best prepare.

    View on YouTube

    For more information, check out the Pearson Future Skills report.

    Subscribe to the Future Tech for Education podcast on iTunes.


  • What is AI & what has it got to do with me and my students? (Episode 2)

    By Denis Hurley, Director of Future Technologies, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Subscribe to the Future Tech for Education podcast on iTunes.

    Smarter digital tools, such as artificial intelligence (AI), offer up the promise of learning that is more personalized, inclusive and flexible. Many see the benefits of AI, while some are skeptical – but it’s crucial that we understand what these tools can do and how they work.

    In the first episode of this series, we talked about how to navigate the challenges and opportunities tech brings to the future of education. In episode two, we explore: What is AI and what is it not? What’s the difference between narrow AI, general AI, and super-intelligence? What type of AI is used now in education? What type do people fear? What questions might teachers want to use when thinking about AI in education?

    View on YouTube

    For more information, check out the report, Intelligence Unleashed: An argument for AI in Education.


  • What does future tech for education look like? (Episode 1)

    By Denis Hurley, Director of Future Technologies, Pearson

    This series, produced with The Edtech Podcast, explores the implications of and questions around future tech for education. Listen for insights from experts — including contrarians — from across industry, research, and academia. Subscribe to the Future Tech for Education podcast on iTunes.

    In our first episode of the Future Tech for Education podcast series, we put “future-forecasting” in perspective through a few useful but simple models. We talk about the history of the future and mindful skepticism, and we delve into the four foci of edtech — mixed reality, data science (AI), biosyncing, and human-machine relations — and their effect on education, teaching, and learning.

    View on YouTube 

    Employ mindful skepticism. This means not accepting a new technology as inherently good or evil, but trying to understand what the possibilities are: What can it be used for? How can I make the most of this technology?