A new report responds to The Future of Skills by exploring its implications for education systems and offers up practical solutions for higher education to more closely align with what the workforce needs.
We are excited to share a new report by Jobs for the Future (JFF) and Pearson that explores the changing world of work and provides recommendations for shifting from the traditional route to employment to a network of pathways that is flexible, dynamic, and ultimately serves more learners.
The first wave – access – was focused on getting more people to enter higher education. The second wave was focused on improving achievement – getting more students to earn degrees and certificates.
In this third wave, the worlds of education and work will converge, producing programs that ensure students are job-ready and primed for lifelong career success.
Adapting to the needs of both the learner and the employer, “demand-driven education takes account of the emerging global economy — technology-infused, gig-oriented, industry-driven — while also striving to ensure that new graduates and lifelong learners alike have the skills required to flourish.”
The report states, “as the future of work unfolds, what makes us human is what will make us employable.”
While technological literacy is critical, learners need educational experiences that cultivate skills, including fluency of ideas, originality, judgment, decision-making, and active learning, all supported by collaborative academic and career paths.
In a recent interview, Joe Deegan, co-author of the report and senior program manager at JFF, said, “although technology such as digital assessment might enable educators to make programs faster and more adaptive, the most significant change is one of mindset.”
The future is bright. And there’s a lot of good work to do through active collaboration and partnership to create rewarding postsecondary learning experiences that are responsive to our changing world and inclusive of all learners.
Pearson study reveals Generation Z and millennials’ learning preferences
Young people are the first to admit they can easily spend hours a day on the internet—whether it’s via a desktop computer, tablet, or smartphone. While they may be tech-savvy by nature, this innate connectivity poses the question of technology’s place as it relates to how Generation Z and millennials learn.
In a recent survey of 2,558 14- to 40-year-olds in the US, Pearson explored attitudes, preferences, and behaviors around technology in education, identifying some key similarities and differences between Gen Z and millennials.
While 39% of Gen Z prefer learning with a teacher leading the instruction, YouTube is also their #1 preferred learning method. And 47% of them spend three hours or more a day on the video platform. On the other hand, millennials need more flexibility—they are more likely to prefer self-directed learning supported by online courses with video lectures. And while they are known for being the “plugged in” generation, it’s apparent that plenty of millennials still prefer a good old-fashioned book to learn.
Regardless of their differences, the vast majority of both Gen Z and millennials are positive about the future of technology in education. 59% of Gen Z and 66% of millennials believe technology can transform the way college students learn in the future.
In 2016, distance education enrollment continued to grow for the 14th straight year.
This is the headline coming out of Grade Increase: Tracking Distance Education in the United States – a recent report released by Babson Survey Research Group (BSRG).
As stated in BSRG’s press release: “The growth of distance enrollments has been relentless,” said study co-author Julia E. Seaman, research director of the Babson Survey Research Group. “They have gone up when the economy was expanding, when the economy was shrinking, when overall enrollments were growing, and now when overall enrollments are shrinking.”
This is the sixth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, fourth, and fifth pieces.
Economists define a collective action problem as one in which a collection of people (or organizations) each have an interest in seeing an action happen, but the cost of any one of them independently taking the action is so high that no action is taken — and the problem persists.
The world of education swirls with collective action problems. But when it comes to understanding the efficacy of education technology products and services, it’s a problem that costs schools and districts billions of dollars, countless hours, and (sadly) missed opportunities to improve outcomes for students.
Collectively, our nation’s K-12 schools and institutions of higher education spend more than $13 billion annually on education technology. And yet we have a dearth of data to inform our understanding of which products (or categories of products) are most likely to “work” within a particular school or classroom. As a result, we purchase products that often turn out to be a poor match for the needs of our schools or students. Badly matched and improperly implemented, too many fall short of their promise of enabling better teaching — and learning.
It’s not that the field is devoid of research. Quantifying the efficacy of ed tech is a favorite topic for a growing cadre of education researchers and academics. Most major publishers and dozens of educational technology companies conduct research in the form of case studies and, in some cases, randomized control trials that showcase the potential outcomes for their products. The What Works Clearinghouse, now entering its 15th year, sets a gold standard for educational research but provides very little context about why the same product “works” in some places but not others. And efficacy is a topic that has now come to the forefront of our policy discourse, as debates at the state and local level center on the proper interpretation of ESSA’s mercurial “evidence” requirements. Set too high a bar, and we’ll artificially contract a market laden with potential. Miss the mark, and we’ll continue to let weak outcomes serve as evidence.
The problem is that most research addresses only a tiny part of the ed tech efficacy equation. Variability in school cultures, priorities, preferences, professional development, and technical factors tends to affect the outcomes associated with education technology. A district leader once put it to me this way: “a bad intervention implemented well can produce far better outcomes than a good intervention implemented poorly.”
After all, a reading intervention might work well in a lab or school — but if teachers in your school aren’t involved in the decision-making or procurement process, they may very well reject the strategy (sometimes with good reason). The Rubik’s Cube of master scheduling can also create variability in efficacy outcomes: Do your teachers have time to devote to high-quality implementation and troubleshooting, and then to make good use of the data for instructional purposes? At its best, ed tech is about more than tech-driven instruction. It’s about the shift toward the use of more real-time data to inform instructional strategy. In some ways, matching an ed tech product with the unique environment and needs of a school or district is a lot like matching a diet to a person’s habits, lifestyle, and preferences: Implementation rules. Matching matters. We know what “works.” But we know far less about what works where, when, and why.
Thoughtful efforts are underway to help school and district leaders understand the variables likely to shape the impact of their ed tech investments and strategies. Organizations like LEAP Innovations are doing pioneering work to better understand and document the implementation environment, creating a platform for sharing experiences, matching schools with products, and establishing a common framework to inform practice — with or without technology. Not only are they on the front lines of addressing the ed tech implementation problem, but they are also on the leading edge of a new discipline of “implementation research.”
Implementation research is rooted in the capture of detailed descriptions of the myriad variables that undergird your school’s success — or failure — with a particular product or approach. It’s about understanding school cultures and user personas. It’s about respecting and valuing the insights and perspectives of educators. And presenting insights in ways that enable your peers to know whether they should expect similar results in their school.
Building a body of implementation research will involve hard work on an important problem. And it’s work that no one institution — or even a small group of institutions — can do alone. The good news is that solving this rather serious problem doesn’t require a grand political compromise or major new legislation. We can address it by engaging in collective action to formalize, standardize, and share information that hundreds of thousands of educators are already collecting in informal and non-standard ways.
The first step in understanding and documenting a multiplicity of variables across a range of implementation environments is creating a common language to describe our schools and classrooms in terms that are relevant to the implementation of education technology. We’ll need to identify the factors that may explain why the same ed tech product can thrive in your school but flop in my school. That doesn’t mean that every educator in the country needs to document their ed tech implementations and impact. It doesn’t require the development of a scary database of student or educator data. We can start small, honing our list of variables and learning, over time, what sorts of factors enable or impede expected outcomes.
The next step is translating those variables into metadata, and creating a common, interoperable language for incorporating the insights and experiences of individuals and organizations already doing similar work. We know that there is demand for information and insights rooted in the implementation experiences and lessons of peers. If we build an accessible and consistently organized system for understanding, collecting, and sharing information, we can chip away at the collective action problem by making it easier and less expensive to capture — and share — perspectives from across the field.
The final step is addressing accessibility to shared insights, facilitating a community of connected decision makers who work together both to call upon the system for information and to continue to make contributions to it. Think of it as a Consumer Reports for ed tech. We’ll use the data we’ve collected to hone a shared understanding of the implementation factors that matter — but we’ll also continue to rely upon lived experiences of users to inform and grow the data set. Over time, we can achieve a shared way of thinking about a complex problem that has the potential to bring decision-making out of the dark and into a well-informed, community-supported environment.
We’ve heard from Emily Lai, Ph.D., twice before. Last year, she shared the story of her work in Jordan to improve learning opportunities for the children of Syrian refugees. More recently, she offered her tips for parents and teachers on helping students improve their information literacy.
The Components of Collaboration
“Most of us know what collaboration is, at least in its most basic sense,” says Emily Lai, Ph.D.
“It means working with others to achieve a common goal.”
Emily is Director of Formative Assessment and Feedback for Pearson. Her work is focused on improving the ways we assess learners’ knowledge and skills, and ensuring results support further learning and development.
“We’ve been reviewing the research, trying to figure out what we know about collaboration and how to support it. For example, we know that collaboration skills have an impact on how successful somebody is in all kinds of group situations—at school, on the job, and even working with others within a community to address social issues.”
Teaching Collaboration in the Classroom
Teaching collaboration skills in the classroom can be harder than expected, Emily says.
“When a teacher assigns a group project, oftentimes students will divide up the task into smaller pieces, work independently, and then just shove their parts together at the very end.”
“In that case, the teacher likely had good intentions to help develop collaboration skills in students. But it didn’t happen.”
Checking all the Boxes
“Tasks that are truly supportive of collaboration are not easy to create,” Emily says.
Digging deeper, Emily says there are three sub-components of successful collaboration:
Interpersonal communication – how you communicate verbally and non-verbally with your teammates.
Conflict resolution – your ability to acknowledge and resolve disagreements in a manner consistent with the best interest of the team.
Task management – your ability to set goals, organize tasks, track team progress against goals, and adjust the process along the way as needed.
Emily says she understands how difficult it can be for educators to check all three boxes.
Before beginning an assignment, Emily suggests teachers talk to students explicitly about collaboration: what makes a good team member versus what makes a difficult one, as well as strategies for working with others, sharing the load responsibly, and overcoming disagreements.
During group work, she says, observe students’ verbal and non-verbal behavior carefully and provide real-time feedback.
“Talk with them about how they’re making decisions as a group, sharing responsibility, and dealing with obstacles,” Emily says.
“In the classroom, it’s all about the combination of teaching collaboration skills explicitly, giving students opportunities to practice those skills, and providing feedback along the way so those skills continue to develop.”
“The research shows that students who develop strong collaboration skills get more out of those cooperative learning situations at school.”
Teaching Collaboration at Home
Emily is a mother of two daughters, 4 and 8.
At home, she says, there’s one part of collaboration that is especially valuable: conflict resolution.
“Most often, it comes in handy on movie nights.”
“The 8-year-old tends to gravitate towards movies that are a little too scary for the 4-year-old, and the 4-year-old tends to gravitate towards movies that are a little too babyish for the 8-year-old.”
“It would be easy to intervene and just pick a movie for them, but my husband and I do our best to stay out of it,” Emily says.
“We’ve established the procedure that they have to negotiate with each other and agree on a movie, and now they have a collaborative routine in place.”
“They know they get to watch a movie, and we know they’re learning along the way.”
“Taking turns in conversation is another big one for the 4-year-old,” Emily says.
“She doesn’t like to yield the floor, but it’s something we’re working on.”
“I know from the research that if my daughters learn these collaboration skills, they are more likely to be successful in their future careers.”
Sharing the Latest Research
This week, Emily and two of her colleagues are releasing a research paper entitled “Skills for Today: What We Know about Teaching and Assessing Collaboration.”
The paper will be jointly released by Pearson and The Partnership for 21st Century Learning (P21), a Washington, DC-based coalition that includes leaders from the business, education, and government sectors.
“We teamed up on this paper because we both believe collaboration is too important for college, career, and life to leave to chance,” Emily says.
It is the first in a four-part series on what is known about teaching and assessing “the Four Cs”: collaboration, critical thinking, creativity, and communication.
“P21 is the perfect partner for this effort,” Emily says.
“Our partnership signifies a joint commitment to helping stakeholders—educators, parents, policy-makers, and employers—understand what skills are needed to be successful today, and how to teach them effectively at any age.”
To download the full version of “Skills for Today: What We Know about Teaching and Assessing Collaboration,” click here.
Three executive summaries of the paper are also available:
This is the fifth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, and fourth pieces.
Education technology plays an essential role in our schools today. Whether the technology supports instructional intervention, personalized learning, or school administration, the successful application of that technology can dramatically improve productivity and student learning.
That said, too many school leaders lack the support they need to ensure that educational technology investment and related activities, strategies, or interventions are evidence-based and effective. This gap between opportunity and capacity is undermining the ability of school leaders to move the needle on educational equity and to execute on the goals of today’s K-16 policies. The education community needs to clearly understand this gap and take some immediate steps to close it.
The time is ripe
The new federal K-12 law, the Every Student Succeeds Act, elevates the importance of evidence-based practices in school purchasing and implementation. The use of the state’s allocation for school support and improvement illustrates the point. Schools that receive these funds must invest only in activities, strategies, or interventions that demonstrate a statistically significant effect on improving student outcomes or other relevant outcomes.
That determination must rely on research that is well designed and well implemented, as defined in the law. And once implementation begins, the U.S. Department of Education asks schools to focus on continuous improvement by collecting information about the implementation and making necessary changes to advance the goals of equity and educational opportunity for at-risk students. The law, in short, links compliance with evidence-based procurement and implementation that is guided by continuous improvement.
New instructional models in higher education will take root only if they rest on evidence-based practices. School leaders are under intense pressure to find ways to make programs more affordable, student-centered, and valuable to a rapidly changing labor market. Competency-based education (the unbundling of certificates and degrees into discrete skills and competencies) is one of the better-known responses to the challenge, but the model will likely stay experimental until there is more evidence of success.
“We are still just beginning to understand CBE,” Southern New Hampshire University President Paul LeBlanc said. “Project-based learning, authentic learning, well-done assessment rubrics — those are all good efforts, but do we have the evidence to pass muster with a real assessment expert? Almost none of higher ed would.”
It is easy to forget that the abundance of educational technology is a relatively new thing for schools and higher ed institutions. Back in the early 2000s, the question was how to make new educational technologies viable instructional and management tools. Education data was largely just a lagging measure used for school accountability and reporting.
Today, the data can provide strong, real-time signals that advance productivity through, for example, predictive analytics, personalized learning, curriculum curation and delivery, and direct investigation into the educational practices that work in specific contexts. The challenge is how to control and channel the deluge of bytes and information streaming from the estimated $25.4 billion K-16 education technology industry.
“It’s [now] too easy to go to a conference and load up at the buffet of innovations. That’s something we try hard not to do,” said Chad Ratliff, director of instructional programs for Virginia’s Albemarle County Schools. The information has to be filtered and vetted, which takes time and expertise.
Improving educational equity is the focus of ESSA, the Higher Education Act, and a key reason many school leaders chose to work in education. Moving the needle increasingly relies on evidence-based practices. As the Aspen Institute and Council of Chief State School Officers point out in a recent report, equity means — at the very least — that “every student has access to the resources and educational rigor they need at the right moment in their education despite race, gender, ethnicity, language, disability, family background, or family income.”
Embedded in this is the presumption that the activities, strategies, or interventions actually work for the populations they intend to benefit.
Educators cannot afford to invest in ineffective activities. At the federal K-12 level, President Donald Trump is proposing that, next year, Congress cut spending for the Education Department and eliminate many programs, including $2.3 billion for professional development programs, $1.2 billion for after-school funds, and the new Title IV grant that explicitly supports evidence-based and effective technology practices in our schools.
Higher education is also in a tight spot. The president seeks to cut spending in half for Federal Work-Study programs, eliminate Supplemental Educational Opportunity grants, and take nearly $4 billion from the Pell Grant surplus for other government spending. At the same time, Education Secretary Betsy DeVos is reviewing all programs to explore which can be eliminated, reduced, consolidated, or privatized.
These proposed cuts and reductions increase the urgency for school leaders to tell better stories about the ways they use the funds to improve educational opportunities and learning outcomes. And these stories are more compelling (and protected from budget politics) when they are built upon evidence.
Too few resources
While this is a critical time for evidence-based and effective program practices, here is the rub: The education sector is just beginning to build out this body of knowledge, so school leaders are often forging ahead without the kind of guidance and research they need to succeed.
The challenges are significant and evident throughout the education technology life cycle. For example, it is clear that evidence should influence procurement standards, but that is rarely the case. The issue of “procurement standards” is linked to cost thresholds and related competitive and transparent bidding requirements. It is seldom connected with measures of prior success and research related to implementation and program efficacy. Those types of standards are foreign to most state and local educational agencies, left to “innovative” educational agencies and organizations, like Digital Promise’s League of Innovative Schools, to explore.
Once the trials of implementation begin, school leaders and their vendors typically act without clear models of success and in isolation. There just are not good data on efficacy for most products and implementation practices, which means that leaders cannot avail themselves of models of success and networks of practical experience. Some schools and institutions with the financial wherewithal, like Virginia’s Albemarle and Fairfax County Public Schools, have created their own research process to produce their own evidence.
In Albemarle, for example, learning technology staff use test beds to evaluate solutions to instructional and enterprise needs. Staff spend time observing students and staff using new devices and cloud-based services. They seek feedback and performance data from both teachers and students in response to questions about the efficacy of the solution. They begin with questions like “If a service is designed to support literacy development, what variable are we attempting to affect? What information do we need to validate significant impact?” Yet, like the “innovators” of procurement standards, these are the exceptions to the rule.
And as schools make headway and immerse themselves in new technologies and services, the bytes of data and useful information multiply, but the time and capacity necessary to make them useful remains scarce. Most schools are not like Fairfax and Albemarle counties. They do not have the staff and experts required to parse the data and uncover meaningful insights into what’s working and what’s not. That kind of work and expertise isn’t something that can be simply layered onto existing responsibilities without overloading and possibly burning out staff.
“Many schools will have clear goals, a well-defined action plan that includes professional learning opportunities, mentoring, and a monitoring timeline,” said Chrisandra Richardson, a former associate superintendent for Montgomery County Public Schools in Maryland. “But too few schools know how to exercise a continuous improvement mindset, how to continuously ask: ‘Are we doing what we said we would do — and how do we course-correct if we are not?’ ”
Immediate next steps
So what needs to be done? Here are five specific issues that the education community (philanthropies, universities, vendors, and agencies) should rally around.
Set common standards for procurement. If every leader must reinvent the wheel when it comes to identifying key elements of the technology evaluation rubric, we will ensure we make little progress — and do so slowly. The sector should collectively secure consensus on the baseline procurement standards for evidence-based and research practices and provide them to leaders through free or open-source evaluative rubrics or “look fors” they can easily access and employ.
Make evidence-based practice a core skill for school leadership. Every few years, leaders in the field try to pin down exactly what core competencies every school leader should possess (or endeavor to develop). If we are to achieve a field in which leaders know what evidence-based decision-making looks like, we must incorporate it into professional standards and include it among our evaluative criteria.
Find and elevate exemplars. As Charles Duhigg points out in his recent best seller Smarter Faster Better, productive and effective people do their work with clear and frequently rehearsed mental models of how something should work. Without them, decision-making can become unmoored, wasteful, and sometimes even dangerous. Our school leaders need to know what successful evidence-based practices look like. We cannot anticipate that leader or educator training will incorporate good decision-making strategies around education technologies in the immediate future, so we should find alternative ways of showcasing these models.
Define “best practice” in technology evaluation and adoption. Rather than force every school leader to develop and struggle to find funds to support their own processes, we can develop models that can alleviate the need for schools to develop and invest in their own research and evidence departments. Not all school districts enjoy resources to investigate their own tools, but different contexts demand differing considerations. Best practices help leaders navigate variation within the confines of their resources. The Ed Tech RCE Coach is one example of a set of free, open-source tools available to help schools embed best practices in their decision-making.
Promote continuous evaluation and improvement. Decisions, even the best ones, have a shelf life. They may seem appropriate until evidence proves otherwise. But without a process to gather information and assess decision-making efficacy, it’s difficult to learn from any decisions (good or bad). Together, we should promote school practices that embrace continuous research and improvement practices within and across financial and program divisions to increase the likelihood of finding and keeping the best technologies.
The urgency to learn about and apply evidence to buying, using, and measuring success with ed tech is pressing, but the resources and protocols school leaders need to make it happen are scarce. These are conditions that position our school leaders for failure — unless the education community and its stakeholders get together to take some immediate actions.
This series is produced in partnership with Pearson. The 74 originally published this article on September 11th, 2017, and it was re-posted here with permission.
Question: What do we learn from a study that shows a technique or technology likely has affected an educational outcome?
Answer: Not nearly enough.
Despite widespread criticism, the field of education research continues to emphasize statistical significance—rejecting the conclusion that chance is a plausible explanation for an observed effect—while largely neglecting questions of precision and practical importance. Sure, a study may show that an intervention likely has an effect on learning, but so what? Even researchers’ recent efforts to estimate the size of an effect don’t answer key questions. What is the real-world impact on learners? How precisely is the effect estimated? Is the effect credible and reliable?
Unfortunately, education researchers are not expected to interpret the practical significance of their findings or acknowledge the often embarrassingly large degree of uncertainty associated with their observations. So, education research literature is filled with results that are almost always statistically significant but rarely informative.
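The gap between statistical and practical significance is easy to demonstrate. The sketch below (plain Python; all summary statistics are invented for illustration) computes Welch's t statistic and Cohen's d for the same half-point gain on a 100-point test. With 50 students per group the result is nowhere near significant; with 50,000 per group it clears any conventional threshold — yet the standardized effect size is a negligible 0.05 in both cases.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples, from summary stats."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / se

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized effect size using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented example: intervention group averages 70.5, control 70.0,
# both with SD = 10, on a 100-point test.
t_small = welch_t(70.5, 10, 50, 70.0, 10, 50)            # 50 per group
t_large = welch_t(70.5, 10, 50_000, 70.0, 10, 50_000)    # 50,000 per group
d = cohens_d(70.5, 10, 50, 70.0, 10, 50)

print(f"t (n=50/group):     {t_small:.2f}")   # far below the ~1.96 cutoff
print(f"t (n=50,000/group): {t_large:.2f}")   # "significant" at any alpha
print(f"Cohen's d:          {d:.2f}")         # the same tiny effect either way
```

The p-value shrinks as the sample grows, but the practical import of the intervention does not; reporting and interpreting the effect size is what keeps the result honest.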
Early evidence suggests that many edtech companies are following the same path. But we believe that they have the opportunity to change course and adopt more meaningful ways of interpreting and communicating research that will provide education decision makers with the information they need to help learners succeed.
Admitting What You Don’t Know
For educational research to be more meaningful, researchers will have to acknowledge its limits. Although published research often projects a sense of objectivity and certainty about study findings, accepting subjectivity and uncertainty is a critical element of the scientific process.
On the positive side, some researchers have begun to report what are known as standardized effect sizes, calculations that help compare outcomes in different groups on a common scale. But researchers rarely interpret the meaning of these figures. And the figures can be confusing. A ‘large’ effect actually may be quite small when compared to available alternatives or when factoring in the length of treatment, and a ‘small’ effect may be highly impactful because it is simple to implement or cumulative in nature.
Confused? Imagine the plight of a teacher trying to decide what products to use, based on evidence—an issue of increased importance since the Every Student Succeeds Act (ESSA) promotes the use of federal funds for certain programs, based upon evidence of effectiveness. The newly launched Evidence for ESSA admirably tries to help support that process, complementing the What Works Clearinghouse and pointing to programs that have been deemed “effective.” But when that teacher starts comparing products, say Math in Focus (effect size: +0.18) and Pirate Math (effect size: +0.37), the best choice isn’t readily apparent.
It’s also important to note that every intervention’s observed “effect” is associated with a quantifiable degree of uncertainty. By glossing over this fact, researchers risk promoting a false sense of precision and making it harder to craft useful data-driven solutions. While acknowledging uncertainty is likely to temper excitement about many research findings, in the end it will support more honest evaluations of an intervention’s likely effectiveness.
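To make that quantifiable uncertainty concrete, here is a rough sketch of a 95% confidence interval for a difference in group means. It uses the normal approximation (a real analysis would use a t distribution), and the data are invented.

```python
import statistics

def mean_diff_ci95(treatment, control):
    """Approximate 95% confidence interval for a difference in group means
    (normal approximation; a real analysis would use a t distribution)."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.variance(treatment) / len(treatment)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff - 1.96 * se, diff + 1.96 * se

# Invented scores from a small pilot: the point estimate suggests a ~4-point
# gain, but the interval is wide enough to include both zero and a 10-point gain.
low, high = mean_diff_ci95([74, 81, 69, 77, 85, 72], [70, 76, 66, 74, 79, 68])
```

Reporting only the point estimate hides exactly this spread, which is the false precision the paragraph above warns about.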
Communicate Better, Not Just More
In addition to faithfully describing the practical significance and uncertainty around a finding, there is also a need to clearly communicate information about research quality in ways that are accessible to non-specialists. The broader educational research community has been notably unwilling to tackle the challenge of distinguishing high-quality research from quackery for educators and other non-specialists. Educational researchers are long overdue in being forthcoming about the quality and reliability of interventions in ways that practitioners can understand and trust.
Trust is the key. Whatever issues might surround the reporting of research results, educators are suspicious of people who have never been in the classroom. If a result doesn’t match their experience, they will be tempted to dismiss it in favor of intuitive but debunked academic fads (e.g., learning styles). As education research becomes more rigorous, relevant, and understandable, we hope that trust will grow. Even simply categorizing research as either “replicated” or “unchallenged” would be a powerful initial filtering technique given the paucity of replication research in education. The alternative is to leave educators and policy-makers intellectually adrift, susceptible to whatever educational fad is popular at the moment.
At the same time, we have to improve our understanding of how consumers of education research understand research claims. For instance, surveys reveal that even academic researchers commonly misinterpret the meaning of common concepts like statistical significance and confidence intervals. As a result, there is a pressing need to understand how those involved in education interpret (rightly or wrongly) common statistical ideas and decipher research claims.
A Blueprint For Change
So, how can the education technology community help address these issues?
Despite the money and time edtech companies spend conducting efficacy studies on their products, surveys reveal that research often plays a minor role in edtech consumer purchasing decisions. The opaqueness and perceived irrelevance of edtech research studies, which mirror the reporting conventions typically found in academia, no doubt contribute to this unfortunate fact. Educators and administrators rarely possess the research and statistical literacy to interpret the meaning and implications of research focused on claims of statistical significance and built on indirect proxies for learning. This might help explain why even well-meaning educators fall victim to “learning myths.”
And when nearly every edtech company is amassing troves of research studies, all ostensibly supporting the efficacy of their products (with the quality and reliability of this research varying widely), it is understandable that edtech consumers treat them all with equal incredulity.
So, if the current edtech emphasis on efficacy is going to amount to more than a passing fad and avoid devolving into a costly marketing scheme, edtech companies might start by taking the following actions:
Edtech researchers should interpret the practical significance and uncertainty associated with their study findings. The researchers conducting an experiment are best qualified to answer interpretive questions around the real-world value of study findings and we should expect that they make an effort to do so.
As an industry, edtech needs to work toward adopting standardized ways to communicate the quality and strength of evidence as it relates to efficacy research. The What Works Clearinghouse has made important steps, but it is critical that relevant information is brought to the point of decision for educators. This work could resemble something like food labels for edtech products.
Researchers should increasingly use data visualizations to make complex findings more intuitive while making additional efforts to understand how non-specialists interpret and understand frequently reported statistical ideas.
Finally, researchers should employ direct measures of learning whenever possible rather than relying on misleading proxies (e.g., grades or student perceptions of learning) to ensure that the findings reflect what educators really care about. This also includes using validated assessments and focusing on long-term learning gains rather than short-term performance improvement.
This series is produced in partnership with Pearson. EdSurge originally published this article on April 1, 2017, and it was re-posted here with permission.
There is a crisis engulfing the social sciences. What was thought to be known about psychology—based on published results and research—is being called into question by new findings and by efforts like the Reproducibility Project. What we know is under question, and so is how we come to know it. Long-institutionalized practices of scientific inquiry in the social sciences are being actively questioned, and proposals for needed reforms put forth.
While the fields of academia burn with this discussion, education research has remained largely untouched. But education is not immune to the problems endemic in fields like psychology and medicine. In fact, there’s a strong case that the problems emerging in other fields are even worse in educational research, where critical scrutiny, external or internal, has been lacking. A recent review of the top 100 education journals found that only 0.13% of published articles were replication studies. Education waits for its own crusading Brian Nosek to disrupt the canon of findings. Winter is coming.
This should not be breaking news. Education research has long been criticized for its inability to generate a reliable and impactful evidence base. It has been derided for problematic statistical and methodological practices that hinder knowledge accumulation and encourage the adoption of unproven interventions. For its failure to communicate the uncertainty and relevance associated with research findings, like Value-Added Measures for teachers, in ways that practitioners can understand. And for struggling to shape educational habits (at least in the US) and how we develop, buy, and learn from the best practices and tools (see Mike Petrilli’s summation).
Unfortunately, decades of withering criticism have done little to change the methods and incentives of educational research in ways necessary to improve the reliability and usefulness of findings. The research community appears to be in no rush to alter its well-trodden path—even if the path is one of continued irrelevance. Something must change if educational research is to meaningfully impact teaching and learning. Yet history suggests the impetus for this change is unlikely to originate from within academia.
Can edtech improve the quality and usefulness of educational research? We may be biased (as colleagues at a large and scrutinized edtech company), but we aren’t naïve. We know it might sound farcical to suggest technology companies may play a critical role in improving the quality of education research, given almost weekly revelations about corporations engaging in concerted efforts to distort and shape research results to fit their interests. It’s shocking to read of efforts to warp public perception of the effects of sugar on heart disease or the effectiveness of antidepressants. It would be foolish not to view research conducted or paid for by corporations with a healthy degree of skepticism.
Yet edtech’s efficacy efforts also represent an opportunity to foment long-needed improvements in the practice of education research: a chance to redress education research’s most glaring weakness, its historical inability to appreciably impact the everyday activities of learning and teaching.
Incentives for edtech companies to adopt better research practices already exist and there is early evidence of openness to change. Edtech companies possess a number of crucial advantages when it comes to conducting the types of research education desperately needs, including:
access to growing troves of digital learning data;
close partnerships with institutions, faculty, and students;
the resources necessary to conduct large and representative intervention studies;
in-house expertise in the diverse specialties (e.g., computer scientists, statisticians, research methodologists, educational psychologists, UX researchers, instructional designers, ed policy experts, etc.) that must increasingly collaborate to carry out more informative research;
a research audience consisting primarily of educators, students, and other non-specialists.
The real worry with edtech companies’ nascent efforts to conduct efficacy research is not that they will fail to match the quality and objectivity typical of most educational research, but that they will fall into the same traps that currently plague such efforts. Rather than looking for what would be best for teachers and learners, entrepreneurs may focus on the wrong measures (p-values, for instance) that confuse people rather than enlighten them.
If this growing edtech movement repeats the follies of the current paradigm of educational research, it will fail to seize the moment to adopt reforms that can significantly aid our efforts to understand how best to help people teach and learn. And we will miss an important opportunity to enact systemic changes in research practice across the edtech industry with the hope that academia follows suit.
Our goal over the next three articles is to hold a mirror up, highlighting several crucial shortcomings of educational research. These institutionalized practices significantly limit its impact and informativeness.
We argue that edtech is uniquely incentivized and positioned to realize long-needed research improvements through its efficacy efforts.
Independent education research is a critical part of the learning world, but it needs improvement. It needs a new role model, its own George Washington Carver, a figure willing to test theories in the field, learn from them, and then communicate them back to practitioners. In particular, we will be focusing on three key ideas:
Why ‘What Works’ Doesn’t: Education research needs to move beyond simply evaluating whether or not an effect exists; that is, whether an educational intervention ‘works’. The ubiquitous use of null hypothesis significance testing in educational research is an epistemic dead end. Instead, education researchers need to adopt more creative and flexible methods of data analysis, focus on identifying and explaining important variations hidden under mean scores, and devote themselves to developing robust theories capable of generating testable predictions that are refined and improved over time.
Desperately Seeking Relevance: Education researchers are rarely expected to interpret the practical significance of their findings or report results in ways that are understandable to non-specialists making decisions based on their work. Although there has been progress in encouraging researchers to report standardized mean differences and correlation coefficients (i.e., effect sizes), this is not enough. In addition, researchers need to clearly communicate the importance of study findings within the context of alternative options and in relation to concrete benchmarks, openly acknowledge uncertainty and variation in their results, and refuse to be content measuring misleading proxies for what really matters.
Embracing the Milieu: For research to meaningfully impact teaching and learning, it will need to expand beyond an emphasis on controlled intervention studies and prioritize the messy, real-life conditions facing teachers and students. More energy must be devoted to the creative and problem-solving work of translating research into useful and practical tools for practitioners, an intermediary function explicitly focused on inventing, exploring, and implementing research-based solutions that are responsive to the needs and constraints of everyday teaching.
Ultimately education research is about more than just publication. It’s about improving the lives of students and teachers. We don’t claim to have the complete answers, but as we expand on these key principles over the coming weeks, we want to offer steps edtech companies can take to improve the quality and value of educational research. These are things we’ve learned and things we are still learning.
This series is produced in partnership with Pearson. EdSurge originally published this article on January 6, 2017, and it was re-posted here with permission.
Answering that question through null hypothesis significance testing (NHST), which explores whether an intervention or product has an effect on the average outcome, undermines the ability to make sustained progress in helping students learn. It provides little useful information and fails miserably as a method for accumulating knowledge about learning and teaching. For the sake of efficiency and learning gains, edtech companies need to understand the limits of this practice and adopt a more progressive research agenda that yields actionable data on which to build useful products.
How does NHST look in action? A typical research question in education might be whether average test scores differ for students who use a new math game and those who don’t. Applying NHST, a researcher would assess whether a positive—i.e. non-zero—difference in scores is significant enough to conclude that the game has had an impact, or, in other words, that it ‘works’. Left unanswered is why and for whom.
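To make the mechanics concrete, here is a minimal sketch of such a test on invented math-game scores. It computes a Welch-style two-sample t statistic and approximates the two-sided p-value with the normal distribution; a real study would use a proper t distribution and software, but the logic is the same.

```python
import math
import statistics

def welch_t_p(a, b):
    """Welch-style two-sample t statistic with a two-sided p-value from the
    normal approximation (adequate for illustration only)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
    return t, p

# Invented test scores: students who used the math game vs. those who didn't.
game = [75, 82, 68, 90, 71, 77, 85, 73, 79, 80]
no_game = [70, 74, 65, 88, 69, 72, 78, 71, 75, 73]
t, p = welch_t_p(game, no_game)
```

All this machinery answers a single yes/no question about whether the average gap is distinguishable from zero. It says nothing about why the game helped or which students it helped.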
This approach pervades education research. It is reflected in the U.S. government-supported initiative to aggregate and evaluate educational research, aptly named the What Works Clearinghouse, and frequently serves as a litmus test for publication worthiness in education journals. Yet it has been subjected to scathing criticism almost since its inception, criticism that centers on two issues.
False Positives And Other Pitfalls
First, obtaining statistical evidence of an effect is shockingly easy in experimental research. One of the emerging realizations from the current crisis in psychology is that rather than serving as a responsible gatekeeper ensuring the trustworthiness of published findings, reliance on statistical significance has had the opposite effect of creating a literature filled with false positives, overestimated effect sizes, and grossly underpowered research designs.
Assuming a proposed intervention involves students doing virtually anything more cognitively challenging than passively listening to lecturing-as-usual (the typical straw-man control in education research), a researcher is very likely to find a positive difference as long as the sample size is large enough. Showing that an educational intervention has a positive effect is quite a feeble hurdle to overcome. It isn’t at all shocking, therefore, that in education almost everything seems to work.
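The arithmetic behind "large enough" is easy to demonstrate: hold a trivially small mean difference fixed and the p-value collapses below 0.05 purely as enrollment grows. The numbers below are illustrative (half a point on a 100-point test, standard deviation 10), and the p-value again uses the normal approximation.

```python
import math

def p_for_fixed_gap(mean_diff, sd, n_per_group):
    """Two-sided p-value for a fixed observed mean difference between two
    equal-sized groups (normal approximation)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z = abs(mean_diff) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# The same trivially small gap goes from "no effect" to "significant"
# purely by enrolling more students per group.
p_values = {n: p_for_fixed_gap(0.5, 10.0, n) for n in (50, 500, 5000)}
```

Nothing about the intervention changed between the first and last row of that dictionary; only the sample size did.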
But even if these methodological concerns with NHST were addressed, there is a second serious flaw undermining the NHST framework upon which most experimental educational research rests.
Null hypothesis significance testing is an epistemic dead end. It obviates the need for researchers to put forward testable theoretical models that predict and explain the effects interventions have. In fact, the only hypothesis evaluated within the framework of NHST is a caricature, a hypothesis the researcher doesn’t believe—that an intervention has zero effect. A researcher’s own hypothesis is never directly tested. And yet with almost universal aplomb, education researchers falsely conclude that a rejection of the null hypothesis counts as strong evidence in favor of their preferred theory.
As a result, NHST encourages and preserves hypotheses so vague, so lacking in predictive power and theoretical content, as to be nearly useless. As researchers in psychology are realizing, even well-regarded theories, ostensibly supported by hundreds of randomized controlled experiments, can start to evaporate under scrutiny because reliance on null hypothesis significance testing means a theory is never really tested at all. As long as educational research continues to rely on testing the null hypothesis of no difference as a universal foil for establishing whether an intervention or product ‘works,’ it will fail to improve our understanding of how to help students learn.
As analysts Michael Horn and Julia Freeland have noted, this dominant paradigm of educational research is woefully incomplete and must change if we are going to make progress in our understanding of how to help students learn:
“An effective research agenda moves beyond merely identifying correlations of what works on average to articulate and test theories about how and why certain educational interventions work in different circumstances for different students.”
Yet for academic researchers concerned primarily with producing publishable evidence of interventions that ‘work,’ the vapid nature of NHST has not been recognized as a serious issue. And because the NHST approach to educational research is relatively straightforward and safe to conduct (researchers have an excellent chance of getting the answer they want), a quick perusal of the efficacy pages at leading edtech companies shows that it holds as the dominant paradigm in edtech.
Are there, however, reasons to think edtech companies might be incentivized to abandon the current NHST paradigm? We think there are.
What About The Data You’re Not Capturing?
Consider a product owner at an edtech company. Although evidence that an educational product has a positive effect is great for producing compelling marketing brochures, it provides little information regarding why a product works, how well it works in different circumstances, or really any guidance for how to make it more effective.
Are some product features useful and others not? Are some features actually detrimental to learners but masked by more effective elements?
Is the product more or less effective for different types of learners or levels of prior expertise?
What elements should be added, left alone or removed in future versions of the product?
Testing whether a product works doesn’t provide answers to these questions. In fact, despite all the time, money, and resources spent conducting experimental research, a company actually learns very little about its product’s efficacy when it is evaluated using NHST. There is minimal ability to build on research of this sort. So product research becomes a game of efficacy roulette, with the company just hoping that findings show a positive effect each time it spins the NHST wheel. Companies truly committed to innovation and improving the effectiveness of their products should find this a very bitter pill to swallow.
A Blueprint For Change
We suggest edtech companies can vastly improve both their own product research as well as our understanding of how to help students learn by modifying their approach to research in several ways.
Recognize the limited information NHST can provide. As the primary statistical framework for moving our understanding of learning and teaching forward, NHST is misapplied because it ultimately tells us nothing that we actually want to know. Furthermore, it contributes to the proliferation of spurious findings in education by encouraging questionable research practices and the reporting of overestimated intervention effects.
Instead of relying on NHST, edtech researchers should focus on putting forward theoretically informed predictions and then designing experiments to test them against meaningful alternatives. Rather than rejecting the uninteresting hypothesis of “no-difference,” the primary goal of edtech research should be to improve our understanding of the impact that interventions have, and the best way to do this is to compare models that compete to describe observations that arise from experimentation.
Rather than dichotomous judgments about whether an intervention works on average, greater evaluative emphasis should be devoted to exploring the impact of interventions across subsets of students and conditions. No intervention works equally well for every student and it’s the creative and imaginative work of trying to understand why and where an intervention fails or succeeds that is most valuable.
Returning to our original example, rather than relying on NHST to evaluate a math game, a company will learn more by trying to improve its estimates and measurements of important variables, looking beneath group mean differences to explore why the game worked better or worse for sub-groups of students, and directly testing competing theoretical mechanisms proposed to explain the game’s influence on learner achievement. It is in this way that practical, problem-solving tools will develop and evolve to improve the lives of all learners.
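A minimal sketch of that kind of subgroup exploration follows, using invented per-student records. The grouping variable (prior knowledge) and the numbers are illustrative assumptions, not findings.

```python
import statistics
from collections import defaultdict

# Invented per-student records: (prior_knowledge, used_game, test_score).
records = [
    ("low", True, 62), ("low", True, 65), ("low", False, 55), ("low", False, 58),
    ("high", True, 88), ("high", True, 86), ("high", False, 87), ("high", False, 85),
]

# Estimate the game's effect separately within each prior-knowledge subgroup
# instead of collapsing everything into a single on-average verdict.
by_group = defaultdict(lambda: defaultdict(list))
for prior, used_game, score in records:
    by_group[prior][used_game].append(score)

subgroup_effects = {
    prior: statistics.mean(groups[True]) - statistics.mean(groups[False])
    for prior, groups in by_group.items()
}
```

With these invented numbers the game appears to help novices far more than experts (a 7-point gap versus a 1-point gap), exactly the kind of pattern a single averaged yes/no verdict would bury.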
This series is produced in partnership with Pearson. EdSurge originally published this article on February 12, 2017, and it was re-posted here with permission.
This is the second in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Read the first piece here.
But as curricula and learning tools are prepared for rigorous evaluation, we should think about how existing research on teaching and learning has informed their design. Building a movement around research and impact must include advocating for products based on learning research. Otherwise, we are essentially taking a “wait and hope” strategy to development: wait until we have something built and hope it works.
When we make a meal, we want to at least have a theory about what each ingredient we include will contribute to the overall meal. How much salt do we put in to flavor it perfectly? When do we add it in? Similarly, when creating a curriculum or technology tool, we should be thinking about how each element impacts and optimizes overall learning. For example, how much and when do we add in a review of already-learned material to ensure memory retention? For this, we can turn to learning science as a guide.
We know a lot about how people learn. Our understanding comes from fields as varied as cognitive and educational psychology, motivational psychology, neuroscience, behavioral economics, and computer science. There are research findings that have been replicated repeatedly across dozens of studies. If we want to create educational technology tools that ultimately demonstrate efficacy, these learning science findings should serve as the foundation, integrating the insights from decades of research into how people learn and how teachers teach into product design from the beginning.
Spaced practice: We know that extending practice over time is better than cramming all practice into the few days before an exam. Spaced practice strengthens information retention and keeps it fresh over time, interrupting the “forgetting curve.” Implementing spaced practice could be as simple as planning out review time. Technology can help implement spaced practice in at least two ways: 1) prompting students to make their own study calendars and 2) proactively presenting already-learned information for periodic review.
Retrieval practice: What should that practice look like? Rather than rereading or reading and highlighting, we know it is better for students to actually retrieve the information from memory because retrieving the information actually changes the nature of the memory for the information. It strengthens and solidifies the learning, as well as provides more paths to access the learning when you need it. Learners creating flashcards have known about this strategy for a long time. RetrievalPractice.org offers useful information and helpful applications building on this important principle. There is a potential danger point here for designers not familiar with learning literature. Since multiple-choice activities are easier to score with technology, it is tempting to create these kinds of easy questions for retrieval practice. However, learning will be stronger if students practice freely recalling the information rather than simply recognizing the answer from choices.
Elaboration: Taking new information and expanding on it, linking it to other known information and personal experience, is another way to improve memory for new concepts. Linking new information to information that is already known can make it easy to recall later. In addition, simply expanding on information and explaining it in different ways can make retrieval easier. One way to practice this is to take main ideas and ask how they work and why. Another method is to have students draw or fill in concept maps, visually linking ideas and experiences together. There are a number of online tools that have been developed for creating concept maps, and current research is focusing on how to provide automated feedback on them.
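The spaced-practice idea above can be sketched as a simple scheduler. Imagine a hypothetical tool that proactively resurfaces already-learned material at expanding intervals; the interval lengths below are illustrative placeholders, not a validated spacing schedule.

```python
from datetime import date, timedelta

def review_schedule(first_study, gaps=(1, 3, 7, 14, 30)):
    """Expanding review dates after an initial study session. The gap lengths
    are illustrative placeholders, not a validated spacing schedule."""
    day = first_study
    schedule = []
    for gap in gaps:
        day += timedelta(days=gap)
        schedule.append(day)
    return schedule

# A student who studies a topic on September 1 would be prompted to review it
# at progressively longer intervals over the following two months.
reviews = review_schedule(date(2017, 9, 1))
```

A real product would adapt the gaps to each learner's retrieval performance rather than using a fixed sequence, but even this naive version interrupts the forgetting curve in a way cramming does not.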
So how many educational technology products actually incorporate these known practices? How do they encourage students to engage in these activities in a systematic way?
Existing Research on Instructional Uses of Technology
For example, there is a solid research base on how to design activities that introduce new material prior to formal instruction. It suggests that students should initially be given a relatively difficult, open-ended problem that they are asked to solve. Students, of course, tend to struggle with this activity, with almost no students able to generate the “correct” approach. However, the effort students spend in this activity has been shown to build a better foundation for future instruction to build on as students have a better understanding of the problem to be solved (e.g., Wiedmann, Leach, Rummel & Wiley, 2012; Belenky & Nokes-Malach, 2012). It is clearly important that this type of activity be presented to students as a chance to explore and that failure is accepted, expected, and encouraged. In contrast, an activity meant to be part of practice following direct instruction would likely include more step-by-step feedback and hints. So, if someone wants to design activities to be used prior to instruction, they might 1) select a fundamental idea from a lesson, 2) create multiple cases for which students must find an all-encompassing rule, and 3) situate those cases in an engaging scenario.
Schwartz of Stanford University tested this idea with students learning about ratios — without telling them they were learning about ratios. Three cases with different ratios were created based on the number of objects in a space. This was translated into the number of clowns in different-sized vehicles, and students were asked to develop a “crowded clowns index” to measure how crowded the clowns were in each vehicle. Students were not explicitly told about ratios, but had to uncover the concept themselves.
Product developers should consider research like this when designing their ed tech tools, as well as when they’re devising professional development programs for educators who will use those technologies in the classroom.
Product makers must consider these questions when designing ed tech: Will the activity the technology facilitates be done before direct instruction? Will it be core instruction? Will it be used to review? How much professional development needs to be provided to teachers to ensure the fidelity of implementation at scale?
Too often, designers think there is a singular answer to this series of questions: “Yes.” But in trying to be everything, we are likely to end up being nothing. Existing research on instructional uses of technology can help developers choose the best approach and design for effective implementation.
With this research as a foundation, though, we still have to cook the dish and taste it. Ultimately, applying learning science at scale to real-world learning situations is an engineering activity. It may require repeated iterations and ongoing measurement to get the mix of ingredients “just right” for a given audience, or a given challenging learning outcome. We need to make sure to carefully understand and tweak our learning environments, using good piloting techniques to find out both whether our learners and teachers can actually execute what we intend as we intended it (Is the learning intervention usable? Are teachers and students able to implement it as intended?), and whether the intervention gives us the learning benefits we hoped for (effectiveness).
The key is that research should be informing development from the very beginning of an idea for a product, and an evidence-based “learning engineering” orientation should continue to be used to monitor and iterate changes to optimize impact. If we are building from a foundation of research, we are greatly increasing the probability that, when we get to those iterated and controlled trials after the product is created, we will in fact see improvements over time in learning outcomes.
This is the first in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator.
To improve education in America, we must improve how we develop and use education technology.
Teachers and students are increasingly using digital tools and platforms to support learning inside and outside the classroom every day. There are 3.6 million teachers using ed tech, and approximately one in four college students take online courses — four times as many as a decade earlier. Technology will impact the 74 million children currently under the age of 18 as they progress through the pre-K–12 education system. The key question is: What can we do to make sure that the education technology being developed and deployed today fits the needs of 21st-century learners?
Our teachers and students deserve high-quality tools that provide evidence of student learning, and that provide the right kind of evidence — evidence that can tell us whether the tool is influencing the intended learning outcomes.
Evidence and efficacy can no longer be someone else’s problem to be solved at some uncertain point in the future. The stakes are too high. We all have a role to play in ensuring that the money spent in ed tech (estimated at $13.2 billion in 2016 for K-12) lives up to the promise of enabling more educators, schools, and colleges to genuinely improve outcomes for students and help close persistent equity gaps.
Still, education is complex. Regardless of the quality of a learning tool, there will be no singular, foolproof ed tech solution that will work for every student and teacher across the nation. Context matters. Implementation matters. Technology will always only be one element of an instructional intervention, which will also include instructor practices, student experiences, and multiple other contextual factors.
Figuring out what actually works and why it works requires intentional planning, dedicated professional development, thoughtful implementation, and appropriate evaluation. This all occurs within a context of inconsistent and shifting incentives and, in the U.S., involves a particularly complex ecosystem of stakeholders. And unfortunately, despite a deep and vested interest in improving the system, the current ecosystem is often better at supporting the status quo than at introducing a potentially better-suited learning tool.
That’s the challenge to be taken up by the EdTech Efficacy Research Symposium in Washington, D.C., this week, and the work underway as part of the initiative convened by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. People like us rarely have the opportunity to collaborate, but this issue is too important to go it alone.
Over the past six months, 10 working groups consisting of approximately 150 people spent valuable hours together learning about the challenges associated with improving efficacy and exploring opportunities to address these challenges. We’ve looked at issues such as how ed tech decisions are made in K-12 and higher education, what philanthropy can do to encourage more evidence-based decision-making, as well as what will be necessary to make the focus on efficacy and transparency of outcomes core to how ed tech companies operate.
Over the next six weeks, we’ll explore these themes here, sharing findings and recommendations from the working groups. Our hope is to stimulate not just discussion but also practical action and concrete progress.
Action and progress might look like new ways to use research in decision-making, such as the informational site Evidence for ESSA, or tools that make it easier for education researchers to connect with teachers, districts, and ed tech companies, like the forthcoming National Education Researcher Database. Collaboration is critical to improving how we use research in ed tech, but it’s not easy. Building a common framework takes time. Acting on that framework is harder.
So, as a starting point, here are three broader issues that we’ve learned about efficacy and evidence from our work so far.
Everyone wants research and implementation analysis done, but nobody wants to pay more for it
We know it’s not realistic to expect that the adoption of each ed tech product or curricular innovation will be backed up by a randomized control trial.
Investors are reluctant to fund these studies, and schools and developers rarely want to pick up the tab for expensive research. When Richard Culatta and Katrina Stevens were still at the U.S. Department of Education’s Office of Educational Technology, they pointed out that “it wouldn’t be economically feasible for most app creators (or schools) to spend $250k (a low price tag for traditional educational research) to evaluate the effectiveness of an app that only cost a total of $50k to build.”
We could spend more efficiently, channeling the 15,000 small pilots and decisions already underway into new work and new insights without spending more money. This could look like a few well-designed initiatives to gather and share relevant information about implementations and efficacy. Critically, we’ll need to find a sustainability model for that type of rigorous evaluation to ensure it becomes a key feature of how adoption decisions are made.
We need to recognize that evidence exists on a continuum
Different types of evidence can support different purposes. What matters is that each decision is supported by an appropriate level of evidence. A guide by Mathematica provides a useful reference for educators on the different types of evidence and how to interpret them. Educators would be wise to weigh the scale and cost of a decision and choose the appropriate type of evidence accordingly.
It’s important to remember that researchers and philanthropists may use education research for different purposes than would a college, university system, or district. Academic researchers may be looking to identify causal connections, learning gains, or retention rates, while a district is often focused on a specific context and implementation (what works for schools similar to mine).
When possible, traditional randomized control trials provide useful information, but they’re often not affordable, feasible, or even necessarily appropriate. For example, many districts, schools, or colleges are not accustomed to or well versed in undertaking this type of research themselves.
It’s easy to blame other actors for the current lack of evidence-driven decisions in education
Everyone we spoke to agrees that decisions about ed tech should be made on the basis of merit and fit, not marketing or spin. But nearly everyone believes the problem is caused by other actors in the ecosystem, which means that progress will require hard work and coordination.
For example, investors often don’t screen their investments for efficacy, nor do they push their portfolio companies to undertake sufficient research. Not surprisingly, this is because such research is costly and doesn’t necessarily drive market growth. It’s also because market demand is not driven by evidence: selection choices for tools and technologies are rarely driven by learning impact or efficacy research. That may be shifting slowly, but much more needs to be done.
Entrepreneurs and organizations whose products are of the highest quality are frustrated that schools are too often swayed by their competitors’ flashy sales tactics. Researchers feel that their work is underappreciated and underutilized. Educators feel overwhelmed by volume and claims, and are frustrated by a lack of independent information and professional support. We have multiple moving pieces that must be brought together in order to improve our system.
Ensuring that ed tech investments truly help close achievement gaps and expand student opportunity will require engagement and commitments from a disparate group of stakeholders to help invent a new normal so that our collective progress is directional and meaningful. To make progress on this, we must bring the conversation of efficacy and the use of evidence to center stage.
That’s what we’re hoping to help continue with this symposium. We’ve learned much, but we know that the journey is just beginning. We can’t do it alone. Feel free to follow and join the conversation on Twitter with #ShowTheEvidence.
Aubrey Francisco, Chief Research Officer, Digital Promise
Bart Epstein, Founding CEO, Jefferson Education Accelerator
Gunnar Counselman, Chief Executive Officer, Fidelis Education
Katrina Stevens, former Deputy Director, Office of Educational Technology, U.S. Department of Education
Luyen Chou, Chief Product Officer, Pearson
Mahnaz Charania, Director, Strategic Planning and Evaluation, Fulton County Schools, Georgia
Mark Grovic, Co-Founder and General Partner, New Markets Venture Partners
Rahim Rajan, Senior Program Officer, Bill & Melinda Gates Foundation
Robert Pianta, Dean, University of Virginia Curry School of Education
Rebecca Griffiths, Senior Researcher, Center for Technology in Learning, SRI International
This series is produced in partnership with Pearson. The 74 originally published this article on May 1, 2017, and it was re-posted here with permission.
On top of the traditional challenges of balancing their classwork, part-/full-time jobs, extracurricular activities, and social lives, today’s higher education students also face the challenge of the ever-present information firehose that is the Internet. Every day, they receive a constant stream of emails, push notifications, instant messages, social media comments, and other digital content — all of which they can carry in their pockets, and more importantly, can interrupt whatever they’re doing at a moment’s notice.
As a result, one major challenge for today’s students is to manage the ever-growing amount of information, communication, and priorities competing for their time and attention — especially when they need to study.
We’ve been hearing from many students that when they do make time to sit down and study, they find it difficult to manage that time efficiently — particularly deciding what to study, when to study it, how often to review it, and how long to keep studying before they feel confident, all while preparing for multiple upcoming exams.
Fortunately, researchers have been investigating this problem for decades and have identified multiple methods for getting the most out of study sessions. Accordingly, here are some research-based best practices that students (or anyone else, for that matter) can use to boost their memorization skills.
Memorization takes practice
Every time you recall a piece of information (your mother’s birthday, a favorite meal at a restaurant, a key term’s definition for an exam) you retrieve it from the vast trove of knowledge that is your long-term memory. However, you’ve probably found that some pieces of information are easier to remember than others.
You’re likely to recall your home address easily because you constantly need it when filling out online forms and ensuring Amazon knows where to ship your limited edition Chewbacca mask. On the other hand, it may not be as easy to recall a friend’s phone number because it’s stored in your contacts and you rarely need to actually dial the numbers.
Unsurprisingly, researchers have found exactly this pattern: the more often people “practice” retrieving a certain piece of information, the easier it is for them to remember it. More importantly, scientists have demonstrated that a regular studying schedule can take advantage of this through what is called “spaced practice” — studying in short sessions spaced out over long periods of time. Essentially, spaced practice involves quizzing yourself and giving yourself many opportunities to practice pulling information out of your long-term memory — and doing it often over an extended period of time.
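As a rough illustration, a spaced-practice plan is easy to sketch in code. The `spaced_schedule` helper below is hypothetical, and its two-week, 20-minutes-per-day defaults are just examples, not research-backed constants:

```python
from datetime import date, timedelta

def spaced_schedule(exam_day, lead_days=14, minutes_per_day=20):
    """Return a list of (day, minutes) study sessions: one short
    session per day, starting `lead_days` before the exam."""
    start = exam_day - timedelta(days=lead_days)
    return [(start + timedelta(days=i), minutes_per_day)
            for i in range(lead_days)]

# Example: a two-week plan of short daily sessions before a May 15 exam.
plan = spaced_schedule(date(2024, 5, 15))
print(len(plan), "sessions,", sum(m for _, m in plan), "minutes total")
# → 14 sessions, 280 minutes total
```

The point of the sketch is the shape of the plan, not the numbers: many short sessions spread across the calendar, each one a fresh opportunity to retrieve the material.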
Want to give spaced practice a try? Here are some key guidelines to ensure you’re getting the most out of it.
Study early and daily
One of the most important things to remember when using spaced practice is to give yourself enough lead time before an exam. Research has shown that in general, the earlier in advance students start studying and keep studying until an exam, the higher their scores.
For example, if you have an exam in two weeks, you could begin studying for 20 minutes every day for those two weeks. That way, you’ll have many opportunities to practice retrieving the information, increasing the likelihood that you’ll remember it the day of the exam.
In contrast, if you start studying only a few days before the exam, you’ll have fewer opportunities to practice retrieving the material and are less likely to remember it. So while there isn’t a magic recipe for determining the exact moment to start studying based on the amount of material you need to remember, it’s clear that the earlier you start studying daily, the better.
Short and sweet beats long and grueling
Another key component of spaced practice is the length of the study session. While it is common for students to embark upon marathon, multi-hour study sessions, researchers have found that when using spaced practice, long study sessions are not necessarily more effective than short ones. In other words, committing to studying certain material every day for 30 minutes is likely just as effective as studying that same material for an hour every day.
Now, this doesn’t mean we should all keep our study sessions as short as humanly possible and expect amazing results. Instead, it reinforces the concept of spaced practice. For instance, let’s say your goal is to memorize 15 definitions for a quiz, and you’re committed to practicing every day until that quiz. You sit down to practice each definition twice, which takes 30 minutes. (Remember, the aim of spaced practice is to retrieve a memory, and then leave a “space” of time before you retrieve it again.)
Because your brain has already retrieved each definition twice in that sitting, you may not benefit much more from studying the same words for an additional 30 minutes and reviewing each definition a total of four times. In short, once you’ve started studying early and daily, make sure to practice each concept, definition, or item a few times per session — but more than that in a single sitting is likely overkill.
Don’t break the chain
I’ve emphasized the importance of practicing daily quite a bit here, and there’s a scientific reason for that. A solid spaced practice routine means we’re continually retrieving certain information and keeping it fresh in our minds. However, if we stop practicing before something is committed to our long-term memories, we’ll eventually forget it. Scientists have charted this phenomenon in what is referred to as “The Forgetting Curve.”
In the same way that continual practice with short spaces between each session helps us to remember information, scientists have found that our ability to remember something decreases over time if we don’t practice or use the information — which is what the steep downward slope of the Forgetting Curve illustrates. When we learn new information and are immediately asked to recall it, we’re likely to remember it (the very start of the curve).
However, from that moment on, the likelihood that we’ll remember decreases quickly and drastically unless we recall or use the memory again. If we do, then we can keep resetting or “recharging” that Forgetting Curve and keep remembering the information over time with daily practice.
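To make the “recharging” idea concrete, here is a minimal sketch of an Ebbinghaus-style exponential forgetting model. The `stability` parameter and the `boost` factor are illustrative assumptions chosen for the example, not empirically fitted values:

```python
import math

def retention(days_since_review, stability=2.0):
    """Exponential forgetting, R = exp(-t / S): the chance of recalling
    an item t days after the last review, where S is a (hypothetical)
    stability parameter describing how durable the memory is."""
    return math.exp(-days_since_review / stability)

def review(stability, boost=1.5):
    """Each successful retrieval 'recharges' the curve by making the
    memory more durable (the boost factor here is illustrative)."""
    return stability * boost

# A fact reviewed only once is mostly gone a week later...
print(f"{retention(7):.2f}")   # low retention

# ...but after five daily reviews, the curve decays far more slowly.
s = 2.0
for _ in range(5):
    s = review(s)
print(f"{retention(7, s):.2f}")  # much higher retention
```

The exact numbers don’t matter; what the model captures is that each spaced retrieval flattens the forgetting curve, so the same week-long gap costs far less memory after repeated practice.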
For example, if you took a foreign language in high school, it’s likely that being in class five days a week, doing homework and studying for the exams kept the language’s vocabulary words fresh in your mind. However, unless you have continual opportunities to practice speaking that language after high school, it’s likely that you won’t be able to recall words, phrases, and verb conjugations over time — unless you start practicing again.
With this all in mind, if your goal is to remember something, the Forgetting Curve suggests that daily practice is key. Essentially, it’s “use it or lose it.”
Start early, finish quickly, practice daily
Although memorizing material for an exam (or multiple exams) can be intimidating, research on learning has given us a few key guidelines that have consistently demonstrated results:
Start early. The earlier in advance you start studying daily for the exam, the better
Finish quickly. Cover all of the material you need to remember in your daily session, but keep it short and sweet.
Practice daily. Don’t break the daily studying chain.
While today’s students may struggle with numerous competing priorities, incorporating these habits into their routines when they do sit down to study is sure to make their sessions much more efficient.
Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380.
Ebbinghaus, H. (1964). Memory: A contribution to experimental psychology (H. A. Ruger, C. E. Bussenius, & E. R. Hilgard, Trans.). New York: Dover Publications. (Original work published 1885)
Nathan, M. J., & Sawyer, R. K. (2014). Foundations of the Learning Sciences. In R. K. Sawyer (Ed.) Cambridge Handbook of The Learning Sciences. New York: Cambridge University Press.
Pavlik, P. I., & Anderson, J. R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29(4), 559–586.
Rohrer, D., Taylor, K., Pashler, H., Wixted, J. T., & Cepeda, N. J. (2005). The effect of overlearning on long-term retention. Applied Cognitive Psychology, 19(3), 361–374.
Stahl, S. M., Davis, R. L., Kim, D. H., Lowe, N. G., Carlson, R. E., Fountain, K., & Grady, M. M. (2010). Play it Again: The Master Psychopharmacology Program as an Example of Interval Learning in Bite-Sized Portions. CNS Spectrums, 15(8), 491–504.