All Pearson Blog Posts


  • Rigor and Readiness: Measuring the Impact of Algebra II

    by Katie McClarty


    There has been a lot of discussion lately about the role of advanced high-school mathematics courses — in particular, Algebra II — in promoting college and career readiness. On one side of the debate, the champions of Algebra II cite research demonstrating that completing the course leads to success in higher education and to higher earnings (Adelman, 2006; Carnevale & Desrochers, 2003). Achieve has been one of the leading advocates for including advanced mathematics in required high school curricula, suggesting there are not only practical advantages (e.g., prerequisites for future study), but also benefits to students’ general academic development. Skills acquired through Algebra II (including but not limited to logical thinking, cognitive capacity, and complex problem solving) can support success in areas far beyond a day-to-day work environment.

    This isn’t to say the debate is settled. A recent report from the National Center on Education and the Economy (NCEE) found that the skills most important for succeeding in community college math courses were those introduced in middle school. By analyzing textbooks, assignments, and tests at seven community colleges, the researchers concluded that few students need to master advanced algebra to be successful. The NCEE report comes at a time when several states (e.g., Florida, Texas) are changing graduation requirements to make Algebra II optional, provide more flexible pathways toward high school graduation, and create space in students’ schedules for more vocational training.

    Isolating the causal effect of taking Algebra II on future outcomes is a serious challenge, thanks to selection bias. It is likely that students who choose to take Algebra II in high school are higher performing and more motivated than many of their peers and thus more likely to attend and do well in college. In other words, it’s something about the type of students that take Algebra II, rather than completing the course itself, that leads to better student outcomes.
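    The selection problem described above can be made concrete with a toy simulation (all numbers invented for illustration): if motivation drives both course-taking and later success, a naive comparison of Algebra II takers and non-takers shows a large gap even when the course itself has no effect.

```python
import random

random.seed(0)

# Toy simulation (made-up numbers): motivation drives BOTH taking
# Algebra II AND later success, creating selection bias.
students = []
for _ in range(10_000):
    motivated = random.random() < 0.5
    took_alg2 = random.random() < (0.8 if motivated else 0.2)
    # Success depends only on motivation here, not on the course itself.
    success = random.random() < (0.7 if motivated else 0.3)
    students.append((motivated, took_alg2, success))

def success_rate(group):
    return sum(s for _, _, s in group) / len(group)

takers = [s for s in students if s[1]]
non_takers = [s for s in students if not s[1]]

# Naive comparison suggests Algebra II "works" ...
print(f"naive gap: {success_rate(takers) - success_rate(non_takers):.2f}")

# ... but comparing within motivation levels, the gap all but vanishes.
for level in (True, False):
    t = [s for s in students if s[1] and s[0] == level]
    n = [s for s in students if not s[1] and s[0] == level]
    print(f"motivated={level}: gap {success_rate(t) - success_rate(n):.2f}")
```

    Comparing students within the same motivation level makes the spurious gap disappear, which is the intuition behind the stratification and matching techniques used in this kind of econometric work.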

    In a recent research study, my co-authors and I set about tackling this thorny issue — separating selection effects from Algebra II’s true causal effects. We will be presenting our work next week at the Association for Institutional Research Annual Forum in Long Beach, CA. We used national datasets spanning multiple decades and sophisticated econometric techniques to isolate cause-and-effect relationships between completing Algebra II in high school and subsequent college and career outcomes.

    The verdict? Algebra II seems to matter more for college outcomes (including community colleges, technical colleges, and four-year institutions) than for career outcomes. Compared to their counterparts who didn’t finish Algebra II, those who did were more likely to be admitted to selective colleges, maintain higher college GPAs, stay in school, and graduate. Conversely, for students who did not apply to college after high school, completing Algebra II was not related to finding a job immediately after high school, initial occupational prestige, earnings, or career advancement.

    This research indicates that students not planning to attend any college (two-year or four-year) may not benefit substantially from finishing Algebra II. That said, it’s important to highlight one caveat: Algebra II does not seem to negatively impact any career outcomes. In that respect, completing the course will keep doors open to college for the many students who do not solidify their postsecondary plans before enrolling in high school courses or starting their mathematics sequence. Some of our other interesting findings from this study will be the topic of future blog posts.

  • Grit, Tenacity, and Perseverance

    by Katie McClarty

    In my last blog, I discussed the importance of metacognitive learning skills—attitudes, behaviors, and beliefs about learning. These skills continue to garner attention from educational researchers and policy-makers. The Office of Educational Technology (OET) at the U.S. Department of Education recently released a report, Promoting Grit, Tenacity, and Perseverance—Critical Factors for Success in the 21st Century, which takes a closer look at defining, measuring, and developing these skills. Grit was defined as “perseverance to accomplish long-term or higher-order goals in the face of challenges and setbacks, engaging the student’s psychological resources, such as their academic mindsets, effortful control, and strategies and tactics” (p. 15).

    The task of defining and measuring grit is not simply an academic exercise; this is a trait associated with important student outcomes, including success in college. Angela Duckworth’s research shows that people with a college degree (Associate’s or higher) tend to be grittier than people without a degree. Moreover, and perhaps not surprisingly, grit seems to be associated with success in particularly challenging postsecondary environments. It is associated with retention at West Point, and research by Terrell Strayhorn has shown grit is a significant predictor of college grades for black males attending predominantly white institutions.

    Because grit may play a key role in overcoming adversity, it is encouraging that grit, tenacity, and perseverance are skills that can be developed with the right supports. For example, the OET report recommends designing learning environments that provide students opportunities to take on long-term, higher-order goals aligned with their interests. These goals are optimally challenging and intrinsically motivating. Meeting them takes perseverance. By developing such skills early, students may be more likely to persevere through challenges that are bound to arise along their college and career paths.

    The central tenets of personalized learning echo these themes. First, we must identify where each student is on a learning trajectory. We use that information to provide each student with a challenging, but attainable next step. Technology and digital learning environments can facilitate the personalization process. With these tools we can collect information about students’ strengths, weaknesses, and behaviors, and then adapt learning systems to set reasonable goals for every student. By creating personalized learning solutions, we can do more than just deliver the appropriate academic content. We can set students on a path to increase their grit.

  • Look at Your Data: Administrator Salary and Tuition

    Visualizing your data gives you clues about how two variables relate to each other. Ignoring clues from the visualization can lead you to inaccurate conclusions.

    Last week Education Sector, a nonprofit education think tank, announced something they are calling “Higher Ed Data Central.” They have taken a number of publicly available data sets and combined them into a single database.

    On their blog, The Quick and the Ed, they started showing examples of what they could do with this data. On Friday they published a post including a graph of the number of administrators who make over $100k per 1,000 students versus tuition at private nonprofit four-year universities.
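    The point about letting the picture inform the conclusion is classically illustrated by Anscombe's quartet: datasets with nearly identical summary statistics that look completely different when plotted. A quick sketch using two of Anscombe's published datasets:

```python
# Anscombe's quartet (sets I and III): near-identical summary statistics,
# very different shapes when plotted.
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# Both correlations come out near 0.816, yet set III is a near-perfect
# line with a single outlier -- something only a plot reveals.
print(f"r(set I)   = {pearson_r(x, y1):.3f}")
print(f"r(set III) = {pearson_r(x, y3):.3f}")
```

    The correlation coefficient alone cannot tell an ordinary noisy cloud apart from a straight line with one outlier, which is exactly why you should look at your data before drawing conclusions from a summary statistic.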

  • Explaining “Field Tests”: Top Six Things Parents Should Know

    by Jon Twing

    Field testing is a routine part of standardized test administration, and many such field tests are occurring in a number of states this spring in one form or another. Because field testing is so important, and because it comes in many different varieties, it helps to understand some of the background.

    1) Let’s start with the basics. What is a field test?

    A field test (as defined by the National Council on Measurement in Education) is a test administration used during the test development process to check on the quality and appropriateness of test items, administration procedures, scoring, and/or reporting. Basically, this means that an “item” (a test question, including reading passages and essay prompts) is itself being tested, enabling educators and test developers to make sure that it measures what it is intended to measure—that the questions provide an accurate, fair, and valid representation of what students know and can do.

    2) Do field tests count toward my child’s grades or impact his or her achievement?

    No. Field tests (be they separately administered tests or groups of items embedded within an ongoing assessment) never count toward a student’s score or ability to advance to the next grade. Students’ scores on these field-test items are only used to evaluate how well the items or test questions capture the knowledge and skills they are designed to measure.

    3) If field tests aren’t used for scoring or grading, why are they done?

    They are a vital element of the development of fair, high-quality tests. Field tests help ensure that the questions used in upcoming standardized tests that count are fair for all students, of high quality, and rigorous enough to comply with professional standards. It’s important for a state to know that questions, prompts, reading passages, or other test elements are worthy of being used to assess skills and knowledge appropriately.

    Many needs are balanced when field testing is conducted, but two are very critical: (1) minimizing burden on students and schools and (2) administering tests that meet recommended industry standards. Minimizing field testing is vital so that time can be spent on instruction, but it’s also important to gather enough data to be able to evaluate the fairness of questions, to eliminate flawed items, and to build tests each year that cover a range of curriculum from the very easy to the very difficult.

    4) What does field testing mean for my child?

    Field testing is conducted to make sure that the standardized assessments used in your school or your state meet professional standards for quality and fairness. The goal of field testing is to make sure all questions are free from bias, are aligned to your state’s academic standards, and function appropriately. If you are concerned about how field testing may affect your child, contact your child’s school to learn more.

    5) What kinds of field tests are there?

    Generally, there are two approaches to field tests: embedding questions within assessments that count for students and standalone field-testing. In both cases, any question deemed unfair after field testing is thrown out and won’t appear on any future assessments.

    Embedded Field Tests

    Students take embedded field-test questions at the same time they take the rest of their standardized test. This is typically done for multiple-choice assessments. Whenever possible, states embed field-test questions in multiple forms of “live” tests so that these field-test questions are randomly distributed to a representative student population. Experience shows that these procedures can give the state an appropriate amount of data to ensure fairness in a very efficient manner. The embedded field-test questions do not count toward a student’s score.
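    A hypothetical sketch of the embedding idea: a pool of field-test items is spread across several live forms, so each item reaches a random slice of the tested population (the form names, item labels, and counts below are all invented for illustration).

```python
import random

random.seed(42)

# Hypothetical: 12 field-test items spread across 4 "live" test forms,
# 3 embedded items per form, so each item reaches a random slice of
# the student population taking the operational test.
field_test_items = [f"FT-{i:02d}" for i in range(12)]
random.shuffle(field_test_items)

forms = {f"Form {chr(65 + f)}": field_test_items[f * 3:(f + 1) * 3]
         for f in range(4)}

for form, items in forms.items():
    print(form, items)
```

    Because form assignment is effectively random, each embedded item is answered by a roughly representative sample of students, which is what makes the resulting item statistics trustworthy.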

    Standalone Field Tests

    Sometimes separate field tests are necessary due to factors like test structure (e.g., tests with open-ended questions, or tests that require students to perform tasks or write lengthy essays), a small student population, or the method of test delivery. States administer these separate field tests at a different time than the state assessments that are reported publicly. As with embedded field-test items, a separate field test does not count toward student scores.

    6) Once gathered, how is the information from field tests used?

    After field testing, a range of stakeholders – generally teachers, school administrators, and curriculum and assessment specialists who represent a range of ethnicities, genders, types and sizes of school districts, and geographical regions – gather to review the data collected from the field test. This “data review” committee examines each test question (and related material like reading passages) to determine whether each question is free from bias (economic, regional, cultural, gender, and ethnic) and whether each appropriately measures what it was expected to measure. Questions that pass all stages of development—including field testing and this data review process—become eligible for use on future tests. Rejected questions are precluded from use on any test.

  • Standardized testing. What is it and how does it work?

    by Kimberly O'Malley

    Standardized assessment is a lens into the classroom. It sheds light on why a child might be struggling, succeeding, or accelerating on specific elements of their grade-level standards. Results from standardized tests help inform the next step in learning for our students. But it isn’t always crystal clear to students, parents, and the public how and why the tests are developed. Let’s delve into that.

    As it stands, most states are still administering end-of-year tests as required by federal law under No Child Left Behind. For the most part, this means students take annual tests in English Language Arts and Mathematics in grades 3-8; they are tested at least once in high school. Science is tested at least once in elementary, middle and high school. Additional testing in high school often is seen after completing specific courses, like Algebra or Biology, or as a gateway to graduation.

    Each state plans the specifics of its testing program, deciding elements like how many questions to put on a test, the dates for testing, and whether tests are given on paper or on computer. But some similarities in the creation of the tests cut across the board.

    Standardized tests undergo a rigorous development process. Here’s a bit about the five major steps that go into making a test.

    States Adopt Content Standards

    This is where it all begins. Everything starts with the content standards developed by states and/or a group of states, as seen with the Common Core State Standards. Content standards outline what a student should know and be able to do by the end of each school year. These standards are the foundation for classroom instruction as well as for the assessment.

    Given the huge range of knowledge and skills each student is supposed to master by year’s end, the assessment development process includes a determination of what will be assessed on each test for each grade. Because we can’t test everything covered in a year (no one wants the test to be longer than necessary), decisions must be made.

    Item Development

    Here’s where we get into the nitty gritty. Experts, most of whom are former or current teachers with experience and knowledge of the subject matter and grade level, create “items” that test the content selected in step two. These items can be multiple-choice questions, essay prompts, tasks, situations, activities, and the like.

    Of note, significant time is even spent deciding which WRONG answers to make available for multiple-choice questions. Why’s that? Every item is a chance to identify what our students really know. Incorrect answers can actually tell us a lot about what students misunderstood. For instance, did they add instead of subtract? Multiply instead of divide? Every bit of data helps disentangle what kids really, truly know, which makes the assessment process complex and the final product a very powerful education tool.

    Once the items are developed, then teachers, content experts, higher education faculty, and the testing entity at the state level review them. This diverse group of stakeholders works together to create items that are fair, reliable and accurate. Lots of revisions happen at this stage. And, during this process many items are thrown out — for any number of reasons — and never see the light of day.

    Field Testing or Field Trials

    Now, we test the items by giving them to students. Items developed in step three are “field tested” to gauge how each works when students respond to them. Here, and I can’t stress this enough, we’re testing the item itself – not the kids. We want to know that the question itself is worthy of being used to assess skills and knowledge appropriately. Students’ scores on these field-test items are only used to evaluate the items; they are not used to calculate a student’s score for the year.

    By doing these trials, we can see if gender, ethnicity or even English proficiency impact a child’s ability to successfully perform the task at hand. All of this is done to verify that each and every question is fair. Yet again, a range of stakeholders and experts are involved in the process, reviewing the results and making decisions along the way. The reality is this: if an item doesn’t meet expectations, it’s cut.
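    As a rough illustration of what “evaluating the items” can involve, here is a sketch of two classical item statistics: difficulty (the proportion of students answering correctly) and point-biserial discrimination (the correlation between an item and the rest of the test). The response data and flagging thresholds are invented for the example.

```python
# Made-up response matrix: rows = students, cols = field-test items (1 = correct).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

n_students = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

for j in range(n_items):
    item = [row[j] for row in responses]
    difficulty = mean(item)  # proportion correct ("p-value" of the item)
    # Point-biserial: correlation between the item and the rest of the test.
    rest = [totals[i] - item[i] for i in range(n_students)]
    discrimination = pearson(item, rest)
    flag = "" if 0.2 <= difficulty <= 0.9 and discrimination > 0.2 else "  <- review"
    print(f"item {j}: p={difficulty:.2f}, r_pb={discrimination:.2f}{flag}")
```

    Items that come out very easy, very hard, or negatively discriminating are the ones a data-review committee would examine most closely.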

    Build the Test

    Using field-tested and approved items, the test systematically and thoughtfully takes its final form. Easy and hard items, tasks, and activities are incorporated. Items that assess varying skills and content areas are added. This part of the process helps us understand what a child really knows at the end of the assessment. As they say, variety is the spice of life. The same goes for an assessment. A mixture of challenging and easy items enables a range of knowledge and skills to be assessed.
    Setting Performance Standards

    Finally, states work with teachers and their testing partners to decide how well students must perform to pass, or to be considered proficient. For example, performance can be categorized as basic, passing, proficient, or advanced. These “performance standards” provide a frame of reference for interpreting the test scores. They help students, parents, educators, administrators, and policymakers understand how well a student did by using a category rating.

    After – and only after – this rigorous, multi-step, multi-year process involving a range of stakeholders is complete do the tests enter the classroom.
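    The mapping from a scale score to a performance category can be sketched as a simple lookup against cut scores. The numbers and labels below are invented; real cut scores come out of a formal standard-setting process with teachers and other stakeholders.

```python
# Hypothetical cut scores (illustrative only; real cut scores are set
# by standard-setting committees, not picked ad hoc).
CUTS = [(85, "advanced"), (70, "proficient"), (50, "basic")]

def performance_level(score):
    """Return the first category whose cut score the student meets."""
    for cut, label in CUTS:
        if score >= cut:
            return label
    return "below basic"

print(performance_level(91))  # advanced
print(performance_level(72))  # proficient
print(performance_level(40))  # below basic
```

    The category rating is what gives parents and policymakers a frame of reference: a scale score of 72 means little on its own, but “proficient” is immediately interpretable.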

  • NPR Says Learning Styles Are A Myth…

    I’m on several listservs. I enjoy watching the dialogue between instructors and administrators about everything from the coolest new techno-widget to research questions and answers for at-risk reports. The conversations are typically interesting and challenging.

    One that I’ve been watching for the past 48 hours is no different. There is a pretty significant debate going on with regard to Learning Styles. NPR ran a story a few days ago suggesting there is no such thing as proven learning styles (NPR story) and that educators are wasting their time trying to use them in teaching.

    The listserv I have been watching began with a light-hearted response to the NPR story, and it soon turned downright ugly! Professors wrote in explaining how overjoyed they were to hear a story about something they knew to be “crap all along” (quote from the listserv – name withheld). The visceral rhetoric described ridiculous trainings on the subject and claimed that differentiation equates to edutainment (which essentially is teaching to the lowest common denominator).

    (It was interesting that many of the anti-learning stylists ignored a component of the story that explains how, “Mixing things up is something we know is scientifically supported as something that boosts attention…”)

    So, several posts centered on the idea that we should all go back to lecturing, as it has never been proven to be ineffective…

    I’m quite troubled by this conversation. (I don’t typically blog about other digital conversations I’m watching.) Although I must admit that I’m not surprised. As a faculty member and someone who speaks about the future of education, I come across a fair share of educators who disagree with topics of all scope and sequence. And I hear often about the lack of evidence for Dale’s Cone, Learning Styles, and the need for differentiation.

    But as I watch and listen to the debate, I have to ask myself some basic questions of motivation. Who has a stake in the fight and why?

    It certainly does not surprise me that faculty would want to replicate the manner of teaching that was modeled for them. Most people parent the way they were parented. Most people use manners as they were shown to use manners. And so, it makes sense that most people teach the way they were taught. Especially considering that the overwhelming majority of instructors have never had a single class on how to best teach or educate anyone. (We’ll not talk about the assumption that because someone is a subject matter expert they inherently know how to teach others for now…)

    But, as stakeholders in the debate, I believe it is important to ask about their motivation. Now please don’t get me wrong, I LOVE to lecture. I actually won Lecturer of the Year at Metro State before my Pearson days. I enjoy the attention, the control, and the challenge of connecting to the crowd. I like trying to find ways to challenge, engage, focus, inform, and persuade. I really enjoy a good lecture. But that actually leads to my first point. As much as I like lecturing, I have to admit that it’s easy in contrast to creating differentiated learning modules for my students. And there is a major semantic elephant in the room…I said “good lecture” above. I would argue that most lectures are NOT good. I know there are a few great lecturers out there, but most instructors are not them. (Yes, I read Nudge and I know that most instructors believe they are in the top 10% of educators…but I have bad news for most of you…) Want me to prove it?

    Go to a conference. ANY conference. I’m particularly embarrassed by my own discipline of communication in terms of conference presentations. You all are probably nodding already, because you know what I’m going to say. 90% of the presentations are just awful. They are boring, uninspiring lectures (sometimes more appropriately called a reading…) where the presenters (aka instructors) do not connect to the audience, the material, or the event. Most conference presentations are lectures and if you scan the room during one of these lectures, do you know what you see? You see OTHER EDUCATORS who are sleeping, texting, Facebooking, or otherwise not paying attention.

    So, it seems to me that the first reason a person would want to go back to lectures all the time is because it’s known and easy. Haven’t we all wondered if a college instructor just rolled out of bed and stood before the class expounding on things they “just knew” without any prep? And even if a lecturer does prep, how much prep actually takes place? While it may be days or weeks for a precious few, it’s likely less than an hour for most.

    OK, so why else would teachers not want to differentiate instruction? I think it’s actually simple. People hate change about as much as they hate for anyone to tell them what to do. And college educators (I believe) are particularly hard on those who give an opposing view. Think about it. Professors give red marks for a living. THEY are the ones to tell someone else that what they have done or thought about is wrong…not the other way around. So, when someone says, “I don’t think you’re teaching these students in the best possible way…” they tend to get pretty defensive.

    Finally, one more thought around the motivation of anti-learning-style debaters that may come into play here. It’s actually a fallacy, typically known as the fallacy of tradition. It’s the idea that ‘if it ain’t broke, don’t fix it’ – or more appropriately here, “We should do it this way because we’ve always done it this way.” In these listserv conversations, I watched faculty say that everyone on the list went to college and made it through lectures, so it must be fine. Hmmm…I’ll set aside the problem of educators loving education far more than non-educators. But there is a problem with the whole line of reasoning. The reason people started asking questions in the first place was because it was NOT working. The cracks in the armor first showed up in K-12 and then quickly moved to higher education. Our students started doing poorly on local tests, national tests, and finally world tests. Our students stopped being as employable as more and more white-collar jobs went to foreign-educated graduates. So, to say that it isn’t broken is wrong. And given the old adage about insanity being doing the same thing over and over while expecting different results, that line of reasoning doesn’t fly here.

    So, let me wrap up what has become a very long posting with two final thoughts. First, I will concede that the term “learning style” has become so bastardized that it may no longer be meaningful. If we need to think of better ways to express our research and to explore the extraordinarily complex human mind, so be it. While I believe we will someday understand how individuals learn better, I also feel that the brain is as complex as the cosmos and we just don’t have the technology yet. But researching and framing are two different things. A learning style framework, regardless of the author, is at its core, a way to promote differentiation. And again, differentiation HAS been proven to be better teaching.

    Second, if you doubt that learning styles exist, talk to parents. Specifically, talk to parents of two or more kids. I am willing to bet that 99% will tell you that their kids both learn quite differently. So, from a very practical standpoint, let’s start using effective teaching and learning techniques that promote the BEST learning in all situations, for all students…not just the few who can manage to stay with us as we lecture. It’s time to change the conversation…

    Good luck and good teaching.