Technology may be one of the keys to increasing the number of engaged students in America’s classrooms. In our multi-phase Teaching in a Digital Age study, we are working with many partners to research digital teaching strategies and how they positively affect student learning. One of these positive effects reported by educators is the increased intensity of student engagement that occurs when technology is integrated into the classroom.
Technology as a tool helps teachers create and present content and instruction that is interesting and relevant to students. When learning is relevant, students become engaged, active learners. How does this happen?
With increased access to learning resources, tools, and information, students are drawn deeper into a topic than ever before. They can even direct their own learning. In fact, when done well, students don't just learn with technology; they create. One educator noted:
“When students have this technology, they can create things. They can innovate things…. When they have Photoshop in front of them and I say do this, this, and this, what they can create is always going to be completely, uniquely different. And, they become artists with that or they become filmmakers, or they become web designers. Like they can take on a lot of really advanced roles, and I think that’s something that technology does uniquely provide, because you can’t be a web designer without that technology. You can’t create a film without that technology. And, I feel like that’s really different than a textbook…let me let you take your creativity, and using this technology, create something I would have never made.”
Educators in Meridian, Idaho, noted the misconception that students engage with technology only individually. Their classrooms don't look like rooms of isolated students glued to screens. Instead, educators can direct students to engage collaboratively using technology. With technology, collaboration among students is easier and broader. It also opens doors to widen the audience and purpose of student work, giving schoolwork real meaning.
And with increased student engagement comes increased learning. There is a strong research base describing how technology strengthens student engagement and learning. For example, active learning is associated with improved student academic performance (Hake, 1998; Knight & Wood, 2005; Michael, 2006; Freeman et al., 2007; Chaplin, 2009), as well as increased student engagement, critical thinking, and better attitudes toward learning (O'Dowd & Aguilar-Roca, 2009). Read more in my paper Teaching in a Digital Age.
If technology supports teachers’ efforts to focus on effective practices that engage students, then we have another tool to engage that half of US students who aren’t currently engaged.
In my previous blog, I talked about the difference between progress monitoring and monitoring progress. Today, I share my ideas of how learning progressions can inform both.
The key to monitoring progress is understanding what students know and don’t know at any given time. Learning progressions use research on how students learn to clearly define the learning pathway and conceptual milestones along that pathway. For example, my fourth grader’s teacher could compare his work to learning progressions so that she understands more clearly what he knows, and what she can do to move him most efficiently from his “check-minuses” to “check-plusses”.
In progress monitoring, teachers use data on a regular basis to understand students' learning rates, but it is up to the teacher to formulate an instructional response. If the CBM slope is flat, the instructional next steps may not be entirely clear. Linking CBMs to learning progressions could enhance progress monitoring by making clear how students are approaching problems and what misconceptions are preventing their progress.
The National Association of State Directors of Special Education (NASDSE) states that effective progress monitoring (NRCLD, 2006, p. 22):
Assesses the specific skills represented in state and local academic standards.
Assesses marker variables that have been demonstrated to lead to the ultimate instructional targets.
Is sensitive to small increments of growth over time.
Is administered efficiently over short periods.
Is administered repeatedly (using multiple forms).
Results in data that can be summarized in teacher-friendly data displays.
Is comparable across students.
Is applicable for monitoring an individual student’s progress over time.
Is relevant to the development of instructional strategies and use of appropriate curriculum that address the area of need.
These characteristics are quite similar to some of the features of learning progressions:
Many learning progressions have been linked to standards, such as the Common Core State Standards and the Next Generation Science Standards (#1).
What the NRCLD refers to as “marker variables” are commonly referred to in learning progressions as “levels of achievement,” or the conceptual milestones that students pass through as they are learning in a particular domain (#2).
The sensitivity to small increments of growth over time is related to the grain size of a learning progression; to be useful for formative assessment, learning progressions usually need to have a relatively fine grain size (#3).
Formative assessments based on learning progressions should also be administered efficiently and repeatedly, and should be useful for monitoring students' progress over time (#4, #5, and #8).
Because learning progressions are based on the scientific literature describing how typical students learn, assessments based on learning progressions should be comparable for most students, although it is necessary to collect empirical evidence that particular subgroups of students follow the same learning pathways (#7).
One of the most promising aspects of learning progressions is the potential for providing teachers with instructionally actionable information in the form of “teacher-friendly” student and classroom performance reports and instructional tools and resources that are aligned to the learning progression (#6 and #9). We are engaging in research to learn about the inferences that teachers make from learning progression-based assessment reports. Stay tuned to learn more about these efforts as the year unfolds.
Can learning progressions live up to their promise and really help educators monitor progress and conduct progress monitoring? It is still too early to tell, but there is some encouraging research showing that with ample training and support, teachers can use learning progressions as a framework for their formative assessment and instruction and by doing so, they come to better understand their students’ learning pathways.
What do these three terms have in common? Progress, of course. Educators and parents across the globe all want to enable their students to make progress. When my fourth grader's teacher sends home a weekly folder with his work samples and tests, a "check" or a "check-plus" tells me that he gets it, or pretty much gets it. A "check-minus" gives me the impression that he has more work to do. But I don't know what pathway he needs to take to move from the "check-minus" to the "check-plus," or the best way to get him there.
Currently, educators frequently measure what students know and don't know, but this "mastery measurement" does not provide information on students' progress or learning pace toward ultimately meeting the standards we set for them. Monitoring is an integral part of ensuring that students make progress, but what is the difference between monitoring progress and progress monitoring? They sound like they're the same, don't they? And how do learning progressions fit in? In previous posts I defined and described learning progressions and why the Research & Innovation Network thinks they have promise. In today's post (Part 1) I will distinguish monitoring progress from progress monitoring. In Part 2, I'll share ideas on how I think learning progressions can inform both.
Monitoring progress is a core instructional practice that includes formative assessment, questioning, providing feedback, and similar strategies. All teachers monitor their students’ progress throughout the year, using a variety of strategies, but these strategies are not standardized and vary greatly in quantity and quality. Formative assessment plays an important role in monitoring progress, but some teachers are more comfortable with formative assessment than others, and all teachers could use tools and resources that would make conducting formative assessment easier.
Progress monitoring is a term used to describe a formal part of Response to Intervention (RTI); it is a scientifically based practice used to assess students’ academic performance and evaluate the effectiveness of instruction. It was originally designed for use in individualized special education, but is now seen as a useful approach for many different types of students (Safer & Fleischman, 2005). Teachers are trained to use student performance data to continually evaluate the effectiveness of their instruction. Students’ current levels of performance are determined and measured on a regular basis. Progress toward meeting goals is measured by comparing expected and actual rates of learning, and teachers are prompted to adjust their instruction based on these measurements.
Curriculum-based measurement (CBM) is one type of progress monitoring. A CBM test assesses all of the skills covered in a curriculum over the course of a school year. Each weekly test is an alternate form (with different test items but of equivalent difficulty) so that scores can be compared over the school year. Students’ scores are graphed over time to show their progress (see examples here); scores are expected to rise as students are learning and are exposed to the curriculum. The rate of weekly improvement is quantified as the slope of the line, which teachers can compare to normative data. If scores are flat, it signals the need for additional intervention.
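The weekly-slope calculation described above can be sketched in a few lines. This is an illustrative least-squares fit over invented weekly scores, not any actual CBM vendor's scoring code:

```python
def cbm_slope(scores):
    """Least-squares slope of weekly CBM scores (points gained per week)."""
    n = len(scores)
    weeks = list(range(1, n + 1))
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Hypothetical students: one gaining steadily, one whose scores are flat.
growing = [20, 22, 23, 25, 26, 28, 29, 31]
flat = [20, 21, 20, 19, 21, 20, 20, 21]

print(round(cbm_slope(growing), 2))  # 1.52 points per week
print(round(cbm_slope(flat), 2))     # 0.05 -- a near-zero slope signals the need to intervene
```

A teacher (or a reporting tool) would compare the fitted slope to a normative weekly growth rate for the grade and measure; a slope near zero is the "flat" line the post describes.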
How can learning progressions help with both monitoring progress and progress monitoring? Stay tuned for ideas in my next blog.
The end of the school year is a time for field trips, class parties, and final report cards. The iconic report card lets parents know how their student did that year and typically reflects attendance, participation, and performance in class. Parents generally understand how to interpret report card grades: A (great), C (average), or F (failing).
The end of the year also is the time when many parents receive their child’s standardized test scores. These results, however, are not as easy to interpret. For example, in Massachusetts, a student who earns a score of 250 on the state test is considered proficient. In Washington, it takes a score of 400. Each state has its own assessments, and each defines proficiency differently.
Now, however, nearly all the states have agreed to adopt the Common Core State Standards as an outline of what students should be taught in mathematics and English language arts. Educators will use instructional materials appropriate for teaching students the knowledge, skills, and practices laid out in these documents. That should produce less variability in instruction state to state and district to district.
In order to monitor how well students are learning this material, most of the states also have agreed to use one of two Common Core assessments that are being developed. That will make it possible for states to report results on a common scale: a 400 in English in Tennessee, for example, would be the same as a 400 in Florida. But the question remains: is 400 good enough?
To answer that question, states set performance standards. Typically, this is done by educators and other experts who get together and look at assessments and agree on which questions or tasks a proficient (or advanced or in need of improvement) student should be expected to answer or complete. That information is then translated into a specific score. The same process can be used to analyze the quality of examples of student work.
More recently, it’s become possible to answer the question of what is good enough more precisely, based not just on expert judgment but also on data. If by proficiency we mean that a student has learned enough in one grade to be ready to do well in the next one, we can test that definition by tracking how students actually perform. We can look at how well a group of students performs on a 5th grade math test and then look back at how those same kids had done on the 4th grade math test. Using statistics, we can then more accurately define what it means to be proficient in the 4th grade.
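As a toy illustration of this logic (the scores, the 400-point grade-5 bar, and the 70% success criterion below are all invented for the example), one could scan grade-4 scores for the lowest one at which most students later cleared the grade-5 proficiency bar:

```python
def empirical_cut_score(pairs, grade5_bar=400, success_rate=0.70):
    """Lowest grade-4 score such that at least `success_rate` of the
    students scoring at or above it reached `grade5_bar` a year later.
    `pairs` is a list of (grade4_score, grade5_score) tuples."""
    for cut in sorted({g4 for g4, _ in pairs}):
        cohort = [g5 for g4, g5 in pairs if g4 >= cut]
        if cohort and sum(g5 >= grade5_bar for g5 in cohort) / len(cohort) >= success_rate:
            return cut
    return None

# Hypothetical matched score pairs (grade 4, grade 5) for seven students:
pairs = [(380, 360), (390, 380), (400, 390), (410, 405),
         (420, 415), (430, 440), (440, 450)]
print(empirical_cut_score(pairs))  # 400
```

Real evidence-based standard setting uses much larger longitudinal datasets and proper statistical models, but the underlying idea is the same: the cut score is anchored to observed future success rather than to expert judgment alone.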
This process is called Evidence Based Standard Setting, and because scores can be linked to future performance, it can give parents confidence that if their child is proficient, he or she has not only mastered an important set of knowledge and skills, but also is likely to be successful in the next grade. It can even give students and their parents a sense of whether they're on track to do well after high school, whether in college or in demanding career training programs. The scores can also help identify students who need extra help before it's too late, and parents, using this information, can advocate on their children's behalf to make sure they receive that help.
The familiar report card is but one source of information about how well students are doing in school. Test results linked to important future outcomes can provide another critical piece of information to teachers, parents, and students.
There has been a lot of discussion lately about the role of advanced high-school mathematics courses — in particular, Algebra II — in promoting college and career readiness. On one side of the debate, the champions of Algebra II cite research demonstrating that completing the course leads to success in higher education and to higher earnings (Adelman, 2006; Carnevale & Desrochers, 2003). Achieve has been one of the leading advocates for including advanced mathematics in required high school curricula, suggesting there are not only practical advantages (e.g., prerequisites for future study), but also benefits to students’ general academic development. Skills acquired through Algebra II (including but not limited to logical thinking, cognitive capacity, and complex problem solving) can support success in areas far beyond a day-to-day work environment.
This isn’t to say the debate is settled. A recent report from the National Center for Education and the Economy (NCEE) found that the skills most important for succeeding in community college math courses were those introduced in middle school. By analyzing textbooks, assignments, and tests at seven community colleges, the researchers concluded that few students need to master advanced algebra to be successful. The NCEE report comes at a time when several states (e.g., Florida, Texas) are changing graduation requirements to make Algebra II optional, provide more flexible pathways toward high school graduation, and create space in students’ schedules for more vocational training.
Isolating the causal effect of taking Algebra II on future outcomes is a serious challenge, thanks to selection bias. It is likely that students who choose to take Algebra II in high school are higher performing and more motivated than many of their peers and thus more likely to attend and do well in college. In other words, it’s something about the type of students that take Algebra II, rather than completing the course itself, that leads to better student outcomes.
In a recent research study, my co-authors and I set about tackling this thorny issue — separating selection effects from Algebra II’s true causal effects. We will be presenting our work next week at the Association for Institutional Research Annual Forum in Long Beach, CA. We used national datasets spanning multiple decades and sophisticated econometric techniques to isolate cause-and-effect relationships between completing Algebra II in high school and subsequent college and career outcomes.
The verdict? Algebra II seems to matter more for college outcomes (at community colleges, technical colleges, and four-year institutions alike) than for career outcomes. Compared to their counterparts who didn't finish Algebra II, those who did were more likely to be admitted to selective colleges, maintain higher college GPAs, stay in school, and graduate. By contrast, for students who did not apply to college after high school, completing Algebra II was not related to finding a job immediately after high school, initial occupational prestige, earnings, or career advancement.
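The selection problem described earlier is typically attacked by comparing like with like. Here is a deliberately tiny sketch of exact matching on prior achievement; the data, field names, and 0.25-GPA matching band are invented, and the study itself used national datasets and more sophisticated econometric techniques:

```python
def naive_difference(students, outcome="college_gpa"):
    """Raw treated-minus-control outcome gap, ignoring selection."""
    treated = [s[outcome] for s in students if s["took_alg2"]]
    control = [s[outcome] for s in students if not s["took_alg2"]]
    return sum(treated) / len(treated) - sum(control) / len(control)

def matched_difference(students, outcome="college_gpa", band=0.25):
    """Average gap after matching each Algebra II completer to
    non-completers with a similar prior GPA."""
    controls = [s for s in students if not s["took_alg2"]]
    diffs = []
    for t in (s for s in students if s["took_alg2"]):
        matches = [c[outcome] for c in controls
                   if abs(c["prior_gpa"] - t["prior_gpa"]) <= band]
        if matches:
            diffs.append(t[outcome] - sum(matches) / len(matches))
    return sum(diffs) / len(diffs)

students = [
    {"took_alg2": True,  "prior_gpa": 3.5, "college_gpa": 3.4},
    {"took_alg2": True,  "prior_gpa": 3.0, "college_gpa": 3.0},
    {"took_alg2": False, "prior_gpa": 3.5, "college_gpa": 3.2},
    {"took_alg2": False, "prior_gpa": 3.0, "college_gpa": 2.9},
    {"took_alg2": False, "prior_gpa": 2.0, "college_gpa": 2.0},
]
print(round(naive_difference(students), 2))    # 0.5  -- inflated by selection
print(round(matched_difference(students), 2))  # 0.15 -- comparing similar students
```

The naive gap is large partly because low-achieving students cluster in the no-Algebra II group; once each completer is compared only with similar non-completers, the estimated effect shrinks, which is exactly the selection story the post tells.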
This research indicates that students not planning to attend any college (two-year or four-year) may not benefit substantially from finishing Algebra II. That said, it’s important to highlight one caveat: Algebra II does not seem to negatively impact any career outcomes. In that respect, completing the course will keep doors open to college for the many students who do not solidify their postsecondary plans before enrolling in high school courses or starting their mathematics sequence. Some of our other interesting findings from this study will be the topic of future blog posts.
In my last blog, I discussed the importance of metacognitive learning skills—attitudes, behaviors, and beliefs about learning. These skills continue to garner attention from educational researchers and policymakers. The Office of Education and Technology (OET) at the U.S. Department of Education recently released a report, Promoting Grit, Tenacity, and Perseverance—Critical Factors for Success in the 21st Century, which takes a closer look at defining, measuring, and developing these skills. Grit was defined as "perseverance to accomplish long-term or higher-order goals in the face of challenges and setbacks, engaging the student's psychological resources, such as their academic mindsets, effortful control, and strategies and tactics" (p. 15).
The task of defining and measuring grit is not simply an academic exercise; this is a trait associated with important student outcomes, including success in college. Angela Duckworth's research shows that people with a college degree (Associate's or higher) tend to be grittier than people without a degree. Moreover, and perhaps not surprisingly, grit seems to be associated with success in particularly challenging postsecondary environments. It is associated with retention at West Point, and research by Terrell Strayhorn has shown grit is a significant predictor of college grades for black males attending predominantly white institutions.
Because grit may play a key role in overcoming adversity, it is encouraging that grit, tenacity, and perseverance are skills that can be developed with the right supports. For example, the OET report recommends designing learning environments that provide students opportunities to take on long-term, higher-order goals aligned with their interests. These goals are optimally challenging and intrinsically motivating. Meeting them takes perseverance. By developing such skills early, students may be more likely to persevere through challenges that are bound to arise along their college and career paths.
The central tenets of personalized learning echo these themes. First, we must identify where each student is on a learning trajectory. We use that information to provide each student with a challenging, but attainable next step. Technology and digital learning environments can facilitate the personalization process. With these tools we can collect information about students’ strengths, weaknesses, and behaviors, and then adapt learning systems to set reasonable goals for every student. By creating personalized learning solutions, we can do more than just deliver the appropriate academic content. We can set students on a path to increase their grit.
Visualizing your data gives you clues about how two variables relate to each other. Ignoring clues from the visualization can lead you to inaccurate conclusions.
Last week Education Sector, a nonprofit education think tank, announced something they are calling "Higher Ed Data Central." They have taken a number of publicly available data sets and combined them into a single database.
On their blog, the Quick and the Ed, they started showing examples of what they could do with this data. On Friday they published a post including the graph below of the number of administrators who make over $100k per 1,000 students versus tuition at private non-profit 4 year universities.
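The broader point about visualization can be made concrete with a classic example, Anscombe's quartet (the first two of its four datasets are shown here): very different relationships can share nearly identical summary statistics, so a correlation coefficient alone can badly mislead.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y_linear = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y_curved = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

# A noisy line and a smooth curve, yet r is roughly 0.82 for both:
print(round(pearson_r(x, y_linear), 3))
print(round(pearson_r(x, y_curved), 3))
```

Plotting the two datasets immediately reveals the difference; the statistic does not. That is exactly why eyeballing a scatter plot like the one above, before drawing conclusions, matters.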
Field testing is a routine part of standardized test administration, and many such field tests are occurring in a number of states this spring in one form or another. Because field testing matters so much, and because it comes in many different varieties, it is worth understanding some of the background.
1) Let’s start with the basics. What is a field test?
A field test (as defined by the National Council on Measurement in Education) is a test administration used during the test development process to check on the quality and appropriateness of test items, administration procedures, scoring, and/or reporting. Basically, this means that the test questions themselves (including reading passages and essay prompts) are being tested, enabling educators and test developers to make sure that an item measures what it is intended to measure—that the questions provide an accurate, fair, and valid representation of what students know and can do.
2) Do field tests count toward my child’s grades or impact his or her achievement?
No. Field tests (be they separately administered tests or groups of items embedded within an ongoing assessment) never count toward a student's score or ability to advance to the next grade. Students' scores on these field-test items are used only to evaluate how well the items or test questions capture the knowledge and skills they are designed to measure.
3) If field tests aren’t used for scoring or grading, why are they done?
Field tests are a vital element in the development of fair, high-quality tests. They are done to help ensure that the questions used in upcoming standardized tests that count are fair for all students, of high quality, and rigorous enough to comply with professional standards. It's important for a state to know that questions, prompts, reading passages, or other test elements are worthy of being used to assess skills and knowledge appropriately.
Many needs are balanced when field testing is conducted, but two are critical: (1) minimizing the burden on students and schools and (2) administering tests that meet recommended industry standards. Minimizing field testing is vital so that time can be spent on instruction, but it's also important to gather enough data to evaluate the fairness of questions, to eliminate flawed items, and to build tests each year that cover the full range of the curriculum, from the very easy to the very difficult.
4) What does field testing mean for my child?
Field testing is conducted to make sure that the standardized assessments used in your school or your state meet professional standards for quality and fairness. The goal of field testing is to make sure all questions are free from bias, are aligned to your state's academic standards, and function appropriately. If you are concerned about how field testing may affect your child, contact your child's school to learn more.
5) What kinds of field tests are there?
Generally, there are two approaches to field tests: embedding questions within assessments that count for students and standalone field-testing. In both cases, any question deemed unfair after field testing is thrown out and won’t appear on any future assessments.
Embedded Field Tests
Students take embedded field-test questions at the same time they take the rest of their standardized test. This is typically done for multiple-choice assessments. Whenever possible, states embed field-test questions in multiple forms of "live" tests so that these field-test questions are randomly distributed to a representative student population. Experience shows that these procedures can give the state an appropriate amount of data to ensure fairness in a very efficient manner. The embedded field-test questions do not count toward a student's score.
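The random distribution of embedded questions across forms is often called "spiraling." A minimal sketch of the idea (the block names and student counts below are made up for illustration):

```python
import random
from collections import Counter

def spiral_field_test_blocks(student_ids, blocks, seed=2024):
    """Shuffle students, then deal field-test blocks out in rotation so
    each block reaches a random, roughly equal share of the population."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    return {sid: blocks[i % len(blocks)] for i, sid in enumerate(ids)}

# 1,000 hypothetical students, four field-test blocks:
assignment = spiral_field_test_blocks(range(1000), ["FT-A", "FT-B", "FT-C", "FT-D"])
print(Counter(assignment.values()))  # each block goes to exactly 250 students
```

Because the shuffle is random, each block's sample should look like the overall population, which is what lets the state evaluate item fairness from a single administration.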
Standalone Field Tests
Sometimes separate field tests are necessary due to factors like test structure (e.g., tests with open-ended questions, or tests that require students to perform tasks or write lengthy essays), a small student population, or the method of test delivery. States administer these separate field tests at a different time than the state assessments that are reported publicly. As with embedded field-test items, a separate field test does not count toward student scores.
6) Once gathered, how is the information from field tests used?
After field testing, a range of stakeholders – generally teachers, school administrators, and curriculum and assessment specialists who represent a range of ethnicities, genders, types and sizes of school districts, and geographical regions – gather to review the data collected from the field test. This "data review" committee examines each test question (and related collateral like reading passages) to determine whether each question is free from bias (economic, regional, cultural, gender, and ethnic) and whether each appropriately measures what it was expected to measure. Questions that pass all stages of development—including field testing and this data review process—become eligible for use on future tests. Rejected questions are precluded from use on any test.
Standardized assessment is a lens into the classroom. It sheds light on why a child might be struggling, succeeding, or accelerating on specific elements of their grade-level standards. Results from standardized tests help inform the next step in learning for our students. But it isn't always crystal clear to students, parents, and the public how and why the tests are developed. Let's delve into that.
As it stands, most states are still administering end-of-year tests as required by federal law under No Child Left Behind. For the most part, this means students take annual tests in English Language Arts and Mathematics in grades 3-8; they are tested at least once in high school. Science is tested at least once in elementary, middle and high school. Additional testing in high school often is seen after completing specific courses, like Algebra or Biology, or as a gateway to graduation.
Each state plans the specifics of its testing program, deciding elements like how many questions to put on a test, the dates for testing, whether tests are given on paper or on computer, to name a few. But, some similarities in the creation of the tests cut across the board.
Standardized tests undergo a very rigorous development process, so here's a look at the major steps that go into making a test.
States Adopt Content Standards
This is where it all begins. Everything starts with the content standards developed by states and/or a group of states, as seen with the Common Core State Standards. Content standards outline what a student should be able to know at the end of each school year. These standards are the foundation for instruction in the classroom as well as the assessment.
Decide What Will Be Assessed
Given the huge range of knowledge and skills each student is supposed to master by year's end, the assessment development process includes a determination of what will be assessed on each test for each grade. Because we can't test everything covered in a year (no one wants the test to be longer than necessary), decisions must be made.
Develop Test Items
Here's where we get into the nitty gritty. Experts, most of whom are former or current teachers with experience and knowledge of the subject matter and grade level, create "items" that test the content selected in step two. These items can be multiple-choice questions, essay prompts, tasks, situations, activities, and the like.
Of note, significant time is even spent deciding which WRONG answers to make available for multiple-choice questions. Why’s that? Every item is a chance to identify what our students really know. Incorrect answers can actually tell us a lot about what students misunderstood. For instance, did they add instead of subtract? Multiply instead of divide? Every bit of data helps disentangle what kids really, truly know, which makes the assessment process complex and the final product a very powerful education tool.
Once the items are developed, then teachers, content experts, higher education faculty, and the testing entity at the state level review them. This diverse group of stakeholders works together to create items that are fair, reliable and accurate. Lots of revisions happen at this stage. And, during this process many items are thrown out — for any number of reasons — and never see the light of day.
Field Testing or Field Trials
Now, we test the items by giving them to students. Items developed in step three are “field tested” to gauge how each works when students respond to them. Here, and I can’t stress this enough, we’re testing the item itself – not the kids. We want to know that the question itself is worthy of being used to assess skills and knowledge appropriately. Students’ scores on these field-test items are only used to evaluate the items; they are not used to calculate a student’s score for the year.
By doing these trials, we can see if gender, ethnicity or even English proficiency impact a child’s ability to successfully perform the task at hand. All of this is done to verify that each and every question is fair. Yet again, a range of stakeholders and experts are involved in the process, reviewing the results and making decisions along the way. The reality is this: if an item doesn’t meet expectations, it’s cut.
Build the Test
Using field-tested and approved items, the test systematically and thoughtfully takes its final form. Easy and hard items, tasks, and activities are incorporated. Items that assess varying skills and content areas are added. This part of the process helps us understand what a child really knows by the end of the assessment. As they say, variety is the spice of life; the same goes for an assessment. A mixture of challenging and easy items enables a range of knowledge and skills to be assessed.
Setting Performance Standards
Finally, states work with teachers and their testing partners to make decisions about how well students must perform to pass, or be considered proficient. For example, performance can be categorized as basic, passing, proficient, or advanced. These "performance standards" provide a frame of reference for interpreting the test scores. They help students, parents, educators, administrators, and policymakers understand how well a student did by using a category rating.
After – and only after – this rigorous, multi-step, multi-year process involving a range of stakeholders is complete, do the tests enter the classroom.
I’m on several listservs. I enjoy watching the dialogue between instructors and administrators about everything from the coolest new techno-widget to research questions and answers for at-risk reports. The conversations are typically interesting and challenging.
One that I’ve been watching for the past 48 hours is no different. There is a pretty significant debate going on with regard to Learning Styles. NPR ran a story a few days ago suggesting there is no such thing as proven learning styles (NPR story) and that educators are wasting their time trying to use them in teaching.
The listserv I have been watching began with a light-hearted response to the NPR story, and it soon turned downright ugly! Professors wrote in explaining how overjoyed they were to hear a story about something they knew to be “crap all along” (quote from the listserv – name withheld). The visceral rhetoric talked about ridiculous trainings on the subject and claimed that differentiation equates to edutainment (which essentially is teaching to the lowest common denominator).
(It was interesting that many of the anti-learning stylists ignored a component of the story that explains how, “Mixing things up is something we know is scientifically supported as something that boosts attention…”)
So, several posts centered around the idea that we should all go back to lecturing as it has never been proven to be ineffective…
I’m quite troubled by this conversation. (I don’t typically blog about other digital conversations I’m watching.) Although I must admit that I’m not surprised. As a faculty member and someone who speaks about the future of education, I come across a fair share of educators who disagree with topics of all scope and sequence. And I hear often about the lack of evidence for Dale’s Cone, Learning Styles, and the need for differentiation.
But as I watch and listen to the debate, I have to ask myself some basic questions of motivation. Who has a stake in the fight and why?
It certainly does not surprise me that faculty would want to replicate the manner of teaching that was modeled for them. Most people parent the way they were parented. Most people use manners as they were shown to use manners. And so, it makes sense that most people teach the way they were taught. Especially considering that the overwhelming majority of instructors have never had a single class on how to best teach or educate anyone. (We’ll not talk about the assumption that because someone is a subject matter expert they inherently know how to teach others for now…)
But, as stakeholders in the debate, I believe it is important to ask about their motivation. Now please don’t get me wrong, I LOVE to lecture. I actually won Lecturer of the Year at Metro State before my Pearson days. I enjoy the attention, the control, and the challenge of connecting to the crowd. I like trying to find ways to challenge, engage, focus, inform, and persuade. I really enjoy a good lecture. But that actually leads to my first point. As much as I like lecturing, I have to admit that it’s easy in contrast to creating differentiated learning modules for my students. And there is a major semantic elephant in the room…I said “good lecture” above. I would argue that most lectures are NOT good. I know there are a few great lecturers out there, but most instructors are not them. (Yes, I read Nudge and I know that most instructors believe they are in the top 10% of educators…but I have bad news for most of you…) Want me to prove it?
Go to a conference. ANY conference. I’m particularly embarrassed by my own discipline of communication in terms of conference presentations. You all are probably nodding already, because you know what I’m going to say. 90% of the presentations are just awful. They are boring, uninspiring lectures (sometimes more appropriately called a reading…) where the presenters (aka instructors) do not connect to the audience, the material, or the event. Most conference presentations are lectures, and if you scan the room during one of these lectures, do you know what you see? You see OTHER EDUCATORS who are sleeping, texting, Facebooking, or otherwise not paying attention.
So, it seems to me that the first reason a person would want to go back to lectures all the time is because it’s known and easy. Haven’t we all wondered if a college instructor just rolled out of bed and stood before the class expounding on things they “just knew” without any prep? And even if a lecturer does prep, how much prep actually takes place? While it may be days or weeks for a precious few, it’s likely less than an hour for most.
OK, so why else would teachers not want to differentiate instruction? I think it’s actually simple. People hate change about as much as they hate for anyone to tell them what to do. And college educators (I believe) are particularly hard on those who give an opposing view. Think about it. Professors give red marks for a living. THEY are the ones to tell someone else that what they have done or thought about is wrong…not the other way around. So, when someone says, “I don’t think you’re teaching these students in the best possible way…” they tend to get pretty defensive.
Finally, one more thought around the motivation of anti-learning style debaters that may come into play here. It’s actually a fallacy, typically known as the appeal to tradition. It’s the idea that “if it ain’t broke, don’t fix it” – or more appropriately here, “We should do it this way because we’ve always done it this way.” In these listserv conversations, I watched faculty say that everyone on the list went to college and made it through lectures, so it must be fine. Hmmm…I’ll set aside the problem of educators loving education far more than non-educators do. But there is a problem with the whole line of reasoning. The reason people started asking questions in the first place was because it was NOT working. The cracks in the armor first showed up in K-12 and then quickly moved to higher education. Our students started doing poorly on local tests, national tests, and finally world tests. Our students stopped being as employable as more and more white collar jobs went to foreign-educated graduates. So, to say that it isn’t broken is wrong. And given the old adage that insanity is doing the same thing over and over while expecting different results, that line of reasoning doesn’t seem to fly here.
So, let me wrap up what has become a very long posting with two final thoughts. First, I will concede that the term “learning style” has become so bastardized that it may no longer be meaningful. If we need to think of better ways to express our research and to explore the extraordinarily complex human mind, so be it. While I believe we will someday understand how individuals learn better, I also feel that the brain is as complex as the cosmos and we just don’t have the technology yet. But researching and framing are two different things. A learning style framework, regardless of the author, is at its core, a way to promote differentiation. And again, differentiation HAS been proven to be better teaching.
Second, if you doubt that learning styles exist, talk to parents. Specifically, talk to parents of two or more kids. I am willing to bet that 99% will tell you that their kids both learn quite differently. So, from a very practical standpoint, let’s start using effective teaching and learning techniques that promote the BEST learning in all situations, for all students…not just the few who can manage to stay with us as we lecture. It’s time to change the conversation…