Explaining computerized English testing in plain English

Pearson Languages

Research has shown that automated scoring can give more reliable and objective results than human examiners when evaluating a person’s mastery of English. This is because an automated scoring system is impartial, unlike humans, who can be influenced by irrelevant factors such as a test taker’s appearance or body language. Additionally, automated scoring treats regional accents equally, unlike human examiners, who may favor accents they are more familiar with. Automated scoring also allows individual features of a spoken or written response to be analyzed independently of one another, so that a weakness in one area of language does not affect the scoring of other areas.

PTE Academic was created in response to the demand for a more accurate, objective, secure and relevant test of English. Our automated scoring system is a central feature of the test, and vital to ensuring the delivery of accurate, objective and relevant results – no matter who the test-taker is or where the test is taken.

Development and validation of the scoring system to ensure accuracy

PTE Academic’s automated scoring system was developed after extensive research and field testing. A prototype test was developed and administered to a sample of more than 10,000 test takers from 158 different countries, speaking 126 different native languages. This data was collected and used to train the automated scoring engines for both the written and spoken PTE Academic items.

To do this, multiple trained human markers assess each answer. Those results are used as the training material for machine learning algorithms, similar to those used by systems like Google Search or Apple’s Siri. The model makes initial guesses as to the scores each response should get, then consults the actual scores to see how well it did, adjusts its parameters, and goes through the training set over and over again, adjusting and improving until it converges on a solution that predicts the set of human ratings as closely as possible.
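
To make that guess-compare-adjust loop concrete, here is a minimal sketch of this kind of supervised training using scikit-learn's ridge regression. Everything in it (the feature matrix, the ratings, the model choice) is a hypothetical stand-in; Pearson's production engines are far more sophisticated, but the underlying idea is the same.

```python
# Minimal sketch: training a scoring model to reproduce human ratings.
# All features and data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one test-taker response, described by numeric features
# (e.g., vocabulary range, error counts, response length).
X = rng.normal(size=(10_000, 20))
true_weights = rng.normal(size=20)
# Human ratings the model should learn to predict (with rater noise).
y = X @ true_weights + rng.normal(scale=0.5, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Ridge(alpha=1.0)
model.fit(X_train, y_train)  # repeatedly adjust weights to fit the ratings

# The trained model now scores unseen responses much as humans would.
print("R^2 on held-out responses:", model.score(X_test, y_test))
```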

Once trained and performing at a high level, this model is used as a marking algorithm, able to score new responses just as human markers would. Correlations between scores given by this system and trained human markers are quite high. The standard error of measurement between Pearson’s system and a human rater is less than that between one human rater and another – in other words, the machine scores are more consistent than those given by a pair of human raters, because much of the bias and unreliability has been squeezed out of them. In general, you can think of a machine scoring system as one that distills the best of human ratings and then applies it like an idealized human marker.
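
The sketch below makes that comparison concrete: it computes the standard error of measurement (SEM) for a human-human rater pair and a machine-human pair, using one common SEM formulation. The score arrays are hypothetical stand-ins for real rating data.

```python
# Sketch: comparing machine-human agreement with human-human agreement.
import numpy as np

def sem(scores_a, scores_b):
    """Standard error of measurement via SEM = s * sqrt(1 - r),
    one common formulation based on inter-rater reliability r."""
    r = np.corrcoef(scores_a, scores_b)[0, 1]
    s = np.std(np.concatenate([scores_a, scores_b]), ddof=1)
    return s * np.sqrt(1 - r)

rng = np.random.default_rng(1)
true_ability = rng.normal(70, 10, size=500)          # hypothetical "true" scores
human_1 = true_ability + rng.normal(0, 4, size=500)  # noisier human ratings
human_2 = true_ability + rng.normal(0, 4, size=500)
machine = true_ability + rng.normal(0, 2, size=500)  # steadier machine scores

print("human-human SEM:  ", sem(human_1, human_2))
print("machine-human SEM:", sem(machine, human_1))   # typically smaller
```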

Pearson conducts scoring validation studies to ensure that the machine scores are consistently comparable to ratings given by skilled human raters. Here, a new set of test-taker responses (never seen by the machine) is scored both by human raters and by the automated scoring system. Research has demonstrated that the automated scoring technology underlying PTE Academic produces scores comparable to those obtained from careful human experts. This means that the automated system “acts” like a human rater when assessing test takers’ language skills, but does so with a machine's precision, consistency and objectivity.
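
One way such a validation study can be quantified (the article does not specify Pearson's exact statistics) is with correlation and quadratic weighted kappa between machine and human scores on the held-out responses, as in this sketch with hypothetical data:

```python
# Sketch of a validation check on held-out responses (hypothetical data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
human = rng.integers(0, 6, size=300)       # human ratings on a 0-5 scale
noise = rng.integers(-1, 2, size=300)      # occasional small disagreements
machine = np.clip(human + noise, 0, 5)     # machine ratings on the same scale

r = np.corrcoef(human, machine)[0, 1]
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"correlation: {r:.2f}, quadratic weighted kappa: {qwk:.2f}")
```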

Scoring speaking responses with Pearson’s Ordinate technology

The spoken portion of PTE Academic is automatically scored using Pearson’s Ordinate technology. Ordinate technology results from years of research in speech recognition, statistical modeling, linguistics and testing theory. The technology uses a proprietary speech processing system that is specifically designed to analyze and automatically score speech from fluent and second-language English speakers. The Ordinate scoring system collects hundreds of pieces of information from a test taker’s spoken response beyond the words themselves, such as pace, timing and rhythm, as well as vocal power, emphasis, intonation and accuracy of pronunciation. It is trained to recognize even somewhat mispronounced words, and quickly evaluates the content, relevance and coherence of the response. In particular, the meaning of the spoken response is evaluated, making it possible for these models to assess whether what was said deserves a high score.
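
Ordinate itself is proprietary, but the flavor of this kind of acoustic analysis can be sketched with the open-source librosa library: the snippet below extracts a few prosodic measurements (loudness, pitch contour, speaking-versus-pause time) from a recording. The file path is a placeholder, and these few features only hint at the hundreds Ordinate collects.

```python
# Rough illustration of prosodic feature extraction (not Ordinate itself).
import librosa
import numpy as np

# Placeholder path to a recorded spoken response.
y, sr = librosa.load("response.wav", sr=16000)

# Loudness over time (relates to vocal power and emphasis).
rms = librosa.feature.rms(y=y)[0]

# Fundamental frequency contour (relates to intonation).
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Non-silent intervals (relate to pace, timing and pausing).
speech_intervals = librosa.effects.split(y, top_db=30)
speech_time = sum((end - start) for start, end in speech_intervals) / sr

print("mean energy:", float(np.mean(rms)))
print("mean pitch (Hz):", float(np.nanmean(f0)))
print("speaking time (s):", speech_time, "of", len(y) / sr)
```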

Scoring writing responses with Intelligent Essay Assessor™ (IEA)

The written portion of PTE Academic is scored using the Intelligent Essay Assessor™ (IEA), an automated scoring tool powered by Pearson’s state-of-the-art Knowledge Analysis Technologies™ (KAT) engine. Based on more than 20 years of research and development, the KAT engine automatically evaluates the meaning of text, such as an essay written by a student in response to a particular prompt. The KAT engine evaluates writing as accurately as skilled human raters using a proprietary application of the mathematical approach known as Latent Semantic Analysis (LSA). LSA evaluates the meaning of language by analyzing large bodies of relevant text and learning how words are used in context across them. Using LSA, the KAT engine can therefore gauge the meaning of text much as a human reader would.
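
The KAT engine's implementation is proprietary, but the core LSA idea is public and easy to sketch: build a term-document matrix from a reference corpus, project it into a low-dimensional semantic space, and compare texts by cosine similarity in that space. The toy corpus below is hypothetical.

```python
# Toy sketch of Latent Semantic Analysis (not the KAT engine itself).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reference corpus of essays on one prompt.
corpus = [
    "Urban green spaces improve air quality and public health.",
    "Parks in cities lower pollution and encourage exercise.",
    "City parks benefit residents' health and the environment.",
    "My favourite food is pizza because it tastes good.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Project the term-document matrix into a low-dimensional semantic space.
lsa = TruncatedSVD(n_components=2, random_state=0)
semantic = lsa.fit_transform(tfidf)

# Essays about the same topic land close together; off-topic ones do not.
print(cosine_similarity(semantic[:1], semantic))
```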

What aspects of English does PTE Academic assess?

Written scoring

  • Word choice
  • Grammar and mechanics
  • Progression of ideas
  • Organization
  • Style, tone
  • Paragraph structure
  • Development, coherence
  • Point of view
  • Task completion

Spoken scoring

  • Sentence mastery
  • Content
  • Vocabulary
  • Accuracy
  • Pronunciation
  • Intonation
  • Fluency
  • Expressiveness
  • Pragmatics

More blogs from Pearson

  • Dance your way to fluent language learning and enhanced wellbeing

    By Charlotte Guest
    Reading time: 5 minutes

    Language learning can often feel daunting, with its endless vocabulary lists, grammatical structures and pronunciation rules. However, incorporating dance and movement into your study routine can transform this challenge into an engaging, enjoyable experience while significantly benefiting your overall wellbeing. This unusual approach is not only effective for language learners of all ages but also enriches the learning process with fun and physical activity.

    Engaging in movement and dance can substantially impact mental health, as evidenced by various studies and academic research. For instance, a notable study published in the American Journal of Dance Therapy highlighted that dance, particularly in structured environments, can reduce anxiety and improve mood among participants. This connection between dance and improved mental health can be attributed to the release of endorphins, often referred to as happiness hormones, during physical activity.

  • AI scoring vs human scoring for language tests: What's the difference?

    By Charlotte Guest
    Reading time: 6 minutes

    When entering the world of language proficiency tests, test takers are often faced with a dilemma: Should they opt for tests scored by humans or those assessed by artificial intelligence (AI)? The choice might seem trivial at first, but understanding the differences between AI scoring and human language test scoring can significantly impact preparation strategy and, ultimately, determine test outcomes.

    The human touch in language proficiency testing and scoring

    Historically, language tests have been scored by human assessors. This method leverages the nuanced understanding that humans have of language, including idiomatic expressions, cultural references, and the subtleties of tone and writing style. Human scorers can appreciate the creative and original use of language, potentially rewarding test takers for flair and originality in their answers. Scorers are particularly effective at evaluating progress or achievement tests, which are designed to assess a student's language knowledge and progress after completing a particular chapter or unit, or at the end of a course, reflecting how well the learner is performing in their language studies.

    One significant difference between human and AI scoring is how they handle context. Human scorers can understand the significance and implications of a particular word or phrase in a given context, while AI algorithms rely on predetermined rules and datasets.

    The adaptability of human scorers also contributes significantly to the effectiveness of scoring in language tests: they adjust their judgments as they learn from new and unexpected responses.

    Advantages:

    • Nuanced understanding: Human scorers are adept at interpreting the complexities and nuances of language that AI might miss.
    • Contextual flexibility: Humans can consider context beyond the written or spoken word, understanding cultural and situational implications.

    Disadvantages:

    • Subjectivity and inconsistency: Despite rigorous training, human-based scoring can introduce a level of subjectivity and variability, potentially affecting the fairness and reliability of scores.
    • Time and resource intensive: Human-based scoring is labor-intensive and time-consuming, often resulting in longer waiting times for results.
    • Human bias: Assessors, despite being highly trained and experienced, bring their own perspectives, preferences and preconceptions into the grading process. This can lead to variability in scoring, where two equally competent test takers might receive different scores based on the scorer's subjective judgment.

    The rise of AI in language test scoring

    With advancements in technology, AI-based scoring systems have started to play a significant role in language assessment. These systems use algorithms and natural language processing (NLP) techniques to evaluate test responses. AI scoring promises objectivity and efficiency, offering a standardized way to assess language proficiency.

    Advantages:

    • Consistency: AI scoring systems provide a consistent scoring method, applying the same criteria across all test takers, thereby reducing the potential for bias.
    • Speed: AI can process and score tests much faster than human scorers can, leading to quicker results turnaround.
    • Great for more nervous testers: Not everyone likes having to take a test in front of a person, so AI removes that extra stress.

    Disadvantages:

    • Lack of nuance recognition: AI may not fully understand subtle nuances, creativity, or complex structures in language the way a human scorer can.
    • Dependence on data: The effectiveness of AI scoring is heavily reliant on the data it has been trained on, which can limit its ability to interpret less common responses accurately.

    Making the choice

    When deciding between tests scored by humans or AI, consider the following factors:

    • Your strengths: If you have a creative flair and excel at expressing original thoughts, human-scored tests might appreciate your unique approach more. Conversely, if you excel in structured language use and clear, concise expression, AI-scored tests could work to your advantage.
    • Your goals: Consider why you're taking the test. Some organizations might prefer one scoring method over the other, so it's worth investigating their preferences.
    • Preparation time: If you're on a tight schedule, the quicker turnaround time of AI-scored tests might be beneficial.

    Ultimately, both scoring methods aim to measure and assess language proficiency accurately. The key is understanding how each approach aligns with your personal strengths and goals.

    The bias factor in language testing

    An often-discussed concern in both AI and human language test scoring is the issue of bias. With AI scoring, biases can be ingrained in the algorithms through the data they are trained on, but a well-designed system can detect and mitigate such bias, providing fairer scoring.
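
    As a hypothetical illustration of such a design check, a test provider might compare score distributions across test-taker groups and flag unusually large gaps for human review:

    ```python
    # Hypothetical fairness check: compare score distributions across groups.
    import numpy as np

    rng = np.random.default_rng(3)
    scores_by_group = {
        "L1: Spanish":  rng.normal(72, 8, size=400),
        "L1: Mandarin": rng.normal(71, 8, size=400),
        "L1: Arabic":   rng.normal(68, 8, size=400),  # hypothetical gap to flag
    }

    overall_mean = np.mean(np.concatenate(list(scores_by_group.values())))
    for group, scores in scores_by_group.items():
        gap = np.mean(scores) - overall_mean
        flag = "review" if abs(gap) > 2.0 else "ok"
        print(f"{group}: mean gap {gap:+.1f} -> {flag}")
    ```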

    Conversely, human scorers, despite their best efforts to remain objective, bring their own subconscious biases to the evaluation process. These biases might be related to a test taker's accent, dialect, or even the content of their responses, which could subtly influence the scorer's perceptions and judgments. Efforts are continually made to mitigate these biases in both approaches to ensure a fair and equitable assessment for all test takers.

    Preparing for success in foreign language proficiency tests

    Regardless of the scoring method, thorough preparation remains, of course, crucial. Familiarize yourself with the test format, practice under timed conditions, and seek feedback on your performance, whether from teachers, peers, or through self-assessment tools.

    The distinctions between AI and human scoring in language tests continue to blur, with many exams now incorporating a mix of both to leverage their respective strengths. Understanding and interpreting written language remains essential in preparing for language proficiency tests, especially reading tests. By understanding these differences, test takers can better prepare for their exams, setting themselves up for the best possible outcome.

    Will AI replace human-marked tests?

    The question of whether AI will replace markers in language tests is complex and multifaceted. On one hand, the efficiency, consistency and scalability of AI scoring systems present a compelling case for their increased utilization. These systems can process vast numbers of tests in a fraction of the time it takes markers, providing quick feedback that is invaluable in educational settings. On the other hand, the nuanced understanding, contextual knowledge, flexibility, and ability to appreciate the subtleties of language that human markers bring to the table are qualities that AI has yet to fully replicate.

    Both AI and human-based scoring aim to accurately assess language proficiency levels, such as those defined by the Common European Framework of Reference for Languages (CEFR) or the Global Scale of English, where a top level like CEFR C2 (roughly 85-90 on the Global Scale of English) indicates that a learner can understand virtually everything they hear or read and use the language with a precision approaching, or in some respects exceeding, that of a native speaker.

    The integration of AI in language testing is less about replacement and more about complementing and enhancing the existing processes. AI can handle the objective, clear-cut aspects of language testing, freeing markers to focus on the more subjective, nuanced responses that require a human touch. This hybrid approach could lead to a more robust, efficient and fair assessment system, leveraging the strengths of both humans and AI.

    Future developments in AI technology and machine learning may narrow the gap between AI and human grading capabilities. However, the ethical considerations, such as ensuring fairness and addressing bias, along with the desire to maintain a human element in education, suggest that a balanced approach will persist. In conclusion, while AI will increasingly play a significant role in language testing, it is unlikely to completely replace markers. Instead, the future lies in finding the optimal synergy between technological advancements and human judgment to enhance the fairness, accuracy and efficiency of language proficiency assessments.

    Tests to let your language skills shine through

    Explore Pearson's innovative language testing solutions today and discover how we are blending the best of AI technology and our own expertise to offer you reliable, fair and efficient language proficiency assessments. We are committed to offering reliable and credible proficiency tests, ensuring that our certifications are recognized for job applications, university admissions, citizenship applications, and by employers worldwide. Whether you're gearing up for academic, professional, or personal success, our tests are designed to meet your diverse needs and help unlock your full potential.

    Take the next step in your language learning journey with Pearson and experience the difference that a meticulously crafted test can make.

  • Using language learning as a form of self-care for wellbeing

    By Charlotte Guest
    Reading time: 6.5 minutes

    In today’s fast-paced world, finding time for self-care is more important than ever. Among a range of traditional self-care practices, learning a language emerges as an unexpected but incredibly rewarding approach. Learning a foreign language is a key aspect of personal development and can help your mental health, offering benefits like improved career opportunities, enhanced creativity, and the ability to connect with people from diverse cultures.