What types of speaking questions do you have on your test?
How does it compare to a typical interview-style test?
What is the test development process you use for your speaking tests?
How does Versant compare to other English tests?
Why is your Versant English Test scoring scale from 20 to 80? Why not 0 to 100?
Sometimes when you combine all of the sub-scores from a test, they do not seem to match the overall score. How can this be?
How are the individual sub-scores weighted?
Will the reliability of Pearson’s technology stay the same or improve?
How do you know that someone can actually speak in a communicative way?
You have a section of the test where people just read out loud. What good is that for determining English speaking skills?
There is a section where people repeat sentences. Isn’t that just testing memory?
The test doesn’t include “real” examples of speech. Isn’t this artificial? How can you judge someone’s real speaking ability?
How do you deal with accents? In other words, if someone speaks well but with an accent, how will that affect their score?
Does your speaking test perform well across all levels of English?
Hesitation, repetition, pauses, even stuttering are normal parts of speech. Does automated scoring recognize this or penalize you for it?
With so many varieties of English in the world, how can technology determine what is or is not acceptable English?
What happens if a test taker answers in another language? Or in gibberish?
What do you do if a test taker is silent a large part of the time?
What about test takers from Japan? It doesn't seem this test will work for them, because English learning and testing in Japan are different.
How can you judge someone’s speaking skills without specific questions in those areas?
How can an English speaking test be automatically graded? How is that possible?
How does the technology work? How do you use it to do automated grading?
Where else is this automated scoring being used?
How can we be sure your technology really works?
How can you be sure that the automated scoring is accurate?
Why would you use automated scoring? What benefits does automated scoring have over human rating?
How is your system trained to score test responses?
How do machines pick up on the subtlety of meaning?
How do you cope with different spellings of English?
I understand that automated scoring can assess grammar, vocabulary, and spelling because Microsoft Word can do this as well. However, how can it evaluate the content of what has been written?