Explaining computerized English testing in plain English

Pearson Languages

Research has shown that automated scoring can give more reliable and objective results than human examiners when evaluating a person’s mastery of English. This is because an automated scoring system is impartial, unlike humans, who can be influenced by irrelevant factors such as a test taker’s appearance or body language. Additionally, automated scoring treats regional accents equally, unlike human examiners, who may favor accents they are more familiar with. Automated scoring also allows individual features of a spoken or written response to be analyzed independently of one another, so that a weakness in one area of language does not affect the scoring of other areas.

PTE Academic was created in response to the demand for a more accurate, objective, secure and relevant test of English. Our automated scoring system is a central feature of the test, and vital to ensuring the delivery of accurate, objective and relevant results – no matter who the test taker is or where the test is taken.

Development and validation of the scoring system to ensure accuracy

PTE Academic’s automated scoring system was developed after extensive research and field testing. A prototype test was developed and administered to a sample of more than 10,000 test takers from 158 different countries, speaking 126 different native languages. This data was collected and used to train the automated scoring engines for both the written and spoken PTE Academic items.

To do this, multiple trained human markers assess each answer. Those results are used as the training material for machine learning algorithms, similar to those used by systems like Google Search or Apple’s Siri. The model makes initial guesses at the scores each response should get, consults the actual human scores to see how well it did, and adjusts its parameters accordingly. It passes through the training set over and over again, adjusting and improving until it converges on a solution that comes as close as possible to reproducing the set of human ratings.
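The training loop described above can be sketched as a simple regression fitted by repeated adjustment. Everything below – the features, the weights, the learning rate – is invented for illustration; the real scoring engines are far more sophisticated than a linear model.

```python
import numpy as np

# Hypothetical setup: each response is reduced to a feature vector
# (e.g. vocabulary range, grammar accuracy, fluency measures), and the
# model learns weights that best reproduce the human markers' scores.
rng = np.random.default_rng(0)

n_responses, n_features = 200, 5
X = rng.normal(size=(n_responses, n_features))            # response features
true_w = np.array([1.5, -0.5, 2.0, 0.8, 0.3])             # pattern in human scoring
y = X @ true_w + rng.normal(scale=0.1, size=n_responses)  # noisy human scores

w = np.zeros(n_features)                    # initial guesses
lr = 0.05
for _ in range(500):                        # repeated passes over the training set
    pred = X @ w                            # current guess at each score
    grad = X.T @ (pred - y) / n_responses   # direction of adjustment
    w -= lr * grad                          # small correction each pass

# The learned weights end up very close to the pattern in the human scores.
print(np.allclose(w, true_w, atol=0.1))  # → True
```

The same shape of loop – guess, compare against human ratings, adjust, repeat – underlies far richer models; only the model family and the feature extraction change.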

Once trained and performing at a high level, this model is used as a marking algorithm, able to score new responses just as human markers would. Correlations between scores given by this system and trained human markers are very high. The standard error of measurement between Pearson’s system and a human rater is smaller than that between one human rater and another – in other words, the machine scores agree with a human rater more closely than two human raters agree with each other, because much of the bias and unreliability has been squeezed out of them. In general, you can think of a machine scoring system as one that distills the best of the human ratings and then behaves like an idealized human marker.
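The comparison described here can be illustrated in a few lines. The scores below are made-up numbers chosen purely for illustration; they show how a correlation and a standard error of measurement between two sets of ratings might be computed.

```python
import numpy as np

# Hypothetical scores from two human raters and from an automated system
# on the same 8 responses (0-90 scale). Numbers invented for illustration.
human_a = np.array([62, 71, 55, 80, 45, 68, 74, 59])
human_b = np.array([60, 74, 52, 78, 48, 65, 76, 57])
machine = np.array([61, 72, 54, 79, 46, 67, 75, 58])

def agreement(x, y):
    r = np.corrcoef(x, y)[0, 1]               # correlation between score sets
    # If both raters are equally noisy, the spread of their disagreement
    # is sqrt(2) times one rater's error, hence the division.
    sem = np.std(x - y, ddof=1) / np.sqrt(2)
    return r, sem

r_hh, sem_hh = agreement(human_a, human_b)    # human vs human
r_mh, sem_mh = agreement(machine, human_a)    # machine vs human
print(sem_mh < sem_hh)  # machine tracks a human rater more closely → True
```

With these toy numbers the machine-to-human error is smaller than the human-to-human error, mirroring the pattern described in the paragraph above.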

Pearson conducts scoring validation studies to ensure that the machine scores remain consistently comparable to ratings given by skilled human raters. In these studies, a new set of test-taker responses – never seen by the machine – is scored both by human raters and by the automated scoring system. Research has demonstrated that the automated scoring technology underlying PTE Academic produces scores comparable to those obtained from careful human experts. This means that the automated system “acts” like a human rater when assessing test takers’ language skills, but does so with a machine's precision, consistency and objectivity.

Scoring speaking responses with Pearson’s Ordinate technology

The spoken portion of PTE Academic is automatically scored using Pearson’s Ordinate technology. Ordinate technology results from years of research in speech recognition, statistical modeling, linguistics and testing theory. The technology uses a proprietary speech processing system that is specifically designed to analyze and automatically score speech from fluent and second-language English speakers. The Ordinate scoring system collects hundreds of pieces of information from the test taker’s spoken response beyond the words themselves, such as pace, timing and rhythm, as well as the power of the voice, emphasis, intonation and accuracy of pronunciation. It is trained to recognize even somewhat mispronounced words, and quickly evaluates the content, relevance and coherence of the response. In particular, the meaning of the spoken response is evaluated, making it possible for these models to assess whether or not what was said deserves a high score.
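As a rough illustration of timing-based features only (not the actual Ordinate pipeline, which is proprietary), here is how pace and pausing might be derived from hypothetical word timestamps produced by a speech recognizer:

```python
# Hypothetical recognizer output: (word, start_sec, end_sec) tuples.
words = [
    ("the", 0.0, 0.2), ("library", 0.3, 0.8), ("opens", 1.4, 1.8),
    ("at", 1.9, 2.0), ("nine", 2.1, 2.5),
]

total_time = words[-1][2] - words[0][1]            # 2.5 seconds of speech
speech_rate = len(words) / total_time              # pace: words per second
pauses = [nxt[1] - cur[2] for cur, nxt in zip(words, words[1:])]
long_pauses = sum(1 for p in pauses if p > 0.5)    # hesitations (timing)
mean_pause = sum(pauses) / len(pauses)             # rhythm of delivery

print(speech_rate, long_pauses)  # → 2.0 1
```

Features like these are scored independently of, say, grammar or vocabulary, which is what lets a weakness in one area leave the other scores untouched.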

Scoring writing responses with Intelligent Essay Assessor™ (IEA)

The written portion of PTE Academic is scored using the Intelligent Essay Assessor™ (IEA), an automated scoring tool powered by Pearson’s state-of-the-art Knowledge Analysis Technologies™ (KAT) engine. Based on more than 20 years of research and development, the KAT engine automatically evaluates the meaning of text, such as an essay written by a student in response to a particular prompt. The KAT engine evaluates writing as accurately as skilled human raters using a proprietary application of the mathematical approach known as Latent Semantic Analysis (LSA). LSA derives the meaning of words and passages by analyzing large bodies of relevant text and the contexts in which words appear. Using LSA, the KAT engine can therefore assess the meaning of text much as a human reader would.
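To give a flavour of how LSA works (a minimal sketch, not the KAT engine itself), the toy example below builds a term-document matrix from a tiny corpus, reduces it with a singular value decomposition, and compares documents by cosine similarity in the reduced space:

```python
import numpy as np

# Tiny invented corpus: two documents about circulation, two about law.
docs = [
    "the heart pumps blood through the body",
    "blood circulates through veins and arteries",
    "the court ruled on the appeal",
    "judges decide cases in court",
]
stopwords = {"the", "and", "on", "in", "through"}
tokens = [[w for w in d.split() if w not in stopwords] for d in docs]
vocab = sorted({w for t in tokens for w in t})
A = np.array([[t.count(w) for t in tokens] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                     # keep the top latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # each row: a document in LSA space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents on the same topic land close together in the reduced space,
# even when they share few exact words.
print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[2]))  # → True
```

Real LSA systems train on very large corpora, which is what lets them judge whether an essay's content matches a prompt rather than merely counting shared words.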

What aspects of English does PTE Academic assess?

Written scoring

  • Word choice
  • Grammar and mechanics
  • Progression of ideas
  • Organization
  • Style, tone
  • Paragraph structure
  • Development, coherence
  • Point of view
  • Task completion

Spoken scoring

  • Sentence mastery
  • Content
  • Vocabulary
  • Accuracy
  • Pronunciation
  • Intonation
  • Fluency
  • Expressiveness
  • Pragmatics

More blogs from Pearson

  • Grammar 101: insider tips and tricks to instantly improve your writing (part 4)

    Reading time: 7 minutes

    Punctuation makes your writing easier to read and understand, but it can be tricky to master. As an editor and proofreader, I often notice people confusing semi-colons and colons, so we'll explore the difference between them. And because both are often used in lists, we'll also look at the humble comma – and its sometimes-controversial cousin, the Oxford comma.

    Semi-colons and colons both connect phrases in a sentence but are used in different situations.

    Understanding colons

    Colons introduce important information and explanations. They're often used before lists as a replacement for phrases like "they are" and "which is":

    • He offered me a choice of drinks: tea, coffee or hot chocolate.
    • I packed the essentials in my bag: water, pens and a magazine.
    • She speaks three languages: English, French and Portuguese.

    You can also think of a colon as a spotlight, with the phrase that comes after the colon explaining or expanding what came before it.

    • In 1903, travel was changed forever by an important event: Orville and Wilbur Wright's first successful flight.
    • He loves visiting the animals at the farm: cows are his favourite.
    • There is one rule I live by: I treat others as I wish to be treated.

    The secrets of semi-colons

    A semi-colon links two ideas that are closely related and that would be two complete sentences if you used a period instead. They give a softer transition than a period would, and they're often used instead of conjunctions like "and", "but" and "because":

    • I love eating pizza; my sister loves eating burgers.
    • I wanted to go for a swim; I couldn't find my goggles.
    • It was the best of times; it was the worst of times.

    Semi-colons also separate items in long lists to make life easier for the reader and stop a sentence becoming a sea of commas. For example:

    • I've got my shopping list ready: peppers, carrots and oranges from the market; toothpaste, shampoo and pain relief from the drugstore; and a newspaper, snack and drink from the newsstand.

    Standard comma or Oxford comma?

    An Oxford comma goes before "and" or "or" at the end of a list. The first example below has an Oxford comma; the second doesn't.

    • Please bring me a sandwich made with cheese, lettuce, and tomato.
    • Please bring me a sandwich made with cheese, lettuce and tomato.

    American English generally favors the Oxford comma; British English typically omits it unless it's needed for clarity. Compare:

    • I love my parents, Taylor Swift and Keanu Reeves.
    • I love my parents, Taylor Swift, and Keanu Reeves.

    As with many areas of punctuation, whether you choose to use the Oxford comma is a matter of personal preference. However, the most important thing is to be consistent in your usage.

  • Clear path to fast-track progress: Why choose assessment underpinned by the GSE


    Reading time: 4 minutes

    At the beginning of every school year, we welcome new learners into our classrooms with the same core question: Where are our students now, and how far can we take them?

    For English teachers, this reveals a huge challenge. In a single class, we might have one student at an A2 level, while others are solidly B1 or just entering A2+. Navigating such a wide range of abilities can feel overwhelming.

    We’ve all seen it: students can spend months (or even years) studying English and still feel like they haven’t moved up a level. Teachers work incredibly hard, and students put in the effort, but progress feels intangible. Why is that? And more importantly, how can schools make it easier to see and support that progress?

    In recent years, I have found a powerful ally in answering that question: the Global Scale of English (GSE). Backed by Pearson and aligned with the CEFR, the GSE offers more than just levels: it provides a clear, data-informed path to language growth. Most importantly, it gives teachers and school leaders the ability to set meaningful goals and measure real progress.

    But how is this useful at the beginning of the school year?

    Starting with assessment

    To get a clear picture from the start, assessment is essential; there’s no doubt about it. However, it can't just be a full stop at the end of a term or a box-ticking requirement from administration. Used strategically, this first assessment can be the compass that guides instruction and curriculum decisions, empowering both teachers and students from day one. This is why choosing the correct assessment tools becomes fundamental.

    The GSE difference: Precision, clarity, confidence

    Unlike the broad bands of the CEFR, the GSE provides a granular scale from 10 to 90, breaking down each skill into precise learning objectives. This allows educators to monitor progress at a much closer level, often identifying improvements that would otherwise go unnoticed.

    When learners see that their score has moved from 36 to 42, even if their overall CEFR level hasn’t changed, they gain confidence. They recognize that learning is a continuous process rather than a series of steps. Teachers, in turn, are able to validate growth, provide clear evidence of learning and tailor instruction to the learner’s current needs, not just their general level.

    For example, two students might both be classified as "A2", but the GSE gives us a much clearer picture: a student with a GSE score of 35 is likely mastering simple sentences, while another student scoring 40 might already be comfortable writing simple stories and is ready to tackle B1-level tasks.
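As a sketch of how such a lookup might work, the function below maps a GSE score onto a CEFR band. The cut-off points are approximate alignments included for illustration only; treat them as assumptions and consult the official GSE documentation for the authoritative mapping.

```python
# Approximate GSE-to-CEFR cut-points, listed in ascending order.
# These boundaries are illustrative assumptions, not official values.
GSE_BANDS = [
    (22, "A1"), (30, "A2"), (36, "A2+"), (43, "B1"),
    (51, "B1+"), (59, "B2"), (67, "B2+"), (76, "C1"), (85, "C2"),
]

def cefr_band(gse_score):
    """Return the highest CEFR band whose cut-off the score reaches."""
    band = "<A1"
    for cutoff, label in GSE_BANDS:
        if gse_score >= cutoff:
            band = label
    return band

# The two "A2" students from the example above sit in different places:
print(cefr_band(35), cefr_band(40))  # → A2 A2+
```

The point of the finer scale is visible here: both students round to "A2" in broad CEFR terms, yet the GSE separates a learner consolidating A2 from one already entering A2+.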

    This isn't just data: it's a roadmap. It tells us exactly what to teach next, allowing us to differentiate with confidence instead of relying solely on gut feeling.

    GSE tools that make it happen

    Pearson offers a comprehensive range of GSE-aligned assessment tools that support different stages of the learning journey. Each tool plays a distinct role in placement, diagnosis, benchmarking or certification.

  • What happens in the brain when you learn a language?


    Reading time: 7 minutes

    Whether you’re picking up Spanish for travel, Mandarin for business or French just for fun, you’re not only expanding your communication skills, you’re also giving your brain a powerful workout. But what actually happens inside your brain when you learn a language?

    The brain’s language centers

    Your brain is made up of many parts, and two areas are especially significant for language:

    • Broca's area: Located in the frontal lobe, this region helps you produce speech and form sentences.
    • Wernicke's area: Found in the temporal lobe, this area helps you understand spoken and written language.

    When you start learning a new language, these areas get busy. They work together to help you listen, speak, read and write in your new language (Friederici, 2011).