Can computers really mark exams? Benefits of ELT automated assessments

Pearson Languages

Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education technology solutions. It speeds up exam marking, removes human bias, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students. 

However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where test takers live, or what their accent or gender is. 

This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

AI versus traditional automated scoring

First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean scoring items that are either multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
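The marking logic for objective items like these fits in a few lines of code. The sketch below is illustrative only – the item IDs and answer key are invented:

```python
# Minimal sketch of traditional automated scoring: each objective item
# (multiple-choice, drop-down, gap-fill) has exactly one correct answer,
# so marking is a simple, perfectly repeatable lookup.
# Item IDs and answers are invented examples.

ANSWER_KEY = {
    "q1": "b",          # multiple-choice option
    "q2": "their",      # missing word in a cloze sentence
    "q3": "2-1-3",      # sentence-reordering sequence
}

def score_objective_items(responses: dict[str, str]) -> int:
    """Return the number of correct responses, marked identically every time."""
    return sum(
        1
        for item, answer in responses.items()
        if ANSWER_KEY.get(item, "").strip().lower() == answer.strip().lower()
    )

print(score_objective_items({"q1": "B", "q2": "their", "q3": "2-3-1"}))  # 2
```

Because there is no judgment involved, two runs over the same responses can never disagree – which is exactly why these item types suit automated marking.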

While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

This is where AI comes in. 

We hear a lot about how AI is increasingly being used in areas where large amounts of unstructured data need to be handled effectively and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests. 

How AI is used to score speaking exams

The first step is to build an acoustic model for each language that can recognize speech, converting the sound waves of an utterance into text. While this technology was once rare, most of our smartphones can do it now. 

These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99 – meaning that, item by item, the machine scores and the human-marked samples agree almost perfectly. 

This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
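This validation step can be sketched in code. The scores and the exact threshold placement below are illustrative, not our production pipeline – the point is simply that each item must clear a correlation bar set by the human ‘Standards’:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def validate_item(human_scores, machine_scores, threshold=0.95):
    """Keep an item only if the engine tracks the human 'Standard' closely."""
    return pearson_r(human_scores, machine_scores) >= threshold

# Invented scores for one test item, double-marked by expert human raters
human = [2.0, 3.5, 4.0, 1.5, 5.0, 2.5]
machine = [2.1, 3.4, 4.2, 1.6, 4.9, 2.4]
print(validate_item(human, machine))  # True: the item stays in the test
```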

How AI is used to score writing exams

Our AI writing scoring uses a technology called latent semantic analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics. 

Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 

Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.
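A toy version of the LSA idea can be sketched with a term-document matrix and a truncated SVD. This assumes NumPy is available; the mini-corpus is invented, and real engines are trained on vastly larger data:

```python
import numpy as np

# Toy latent semantic analysis (LSA): responses that share meaning end up
# close together in a reduced 'semantic' space, even when their surface
# wording differs. The mini-corpus below is invented for illustration.
docs = [
    "the student writes a clear essay",
    "the learner composes a clear text",
    "trains depart from the station daily",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: raw counts of each vocabulary word in each document
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# A truncated SVD keeps only the strongest k latent 'meaning' dimensions
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per document

def cos(u, v):
    """Cosine similarity between two document vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two essay-writing responses are more similar to each other than
# either is to the unrelated sentence about trains.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))  # True
```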

AI’s ability to mark multiple traits 

One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and with complete consistency. 
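Conceptually, a trained engine applies one model per trait to the same response in a single pass. Here is a minimal sketch – the trait functions below are invented placeholders standing in for trained models, not real scoring logic:

```python
# Sketch of multi-trait scoring: one response, several trait scores, all
# produced in a single pass. Each function is a placeholder for a trained
# trait model; the formulas are invented for illustration only.

def score_content(response: str) -> float:
    return min(5.0, len(set(response.split())) / 10)   # placeholder model

def score_grammar(response: str) -> float:
    return 4.0                                          # placeholder model

def score_vocabulary(response: str) -> float:
    return 3.5                                          # placeholder model

TRAIT_MODELS = {
    "content": score_content,
    "grammar": score_grammar,
    "vocabulary": score_vocabulary,
}

def score_all_traits(response: str) -> dict[str, float]:
    """Apply every trained trait model to the same response at once."""
    return {trait: model(response) for trait, model in TRAIT_MODELS.items()}

scores = score_all_traits("The committee has reviewed the updated safety proposal")
print(sorted(scores))  # ['content', 'grammar', 'vocabulary']
```

A human marker would have to read the response once per trait; the engine scores all traits from a single reading, and adding a new trait is just another entry in the model table.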

AI’s lack of bias

A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

Our AI systems tackle this issue of bias directly. We do this by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing styles. 

We don’t train our engines only on ‘perfect’ native-speaker accents or writing styles; we use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses, and we continue to do so as new items are developed.

The benefits of AI automated assessment

There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, is repetitive, time-consuming and not always reliable – and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

Language learning takes time – lots of it – to progress to high levels of proficiency. The blended use of AI can:

  • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback 

  • allow students to practice and get instant feedback inside and outside of allocated teaching time

  • address the issue of teacher workload

  • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best

  • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. Fei-Fei Li, Chief Scientist at Google and Stanford Professor, describes AI like this:

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

Examples of AI assessments in ELT

At Pearson, we have developed a range of assessments using AI technology.

Versant

The Versant tests are a great tool for establishing language proficiency benchmarks in any school, organization or business. They are specifically designed for placement testing, to determine the appropriate level for each learner.

PTE Academic

PTE Academic is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days. 

Pearson English International Certificate (PEIC)

The Pearson English International Certificate (PEIC) also uses automated assessment technology. The two-hour test is available on demand, to take at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.

Read more about the use of AI in our learning and testing, or if you're wondering which English test is right for your students, check out our post 'Which exam is right for my students?'.

More blogs from Pearson

  • Icebreaker activities for the beginning of the school year

    Reading time: 3 minutes

    The beginning days of school are both exciting and occasionally nerve-wracking for teachers and students alike. Everyone is adjusting to new faces, routines and a fresh environment. As a teacher, you can help make this shift smooth, inviting and enjoyable. One effective way to achieve this is by using icebreaker activities.

    Icebreakers are simple games or activities that help students get to know each other, feel comfortable and start building a positive classroom community. When students feel connected, they are more likely to participate, help each other and enjoy learning. Here are some easy-to-use icebreaker activities and tips for making the beginning of the school year memorable for everyone. Here are just a few ideas for icebreakers you can use in your classroom.

  • How AI and the GSE are powering personalized learning at scale

    Reading time: 4 minutes

    In academic ops, we’re always balancing precision and practicality. On one side: the goal of delivering lessons that are level-appropriate, relevant and tied to real learner needs. On the other: juggling hundreds of courses, supporting teachers, handling last-minute changes and somehow keeping the whole system moving without losing momentum – or our minds.

    That’s exactly where AI and the Global Scale of English (GSE) have changed the game for us at Bridge. Over the past year, we’ve been using AI tools to streamline lesson creation, speed up course design and personalize instruction in a way that’s scalable and pedagogically sound.

    Spoiler alert: it’s working.

    The challenge: Customization at scale

    Our corporate English learners aren’t just “students”. They’re busy professionals: engineers, sales leads, analysts. They need immediate impact. They have specific goals, high expectations and very little patience for anything that feels generic.

    Behind the scenes, my team is constantly:

    • Adapting content to real company contexts
    • Mapping GSE descriptors to measurable outcomes
    • Designing lessons that are easy for teachers to deliver
    • Keeping quality high across dozens of industries and levels

    The solution: Building personalized courses at scale

    To address this challenge, we developed an internal curriculum engine that blends the GSE, AI and practical, job-focused communication goals into a system that can generate full courses in minutes.

    It is built around 21 workplace categories, including Conflict Resolution, Business Travel and Public Speaking. Each category has five lessons mapped to CEFR levels and GSE descriptors, sequenced to support real skill development.

    Then the fun part: content creation. Using GPT-based AI agents trained on GSE Professional objectives, we feed in a few parameters like:

    • Category: Negotiation
    • Lesson: Staying Professional Under Pressure
    • Skills: Speaking (GSE 43, 44), Reading (GSE 43, 45)

    In return, we get:

    • A teacher plan with clear prompts, instructions and model responses
    • Student slides or worksheets with interactive, GSE-aligned tasks
    • Learning outcomes tied directly to the descriptors

    Everything is structured, leveled and ready to go.
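    The parameter-driven flow above could be sketched like this. Every name, field and function here is hypothetical – the real curriculum engine and its GPT-based agents are internal, so the stub below only illustrates the shape of the inputs and outputs:

    ```python
    # Hypothetical sketch: a few lesson parameters go in, structured lesson
    # artifacts come out. generate_lesson stands in for the internal
    # GPT-based agent; all names and fields are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class LessonRequest:
        category: str
        lesson: str
        speaking_gse: list[int] = field(default_factory=list)
        reading_gse: list[int] = field(default_factory=list)

    def generate_lesson(req: LessonRequest) -> dict:
        """Placeholder for the AI agent call that drafts the lesson artifacts."""
        return {
            "teacher_plan": f"Prompts and model responses for '{req.lesson}'",
            "student_materials": f"GSE-aligned tasks for {req.category}",
            "outcomes": [f"GSE {g}" for g in req.speaking_gse + req.reading_gse],
        }

    request = LessonRequest(
        category="Negotiation",
        lesson="Staying Professional Under Pressure",
        speaking_gse=[43, 44],
        reading_gse=[43, 45],
    )
    print(generate_lesson(request)["outcomes"])
    ```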

    One example: “Staying Organized at Work”

    This A2 lesson falls under our Time Management module and hits descriptors like:

    • Reading 30: Can ask for repetition and clarification using basic fixed expressions
    • Speaking 33: Can describe basic activities or events happening at the time of speaking

    Students work with schedules, checklists and workplace vocabulary. They build confidence by using simple but useful language in simulated tasks. Teachers are fully supported with ready-made discussion questions and roleplay prompts.

    Whether we’re prepping for a quick demo or building a full 20-hour course, the outcome is the same. We deliver scalable, teacher-friendly, learner-relevant lessons that actually get used.

    Beyond the framework: AI-generated courses for individual learner profiles

    While our internal curriculum engine helps us scale structured, GSE-aligned lessons across common workplace themes, we also use AI for one-on-one personalization. This second system builds fully custom courses based on an individual’s goals, role, and communication challenges.

    One of our clients, a global mining company, needed a course for a production engineer in field ops. His English level was around B1 (GSE 43 to 50). He didn’t need grammar. He needed to get better at safety briefings, reports and meetings. Fast.

    He filled out a detailed needs analysis, and I fed the data into our first AI agent. It created a personalized GSE-aligned syllabus based on his job, challenges and goals. That syllabus was passed to a second agent, preloaded with the full GSE Professional framework, which then generated 20 complete lessons.

    The course looked like this:

    • Module 1: Reporting project updates
    • Module 2: Supply chain and logistics vocabulary
    • Module 3: Interpreting internal communications
    • Module 4: Coordination and problem-solving scenarios
    • Module 5: Safety presentation with feedback rubric

    From start to finish, the course took under an hour to build. It was tailored to his actual workday. His teacher later reported that his communication had become noticeably clearer and more confident.

    This was not a one-off. We have now repeated this flow for dozens of learners in different industries, each time mapping everything back to GSE ranges and skill targets.
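    The two-agent flow described above could be sketched as a simple chain: a needs analysis feeds a syllabus agent, whose output feeds a lesson agent. Both functions are invented stand-ins for the GPT-based agents, and the data is illustrative only:

    ```python
    # Hypothetical sketch of the two-agent chain: needs analysis -> syllabus
    # agent -> lesson agent. Both functions are invented placeholders for
    # internal GPT-based agents; the learner data below is illustrative.

    def syllabus_agent(needs: dict) -> list[str]:
        """Agent 1: turn a learner's needs analysis into a module list."""
        return [f"Module {i}: {goal}" for i, goal in enumerate(needs["goals"], start=1)]

    def lesson_agent(syllabus: list[str], lessons_per_module: int = 4) -> list[str]:
        """Agent 2: expand each module into GSE-aligned lesson drafts."""
        return [f"{module} / lesson {n}"
                for module in syllabus
                for n in range(1, lessons_per_module + 1)]

    needs = {
        "role": "production engineer",
        "gse_range": (43, 50),
        "goals": [
            "Reporting project updates",
            "Supply chain and logistics vocabulary",
            "Interpreting internal communications",
            "Coordination and problem-solving scenarios",
            "Safety presentation with feedback rubric",
        ],
    }

    course = lesson_agent(syllabus_agent(needs))
    print(len(course))  # 20 lessons: 5 modules x 4 lessons each
    ```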

    Why it works: AI + GSE = The right kind of structure

    AI helps us move fast. But the GSE gives us the structure to stay aligned.

    Without it, we’re just generating content. With it, we’re creating instruction that is:

    • Measurable and appropriate for the learner’s level
    • Easy for teachers to deliver
    • Consistent and scalable across programs

    The GSE gives us a shared language for goals, outcomes and progress. That is what keeps it pedagogically sound.

    Final thought

    A year ago, I wouldn’t have believed we could design a 20-lesson course in under an hour that actually delivers results. But now it’s just part of the workflow.

    AI doesn’t replace teaching. It enhances it. And when paired with the GSE, it gives us a way to meet learner needs with speed, clarity, and purpose. It’s not just an upgrade. It’s what’s next.

  • Back to school: Inclusive strategies to welcome and support students from day one

    Reading time: 3 minutes

    As the new school year begins, teachers have an opportunity to set the tone for inclusion, belonging and respect. With the right strategies and activities, you can ensure every student feels seen, heard and valued from the very first day. Embracing diversity isn’t just morally essential: it’s a proven pathway to deeper learning, greater engagement and a more equitable society (Gay, 2018).

    Research consistently shows that inclusive classrooms foster higher academic achievement, improved social skills and increased self-esteem for all students (Banks, 2015). When students feel safe and respected, they are more likely to take risks, collaborate and reach their full potential.