Are our exams fair? Rethinking assessment for neurodivergent learners
There’s a quiet, yet urgent, conversation happening across the UK education sector: are our assessment systems doing what they claim to do — measuring learning — or are they mostly measuring how well a student copes with a single, stressful format?
A recent piece in FE News lays this out plainly. Timed, high-stakes exams were built for comparability and reliability. That’s useful. But when the format (silent, handwritten, rigid timing, dense language) becomes the gatekeeper, students with ADHD, dyslexia or autism can be seriously disadvantaged. The result is not only lower grades for capable learners — it’s anxiety, shame and disengagement. That’s an outcome none of us wants.
Here’s why this matters — and what practical steps schools, colleges and exam boards should consider now.
The problem in plain terms
Exams test performance under pressure. For many learners, that’s a fair test of recall and speed. For neurodivergent learners, it often tests transcription speed, sensory tolerance, the ability to parse dense prompts, and the ability to filter the testing environment — not the depth of their understanding.
Common ways neurodivergence shows up during exams:
ADHD: difficulty sustaining attention across long, quiet papers; impulsive answers; time management difficulties.
Dyslexia: slow handwriting, difficulty decoding dense wording, and slower proofreading.
Autism: sensory overload in exam halls, literal interpretation of questions, anxiety about ambiguous wording.
If the paper is full of layered language, heavy instructions, or requires sustained handwriting, a student’s method of expression becomes the barrier — not their knowledge.
Simple adjustments that make a big difference
Some changes are low-cost and easy to implement, yet hugely effective:
Plain-English questions. Short, clear prompts reduce misreading and lower anxiety for everyone — especially for those with a literal mindset.
Extra time for those who need it. It doesn’t give an unfair content advantage; it levels the mechanical playing field.
Word-processing as a standard option. Typing removes the handwriting bottleneck, allowing students to show their thinking.
Separate, low-stimulus rooms for students who need them (with familiar proctors, if possible).
Rest breaks built into the assessment where concentration fatigue is likely.
Scaffolded rubrics that reward understanding, even if expression is unconventional.
These aren’t “special favours”. They’re reasonable design choices that support fair demonstration of learning.
Alternative assessment designs
Beyond adjustments, there are assessment models that measure understanding in richer, fairer ways:
Coursework & portfolios. Allow students to develop work over time; good for showing progress, research skills and synthesis.
Two-stage exams. An individual phase followed by a collaborative phase (students revisit problems in groups) can boost retention and reduce isolation.
Oral or recorded responses. For some students, speaking or presenting is a more accessible mode to show mastery.
Peer assessment and project work. These can show application, reasoning and teamwork — valuable skills not captured by a single paper.
Open-book assessments with applied problems. These measure how students think rather than how they memorise.
Each alternative has trade-offs. Coursework can be unevenly supported at home; oral exams need standardised marking. But well-designed mixed portfolios can combine rigour and accessibility.
What schools and teachers can do now
Audit your assessment practices. Which pupils consistently underperform in timed, handwritten exams but excel in other settings? Patterns tell a story.
Pilot alternatives. Start with one subject: run a typed exam option, a small coursework project, or a two-stage mock. Collect outcomes.
Train question writers. Use plain language checks; avoid unnecessary complexity in wording.
Teach format skills explicitly. Typing, structuring extended answers, exam technique for collaborative formats — practise these as part of normal lessons.
Engage students in choice. Where possible, allow students to choose the format that best shows their learning — with transparent moderation rules.
Gather evidence for change. Use your pilot data to approach exam boards or trust leaders; evidence is persuasive.
For policy makers and exam boards
If assessment is about public trust, boards should be willing to innovate. Small, well-evaluated pilots that scale up (with rigorous moderation) are a sensible route. Consider rethinking what “standardisation” means: it could mean a standard of fairness rather than a standard format.
Final thought
Assessment should answer one question: “What does this student know and understand?” It shouldn’t test whether they can withstand an environment that suits only some ways of thinking.
We can hold on to rigour without clinging to ritual. Plain-English questions, typing, extra time, mixed assessment formats — these are practical, respectful, and evidence-aligned moves. They unlock the potential of learners who currently spend more time fighting the format than demonstrating what they know.
If we want true inclusion, we must design systems that invite everyone in — not just those who fit a single exam template.