The Room Is Never Uniform
Everyone who has run a training program knows this before the first session even starts: the cohort is not a single entity. It's a range. Some students arrive with extensive practical experience and treat the early modules as a warm-up. Others are technically at the entry threshold but are genuinely building from scratch. And everyone in between fills out the middle.
That range is not the problem. The problem is what most programmes do about it - which is, ultimately, the same thing: pick a pace, accept it won't work for everyone, and move on.
When the Feedback Becomes Background Noise
Ask any instructor who has run a mixed-skill cohort long enough and you'll hear the same two pieces of feedback, reliably, from opposite ends of the room.
The stronger students say the pace is too slow. They disengage during lectures. They finish exercises early and sit idle. Over time, they may even stop investing fully because the curriculum isn't asking them to.
The weaker students say the pace is too fast. They can't consolidate one topic before the next one arrives. They're practicing under time pressure, which is the worst condition for building genuine skill.
Both pieces of feedback are so consistently predictable that instructors stop reacting to them. In a large cohort with a normal distribution of ability, there's no realistic response available within the current model. So the feedback gets absorbed as background noise - not because instructors don't care, but because the structure doesn't give them anything to do with it.
That normalisation is worth naming clearly. It isn't a motivation problem. It's a model problem.
Two Students, Two Programmes, Same Structural Failure
I've seen both ends of this spectrum play out in programmes I've run - not in the same cohort, but the contrast is sharp enough to be worth putting side by side.
In one programme, a student finished the exercises for the next topic before the rest of the class had completed the third exercise of the current one - before that next topic's lecture had even been delivered. He had run ahead of the curriculum on his own. And at some point, knowing he was well ahead, he eased off. The urgency disappeared. A student with real ability stopped being challenged because the model had no mechanism to stretch him.
In a separate programme, a student was consistently two full topics behind the entire class throughout the course. The gap was severe enough that the team seriously discussed whether he should drop out - not because of ability, but because the schedule had no structural room to accommodate the pace he actually needed to learn properly.
Same cohort model. Opposite problems. One pace served neither of them.
What Pre-Assessment Actually Solves
The instinct is to fix this at intake. Screen better, filter harder, start with a more uniform group.
I've seen two approaches to this, and both are instructive about what pre-assessment can and can't do.
Short pre-assessment interviews can flag students who might need extra attention early, technically or motivationally. They're useful as a rough filter. But they're too brief to give you an accurate picture of actual skill level. You're identifying outliers at best, not mapping the range.
Formal entry tests narrow the spread more meaningfully: students below a minimum threshold don't make it through. But students who pass the same test still arrive with significantly different levels of practical experience. Clearing the bar doesn't mean standing at the same height. The heterogeneity survives even the most structured intake process.
Pre-assessment can compress the range. It can't close it.
What Actually Worked - And Why It Doesn't Scale
I've seen two approaches that genuinely addressed the pacing problem. Both required resources most programmes don't have.
The first was personalised exercise queues at a high trainer-to-student ratio. When coverage was good enough, trainers could build individual exercise queues - remedial practice and more time for weaker students, advanced exercises to keep stronger ones stretched. The core curriculum pace stayed fixed, but the exercise layer became individual. It worked. The constraint was that it required dedicated time allocation and only functioned at that ratio.
The second was level-split classes. With enough students and enough trainers, cohorts could be divided into three or four separate groups - each running lectures and exercises at a pace calibrated to their starting point. The gap within each class narrowed meaningfully. The constraint was that it only became viable at significant scale, with the trainer headcount to support multiple parallel tracks.
Both solutions work. The barrier isn't knowledge - the solutions are well understood. The barrier is resources. Most training providers can't sustain the ratio or the headcount that makes either approach viable. Everything else is a managed compromise: design for the middle, absorb the feedback from both ends, accept some disengagement and some attrition as the cost of running at scale.
The Question I Keep Coming Back To
The pacing problem isn't unsolvable. The workarounds exist and they're proven. The barrier is that implementing what works has always required a level of human resource that most commercial programmes can't justify.
But that constraint is worth challenging directly. The exercise layer, the part of training that most directly drives skill transfer, is exactly where personalisation matters most. And it's the part of the model most amenable to rethinking.
What would it take to deliver a personalised exercise experience without needing a 1:4 ratio to make it work?
That's the question driving how I think about training design now.