The Borrowed Model

When professional certification prep programmes needed a delivery structure, most of them borrowed from the classroom. It made sense at the time. The classroom model was familiar, scalable, and came with a ready-made logic: deliver content, practise it, test it, certify.

The problem isn't that the classroom model is wrong. It was designed for a specific purpose and it served that purpose reasonably well. The problem is that some of what it exported to certification prep deserves a harder look - because the context is different, the stakes are different, and the students sitting in those rooms are different.

After running cybersecurity programmes across several environments, I've come to believe there are three things worth reconsidering. Not as wholesale indictments of how training is currently done, but as honest questions about whether what was borrowed is still serving the people the training is meant to produce.

One Pace for Everyone

The classroom model assumes a relatively uniform group moving through content together. In academic settings, that assumption is imperfect but workable. In certification prep, it breaks down faster and more visibly.

The cohorts I've run were never uniform. Students arrive with meaningfully different levels of practical experience, different learning speeds, and different gaps. The curriculum doesn't know this. It moves at one pace, delivers the same content in the same sequence to everyone, and expects the distribution of ability in the room to sort itself out.

What actually happens is more predictable than that. Advanced students reach exercises before the lecture that contextualises them has been delivered, make avoidable mistakes, and eventually ease off because the programme stops challenging them. Struggling students fall behind not because they lack ability but because the pace moves on before they've consolidated the previous topic. Both outcomes are real. Both are costly. And both are largely invisible in the aggregate metrics most programmes track.

I've written about this in more depth in a previous post on mixed-skill cohorts - but the short version is this: one pace can only ever serve the student it was designed for, and that student is a statistical average, not a person in the room.

Assessment That Doesn't Reflect the Full Picture

This is the issue I've come back to most consistently, and it's the one I think gets the least attention in training design conversations.

Most certification programmes assess students through assignment grades, completion rates, and one or two high-stakes evaluations. Each of those metrics tells you something. None of them tells you the whole story on their own - and the relationship between them matters as much as any individual number.

A student with high grades but a low completion rate isn't necessarily stronger than a student with lower grades and a high completion rate. The second may have moved faster, attempted more, and developed a broader base of practice - even if the average score is lower. A student who completes everything and scores well on written assignments, but can't explain their reasoning in a one-on-one conversation, is telling you something important that the grades didn't surface. A student who performs well on structured exercises but struggles when asked to do the same task independently, without the scaffolding the exercise provided, is a different problem again.

The assessment mechanism that most certification programmes borrowed from the classroom is optimised for consistency and scalability. It wasn't designed to triangulate between these different signals. The result is that instructors can finish a course believing their students are ready, only to discover - through a live interview, a practical test, or the student's performance in the actual job - that the picture the assessments painted was incomplete.

I've covered the visibility dimension of this problem in a previous post on running blind - but the assessment design question is distinct. It's not just about what you can see during training. It's about whether the things you're formally measuring are actually measuring what you think they are.

The Lecture-First Assumption

Most certification programmes lead with a lecture and follow it with practice. Content first, application second. This is the dominant sequence - it's how most of us were trained, and it's how most of us train.

I'm not convinced it's wrong. There are good reasons for it. New content needs a conceptual frame before it can be practised meaningfully. Sending students into a lab with no orientation often produces confusion rather than discovery.

But I do think it's worth asking whether lecture-first is the right sequence for every topic, every skill level, and every point in the programme - or whether it's simply the default that was inherited without much examination. There's a body of research on problem-first approaches, where students attempt a task before the concept is formally introduced, that suggests the struggle itself creates a kind of receptiveness that passive listening doesn't. I haven't run a controlled comparison in my own programmes. What I have seen is that students who encounter a problem before they understand it fully often engage with the subsequent explanation differently - with more genuine curiosity and more specific questions - than students who received the lecture first.

That's an observation, not a verdict. But it's one worth sitting with.

The Honest Question

Certification prep borrowed the classroom model because it needed a structure and the classroom model was available. That borrowing produced programmes that are, in many cases, genuinely good - well-designed content, substantial hands-on practice, experienced instructors.

The question isn't whether the model is broken. It's whether the specific things it carried over - uniform pacing, assessment mechanisms designed for scalability rather than accuracy, a delivery sequence inherited more from convention than evidence - are still the right defaults.

Some of them might be. But I think they deserve to be chosen deliberately, not simply inherited.