The Two Rooms Every Instructor Reads
When you're running a training session in a physical classroom, you're doing two things at once without even thinking about it.
You're reading the lecture room - faces, body language, who's with you and who drifted off three slides ago. That feedback is useful. It tells you when to slow down, when to repeat, when to cut the theory and get to an example.
But the lecture room isn't where the real signal lives.
The real signal is in the exercise room. And that one only works because you're physically present in it.
What Walking Between Desks Actually Does
There's a version of instructor presence that looks passive - just moving around the room while students work. It isn't.
Walking between students during exercises worked as a trigger. Students asked questions they would never have raised a hand for in front of the group. Small things, embarrassing things, the kind of confusion that feels too basic to admit publicly - all of it surfaced naturally when you stopped next to someone's screen.
More importantly, you could see the approach. Not just whether someone finished an exercise, but how they were moving through it. The hesitation before a certain step. The way someone copied a block they didn't understand and kept going. The student who had the right answer but for completely the wrong reason.
That's the difference between knowing a student completed the work and knowing whether the skill actually transferred.
The Loss That Doesn't Get Talked About Enough
When training moved online - I only ran fully remote for a short time, during COVID, but it was enough to feel exactly what disappeared - the conversation focused on the obvious loss. You can't read faces on a video call. Engagement signals fade out. The lecture room gets harder to read.
That's real. But it's the surface loss.
The deeper loss is the exercise room. You can't walk between screens. You can't see the approach. You can't intercept someone mid-wrong-turn before they've built twenty minutes of work on a flawed assumption. That window - the one where a quiet observation or a well-timed question can redirect someone before they've dug in - simply doesn't exist.
And I'd argue this isn't only a remote training problem. Any programme that leans heavily on an LMS as its primary measurement of progress, even in physical training, starts to lose the same signal. Submission completed. Status green. Move on. The platform tells you one story. What's actually happening in the work may be another.
The Cohort That Looked Fine
I ran a cohort where everything looked good from the outside.
Submission quality was solid. Pace was on track. The LMS showed completion across the board. By every metric I had access to, progress was happening.
Then I ran one-on-one interviews and asked students to write code they'd already submitted in previous exercises. Simple tasks. Work they'd technically "completed."
Many couldn't do it.
They had used AI to complete the exercises. The LMS registered the submissions. Nobody flagged anything. The actual skill hadn't transferred, and I had no way of knowing until I deliberately manufactured a moment of visibility - sat with each student, live, and asked them to do the work in front of me.
That's exactly what the exercise room is supposed to provide by default.
What followed wasn't clean. We banned AI during exercises, extended the module, went back to basics, and introduced new exercises specifically for solo practice. All of it cost time, cohort momentum, and credibility. The rework was significant, not just for me, but for the students who had to unlearn habits they'd already settled into.
The problem wasn't that they used AI. The problem was that the signals I was reading - completion rates, submission quality, LMS status - looked like progress. They weren't. And I had no way to know until I stopped relying on the platform and created the visibility myself.
The Signal Was Wrong
In a physical lab session, I'd have had a chance to catch this earlier. Not because I'd have seen them using AI, but because I'd have seen the approach. The rhythm would have been off. The ownership of the work wouldn't have been there in the way it is when someone's actually working through a problem themselves.
The exercise room gives you that - but only if you're in it. Only if you're moving through it, watching, asking, creating the conditions where the real work has to happen in front of you.
Most LMS platforms give you nothing between "assigned" and "submitted." That gap - everything that happens in the middle, the actual process of learning - is invisible unless you deliberately build something to see it.
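To make "building something to see it" concrete, here is a minimal sketch - entirely hypothetical, since no LMS I know of exposes data in exactly this shape - of the idea: log lightweight checkpoint events during lab time, then flag submissions that arrive with almost no visible process behind them. The event names, thresholds, and data shapes are all invented for illustration.

```python
# Hypothetical sketch: event names and data shapes are invented for
# illustration, not taken from any real LMS API.
from datetime import datetime, timedelta

def flag_suspicious(events, min_checkpoints=3, min_span=timedelta(minutes=15)):
    """Flag students whose exercise shows a submission but little visible
    process: too few intermediate checkpoints, or all activity compressed
    into a tiny window."""
    flagged = []
    for student, evts in events.items():
        checkpoints = [t for kind, t in evts if kind == "checkpoint"]
        submitted = any(kind == "submitted" for kind, _ in evts)
        if not submitted:
            continue  # nothing handed in yet; nothing to judge
        span = (max(checkpoints) - min(checkpoints)) if checkpoints else timedelta(0)
        if len(checkpoints) < min_checkpoints or span < min_span:
            flagged.append(student)
    return flagged

t0 = datetime(2024, 3, 1, 9, 0)
events = {
    # Steady checkpoints over 40 minutes, then a submission: looks like process.
    "alice": [("checkpoint", t0),
              ("checkpoint", t0 + timedelta(minutes=20)),
              ("checkpoint", t0 + timedelta(minutes=40)),
              ("submitted", t0 + timedelta(minutes=45))],
    # A submission with no visible process behind it.
    "bob": [("submitted", t0 + timedelta(minutes=5))],
}
print(flag_suspicious(events))  # -> ['bob']
```

A flag like this isn't proof of anything - it's the remote stand-in for noticing, mid-walk, that someone's rhythm is off, and it only tells you where to go look.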
What Are You Actually Measuring?
I don't have a clean answer to how to fully solve this. But I do know the question I keep coming back to: what are you actually measuring during your lab sessions - and does it tell you what you think it does?