The Human in the Loop Isn't Optional

Intel had a great marketing trick: they made you care about a chip you’d never see. The sticker was on the outside of the laptop. The processor was sealed inside. You had no idea what it looked like or how it worked — but “Intel Inside” meant something. It meant the thing was engineered well. It meant someone was accountable for the core.

I’ve been thinking about a version of that for AI workflows. The label I want to see on my work is Alex Inside. Not because I wrote every line, but because there’s a human behind the judgment, the direction, and the calls that actually matter. The AI is impressive. But someone still has to be the processor.

Here’s the thing though: that label is a lot easier to slap on than to earn.


The part where I lost the thread

A while back, I handed off a significant chunk of a project to AI. Not trivial stuff: the kind of thing that shapes the core behavior of what you’re building. It went smoothly. The output was good. I moved on.

A few weeks later, I needed to evolve it. Add a new constraint. Rethink the logic given some new information. And I realized, with a kind of quiet dread, that I couldn’t quite do it. Not because the AI had written bad code or bad reasoning (it hadn’t). But because I had never actually owned any of it. I had approved it, not understood it. I had agreed, not decided.

“How do we evolve the algorithm if we didn’t write it, or even carefully ask for it?”

That question sat with me. Because the drift isn’t dramatic. You don’t suddenly find yourself unable to function. It’s subtle. On one project you hand off 20%. On the next, 40%. Then you’re reviewing AI output more than you’re doing actual thinking, and you’ve quietly outsourced the part of the job that keeps you sharp. The part that builds intuition. The part that lets you change direction when direction needs to change.

The algorithm doesn’t drift. You do.


AI for AI’s sake

Here’s a flavor of drift that’s less personal and more systemic: the reflexive addition of AI to things that didn’t need it.

Someone in the meeting asks, “Should we add a chat interface so users can query their data?” And because it’s 2026 and you’ve got an LLM API key sitting right there, the answer is almost always “yes, obviously.” But it’s worth actually asking: does this help?

As information gets easier to find — and it has never been easier — the real skill isn’t retrieval anymore. It’s distillation. Not asking Jeeves, but knowing what question to ask and what to do with the answer. A chat interface that lets users ask “what happened last Tuesday?” is impressive. But if what they actually need is a clean weekly summary that surfaces the three things they should act on, the chat interface is a party trick.

Someone still has to decide what matters. Someone still has to make the judgment call about what the user actually needs versus what they asked for. That’s not something you can put in a prompt.

The best AI integrations I’ve seen feel invisible. They’re doing real work somewhere under the hood, and the person using the product just gets a cleaner experience: a product that feels like it actually understood what they needed. The worst ones feel like AI with a voice skin on top. Someone decided the experience should be AI because AI is the thing right now, not because it was the right answer.


Where are you in the loop?

There’s a version of “human in the loop” that’s mostly decorative. You’re in the loop in the sense that you clicked approve. You read the summary. You nodded. But you’re not actually doing anything the loop couldn’t do without you. You’re just there for liability reasons.

And then there’s a version that actually matters.

The difference is roughly this: asking and agreeing vs. directing, evaluating, and evolving.

Asking and agreeing is: “Write me a strategy for X.” Read it. Looks reasonable. Ship it.

Directing, evaluating, and evolving is: knowing enough about X to shape the input, reading the output with a critical eye, pushing back on things that don’t sit right, and building on the result in a way that adapts when the situation changes. It’s the difference between having a strong opinion you can defend and having a summary you half-remember.

This matters even more when other humans are involved. Clients, users, teammates: people don’t just want polished output. They want to feel heard. Trust doesn’t transfer through the AI’s eloquence. It transfers through the sense that a real person thought about their specific situation and gave a damn. A beautifully worded recommendation that clearly came from a template (even a good template) does not have the same weight as a recommendation from someone who was clearly paying attention.

You can automate a lot of the generation. You can’t automate the relationship.


Alex Inside

So here’s where I’ve landed.

AI is embedded in nearly everything I do now. Research, drafts, code, proposals. It’s in the stack whether I name it or not. And I think that’s fine. More than fine, actually. The output is often better, faster, more thorough than what I’d have produced grinding through it alone.

But the thing I keep coming back to is: what’s the sticker on the outside?

“Intel Inside” worked because Intel stood for something. The chip did something specific and did it well, and the company was accountable for that. The sticker was a promise.

“Alex Inside” is a promise too. It means I directed this. I evaluated it. I could defend the reasoning. I can evolve it when the situation changes. I’m not the person who approved the output — I’m the person who owns it.

The AI is doing real work in there. But I’m the processor.

Using AI well is less about what you hand off and more about staying the kind of person who can hand things off and still know what’s happening. Because a loop without a human who’s actually in it isn’t a loop — it’s just a machine running until it breaks, with no one quite sure how to fix it.

Don’t lose the thread.