Two Types of People in AI: Metric Chasers and Structure Thinkers
I used to think the AI industry was all about who moves fastest. First to ship the model wins. First to launch wins. First to go viral wins. I carried that belief through a full year of building product—and something kept feeling off. The entire team was sprinting flat out, yet when I looked back, it felt like we’d been running in circles.
I recently read Donella Meadows’ Thinking in Systems. After finishing it, I realized we might be able to understand these problems through the lens of the fundamental laws governing how systems behave.
If you’ve ever wondered: Why do some products explode in week one and die within two months? Why does every individual on the team do nothing wrong, yet the result is still a mess? Why does the grind keep getting harder with no finish line in sight?
This book might give you an entirely new lens.
Here’s how I understand it. It boils down to one thing: Many problems in the AI industry aren’t technical problems—they’re systemic structural problems. And most people’s way of solving them only reinforces the very structure causing them.
Two Types of People, Two Different Games
In any workplace, there are really two kinds of people, two modes of operating: metric chasers and structure thinkers.
Metric chasers work by watching numbers: downloads, DAU, funding raised, shipping velocity. If the number goes up, it’s working. If it drops, pivot.
Structure thinkers ask a different layer of questions: Why did that number go up? How will the system react after it goes up? Is the thing I’m optimizing actually the most important thing?
On the surface, both types might be doing the same work. But the outcomes are completely different.
What Does Metric Chasing Look Like?
Trap #1: Whatever You Measure, the System Produces
Sora is a textbook example.
Its iOS app hit 627,000 downloads in its first week, a more explosive debut than ChatGPT's. Everyone thought it was the next phenomenon. Two months later, daily downloads had plummeted from 100,000 to under 25,000.
Why? Because from day one, Sora wasn’t optimized for “what can users actually do with this”—it was optimized for “how amazed are users when they see it.” But amazement is a one-shot experience. You don’t open an app every day just to be amazed again.
The system doesn’t care what you think matters. Whatever you measure, that’s what it produces. If you measure spectacle, what you harvest is a one-time firework.
The entire entertainment track in AI is sliding into the same trap. One founder put it bluntly: “People care less and less about completeness—they only care whether each segment delivers enough dopamine. Like short dramas: they get worse as they go on, but it doesn’t matter—you already got your hit along the way.”
Dopamine is a consumable. Products shouldn’t be.
Trap #2: Everyone Is Only Watching Their Own Metrics
There’s an even more insidious form of metric chasing: everyone on the team is chasing their own metric, everyone “did the right thing,” but the sum total is wrong.
I lived through this one. Product and engineering were fighting. Engineering wanted to build in-house, watching scalability metrics. Product wanted a third-party solution for a quick launch, watching MVP delivery timeline metrics.
Both sides had a point, because both were looking at different information and optimizing for different goals. Pick engineering’s approach? Delivery slips. Pick product’s approach? The MVP passes, but the company still needs to build in-house eventually—so the earlier investment was wasted.
In hindsight, it’s hard to say who was right or wrong.
Meadows calls this “bounded rationality”—everyone makes the most reasonable decision based on their own slice of information, but individual optima don’t add up to a global optimum.
The hardest bug to debug isn’t broken code—it’s a system structure that makes everyone “right,” yet the outcome is still wrong.
When metric chasers encounter this, they think: more meetings next time, better communication, everyone needs more global awareness. When structure thinkers encounter this, they think: how do we change the process so both teams are forced to see each other’s constraints before making decisions?
How Do You Switch from “Chasing Metrics” to “Seeing Structure”?
At this point you might ask: I get the theory—what do I actually do?
The book offers a few threads.
First, don’t hide information.
Her exact words: “You can’t distort, delay, or withhold information.”
This is precisely the remedy for that product-vs-engineering fight. The reason both teams made contradictory “correct decisions” is that each only saw their own slice of information. The solution isn’t more meetings—it’s making information flow structurally. For example, let engineering see delivery pressure during the product planning phase, and let product see long-term architecture costs during the technology selection phase.
Second, don’t run the whole thing in one shot—verify step by step.
Someone ran the numbers: an 18-step task where each step is 90% accurate succeeds end to end only about 15% of the time, because errors compound (0.9^18 ≈ 0.15).
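The arithmetic is worth seeing once, because it's just multiplication: every step has to succeed for the run to succeed, so per-step accuracy compounds. A minimal sketch:

```python
# Overall success of a sequential task compounds multiplicatively:
# every step must succeed for the end-to-end run to succeed.
def end_to_end_success(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

print(round(end_to_end_success(0.90, 18), 2))  # 0.15
```

The same formula shows why small per-step gains matter so much: at 99% per-step accuracy, the same 18-step task succeeds about 83% of the time.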
The evolution of AI Agents is the best illustration of this lesson. The early approach to building Agents was to stuff more instructions into the model and let it run the entire task in one shot. Developers tried it—accuracy actually dropped, and token costs went up 20%. What’s gaining traction now is Harness Engineering: adding constraints at the system level—code interceptors, automatic retries, step-by-step verification—pausing at critical checkpoints, letting a human glance over it, recalibrate, then proceed to the next leg.
It’s not about teaching the model to think more correctly; it’s about designing architecture that catches it when it thinks wrong.
Designing a fault-tolerant structure is far more realistic than cultivating a team of perfect individuals.
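Here's one way that fault-tolerant structure might be sketched in code. Everything below is illustrative, not from any real Agent framework: the step functions and the `verify` callback are stand-ins for a model call and a programmatic validator at each checkpoint.

```python
# Hypothetical harness loop: instead of letting the model run all steps in
# one shot, verify each step's output and retry a bounded number of times.
from typing import Callable, List

def run_with_harness(
    steps: List[Callable[[], str]],      # each step produces an output
    verify: Callable[[int, str], bool],  # checks one step's output
    max_retries: int = 2,
) -> List[str]:
    results = []
    for i, step in enumerate(steps):
        for _attempt in range(max_retries + 1):
            out = step()
            if verify(i, out):
                results.append(out)
                break
        else:
            # A failed checkpoint stops the run instead of letting the
            # error propagate silently into every later step.
            raise RuntimeError(f"step {i} failed after {max_retries + 1} tries")
    return results
```

The payoff is the same compounding math run in reverse: with two retries, a step that is right 90% of the time fails all three attempts only 0.1^3 = 0.1% of the time, lifting an 18-step run from roughly 15% to about 98% end-to-end success.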
Third, don’t assume—sense.
Meadows shared her insight from skiing: on the slopes, you can’t plan every turn from the top of the mountain. Snow conditions, gradient, speed—everything changes constantly. You can only watch what’s ahead, feel what’s underfoot, and keep adjusting. She said she later realized that management and decision-making work exactly the same way.
Sora’s problem was the opposite. The team assumed from the start that users wanted to be “amazed” by the technology, then charged ahead based on that assumption. If instead of chasing “first-week downloads,” they had skied the slope—feeling the terrain as they went, observing users’ real behavior, like what they actually used video generation for, where they dropped off, what scenarios brought them back—the product trajectory might have been completely different.
Looking back at all three points, they really converge on the same truth: you don’t need to push harder at every point in the system—you need to find the one right spot. Making information flow is a leverage point. Adding a layer of step-by-step verification is a leverage point. Changing the metric from “spectacle” to “retention” is a leverage point. Small changes, but they move entirely different things.
Chasing metrics is pushing harder. Seeing structure is finding leverage.
So, Back to Where We Started
I used to think the AI industry was a race of speed. Now I think speed is just an amplifier: if the direction is right, it amplifies success; if the direction is wrong, it amplifies disaster.
Sora didn’t die because the technology was bad—it was because they never sensed what users truly needed. Agents aren’t unstable because models are stupid—nobody designed a fault-tolerant system for them. Team friction isn’t because anyone lacks effort—the information structure only let each person see their own piece.
These problems can’t be solved by “going faster.”
Speed determines how far you can run. Structure determines whether you’re running in the right direction.