The Ultimate Hammer: Why Programmers Can't See Beyond Computation. A Dialogue.

We assume intelligence is computable not because we’ve proven it, but because we’ve lost the ability to conceptualize it any other way. We are no longer using computation to model reality; we are forcing reality to fit the model.

  • Alex: Senior engineer at a leading AI lab, 15 years of experience in machine learning
  • Morgan: Philosopher of science with a background in biology

Alex: Look, I know you philosophers love to complicate things, but the evidence is pretty clear. Demis Hassabis said it best—we haven't found anything in the universe that isn't computable. From protein folding to neural networks, it's all just algorithms. We're getting exponentially better at this. GPT-4 can write essays, AlphaFold solved protein structure, AlphaGo beat the world champion. Give us another decade and we'll have AGI. The brain is just a really complex computer we haven't fully reverse-engineered yet.

Morgan: [smiling] That's a remarkably confident position. And I understand why—the progress has been stunning. But may I ask you something? When you say "everything is computable," what exactly do you mean by that?

Alex: I mean that any process that follows rules can be simulated by a Turing machine. And everything we've studied—from physics to chemistry to neurons firing—follows rules. Therefore, it's computable. If we can't compute something yet, it's because we haven't found the right algorithm or don't have enough compute power. Not because it's fundamentally uncomputable.

Morgan: So when you look at the world, you see algorithms?

Alex: Pretty much, yeah. I mean, what else would it be? Either something follows rules or it's random. If it follows rules, we can code it. If it's random, we can use probabilistic methods. Either way, computation handles it.

Morgan: [nodding thoughtfully] That's fascinating because... you know, I've heard almost exactly that level of certainty before. Not from programmers, but from other brilliant people working with the most advanced technology of their time.

Alex: What do you mean?

Morgan: Well, from antiquity through the seventeenth century, when hydraulic engineering was cutting edge—aqueducts, water clocks, elaborate fountains—the leading thinkers were convinced that the body was a plumbing system. Galen described "animal spirits" flowing through the nerves like water through pipes. Descartes imagined the pineal gland as a kind of valve that redirected fluid flow to trigger different thoughts.

Alex: [chuckling] Okay, but that's obviously wrong. They didn't have microscopes. They didn't understand neurons or electricity.

Morgan: Exactly. But here's the thing—they were just as certain as you are now. And they had good reasons! Hydraulic systems could do genuinely complex things: Hero of Alexandria's pneumatic automata, water clocks that could move statues and play music. If you could build such sophisticated machines with pipes and pressure, why wouldn't the body work the same way?

Alex: Sure, but they were working with a primitive metaphor. Computation is different. It's not just one technology among many—it's mathematically proven to be universal. Turing showed that.

Morgan: [leaning forward] Ah, now we're getting somewhere. That's actually the first reason the computational lens is so particularly sticky. Can I walk you through this?

Alex: Go ahead.

The Universal Turing Machine: The "Infinite Hammer"

Morgan: Most tools in history had obvious limits. A hydraulic system can only push so much fluid. A clock can only turn so fast. But Turing proved something remarkable—that a Universal Turing Machine can simulate any other algorithmic process.

Alex: Right, that's what makes computation special. It's not just another metaphor.
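An aside for the programmers in the audience: universality is easy to see in code. A single fixed interpreter can run any machine handed to it as data. The sketch below is a minimal Turing machine runner; the encoding (a dict mapping (state, symbol) to (write, move, next state)) is an illustrative choice, not a standard one.

```python
# A minimal Turing machine interpreter: one fixed program that runs
# ANY machine passed to it as data -- the essence of universality.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    # read back the contiguous tape contents, dropping blanks
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# An example machine, expressed as data rather than code:
# flip every bit, then halt when a blank is reached.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "1011"))  # -> 0100
```

The interpreter never changes; only the data does. That is the property Morgan is about to probe: because the one fixed tool handles every rule table you feed it, it is tempting to conclude it handles everything.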

Morgan: But here's where the trap emerges. Because computation is theoretically universal—because it can simulate any rule-following process—you start to believe you're not just using a tool. You believe you're using the ultimate tool. The final framework that captures reality itself.

Alex: [frowning] I don't see the trap. If computation really can simulate anything...

Morgan: Can it? Or does it simulate anything we can express as rules? Think about it—your logic becomes circular. You say: "If a process follows rules, computation can simulate it. We observe natural processes. Therefore, they must follow rules. Therefore, they're computable." But you've already decided what counts as a valid explanation—only rule-following processes. Anything that doesn't fit that framework gets dismissed or ignored.

Alex: But what else would there be? If something doesn't follow rules, it's just random, and we can model randomness.

Morgan: What if there are processes that are neither rule-following nor random? Processes that are genuinely creative—that generate new rules rather than following existing ones?

Alex: [skeptically] Like what?

Morgan: We'll come back to that. But first, let me show you the second reason computation feels so complete. It has to do with how programming reshapes your perception.

The Granularity of Discrete Logic

Morgan: Tell me, when you're designing a system, what's the first thing you do?

Alex: You break down the problem. Define your data structures, your objects, your functions. You decompose complexity into manageable pieces.

Morgan: Exactly. Programming requires breaking down the continuous, messy world into discrete variables and clear relationships. You've probably done this thousands of times. Tens of thousands.

Alex: It's called good engineering. You can't build complex systems without decomposition.

Morgan: Absolutely. But here's what happens—after 10,000 hours of translating reality into code, your brain starts performing a "pre-processing" step automatically. You stop seeing a forest; you see an array of tree objects. You don't see a conversation; you see a sequence of state transitions. You don't see love; you see a biochemical feedback loop with specific parameters.

Alex: [defensive] That's just... understanding how things actually work. Seeing beneath the surface.

Morgan: Is it? Or is it imposing a specific structure—the structure of discrete logic—onto phenomena that might not naturally have that structure? Let me give you an example. When you experience the color red, you can describe it computationally: wavelength of 650 nanometers, activation of L-cones in the retina, neural firing patterns in V4. All computable. But is that description the same as the experience itself?

Alex: Well, no, but the experience is just what it feels like to run that computation in a biological substrate. The "what it's like" is just an emergent property.

Morgan: Ah, but you just did it again—you assumed the experience must be an emergent property of computation because computation is your framework. What if the relationship is reversed? What if the computation is an abstraction we impose on the experience, not the other way around?

Alex: [pausing] That... doesn't make sense to me. How could computation be an abstraction? Computation is the most precise thing we have.

Morgan: Which brings me to the third reason the computational lens is so hard to escape.

The Erasure of Hardware

Morgan: In computer science, you're taught something fundamental: substrate independence. The same algorithm can run on silicon, vacuum tubes, or even water pipes with the right setup. The hardware doesn't matter; only the logical structure matters.

Alex: Right, that's one of the most powerful principles in CS. It's why we can write portable code.

Morgan: And it's brilliant—for engineering computers. But now apply that same thinking to biology. What happens?

Alex: You... focus on the algorithm. The information processing. You treat the physical details as implementation details.

Morgan: Exactly. You start ignoring the "wetness" of life. But here's the problem—in living organisms, the hardware is the software. A neuron doesn't just process a signal like a logic gate. It grows. It changes its physical shape. It consumes energy in a way that's inseparable from the information processing. The metabolic state, the physical chemistry, the three-dimensional protein folding—these aren't just "implementation details" you can abstract away. They're intrinsic to what the system is doing.

Alex: [thoughtfully] But we can model all of that computationally too. We have molecular dynamics simulations...

Morgan: Which take supercomputers days to simulate microseconds of protein behavior. And even then, we're still imposing discrete timesteps on fundamentally continuous processes. We're forcing the biology into computational form. And sometimes... it just doesn't fit.

Alex: [more uncertain now] What do you mean, doesn't fit?

Morgan: Think about a neuron. You have processes happening at microsecond timescales, second timescales, hour timescales, day timescales—all simultaneously, with no global clock to synchronize them. An astrocyte's metabolic state, shaped by hours of activity, changes what a millisecond calcium influx means. The same electrical event has completely different meanings depending on context. And that context is itself made of traces of previous contexts, all the way down. There's no ground level to formalize. How do you put that in a computational model?

Alex: You... use different timesteps for different processes. We do multi-scale modeling.

Morgan: But you still need to specify how those scales couple. You need meta-rules. And then meta-meta-rules for when those rules change. You fall into infinite regress. The computational framework keeps demanding: "Give me the base case. Give me the fixed rules." But biology keeps saying: "There is no base case. The rules are traces of other rules, all the way down."
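Morgan's point about hand-specified coupling can be made concrete with a toy. The sketch below is purely illustrative—every constant and equation in it is invented for the example—but notice how many arbitrary choices the modeler must make just to let two timescales talk to each other.

```python
import math

def stimulus(t):
    # Hypothetical input drive; any bounded signal works for this sketch.
    return 1.0 + 0.5 * math.sin(10 * t)

def simulate(steps=1000, dt=0.001):
    fast, slow = 0.0, 1.0
    for t in range(steps):
        # Fast rule: the slow state gates the fast dynamics.
        # The multiplicative coupling is OUR choice; nothing in the
        # framework says it should be multiplication.
        fast += dt * (slow * stimulus(t * dt) - fast)
        # Slow rule, updated once per 100 fast steps. The ratio 100,
        # the rates 0.1 and 0.01 -- all meta-rules fixed by hand,
        # and each one invites a meta-meta-rule for when it changes.
        if t % 100 == 0:
            slow += 0.1 * (1.0 - slow) - 0.01 * fast
    return fast, slow
```

Every number here had to come from outside the model. That is the regress Morgan is pointing at: the framework runs fine once the coupling is specified, but it cannot tell you what the coupling should be.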

Alex: [frustrated] Okay, but we're still making progress. AlphaFold, GPT-4, AlphaGo—these work. If computation was fundamentally wrong, we wouldn't be succeeding.

Morgan: Oh, I'm not saying computation doesn't work! It's incredibly powerful. But there's a difference between saying "computation is a useful tool for modeling certain aspects of intelligence" and saying "intelligence is computation." The first is pragmatic; the second is metaphysical.

The Cognitive Hammer

Morgan: You know the old saying—if all you have is a hammer, everything looks like a nail?

Alex: Sure, but I don't see how that applies. We're using the most powerful tool humans have ever developed.

Morgan: That's exactly why it applies. Because the computational hammer is so powerful, so universal in its domain, it stops feeling like a tool. It starts feeling like the structure of reality itself. You've spent 15 years translating the world into code. You've succeeded at hammering thousands of complex things into computational form. After that much training, you develop an induction: if I can hammer this, and this, and this... then everything can be hammered. You just need to hit it at the right angle.

Alex: [defensively] So you're saying I'm biased because of my profession?

Morgan: I'm saying we're all biased by our tools. But programmers face a unique challenge because their tool is theoretically universal. When a hydraulic engineer from the Renaissance couldn't explain something with fluid dynamics, it was obvious the metaphor had limits. But when you can't explain something computationally, you don't think "maybe computation has limits." You think "I haven't found the right algorithm yet." The hammer is so good that you can't see it's still a hammer.

Alex: [long pause] You know, when Hassabis talks about the world being computable, he’s not just making that up. He’s basing it on evidence. Nearly everything we’ve scrutinized closely enough has turned out to be computational.

Morgan: It’s turned out to be things you could analyze with computational tools. But what about the things that slip through the mesh? Take the "metabolic veto"—where a synapse refuses to fire despite meeting every computational condition because it simply cannot afford the ATP cost. That isn’t a computation failing to execute a rule. That is biology saying "no" based on a metabolic reality that isn't pre-specified in any algorithm.

Alex: But we could add metabolic state as a variable...

Morgan: And then you'd need to model when that variable matters, which depends on other traces, which depend on... you're always adding epicycles. The Ptolemaic astronomers could predict planetary motion with incredible accuracy by adding enough epicycles. But they were wrong about the fundamental model. Sometimes accuracy isn't the same as truth.

Alex: [quietly] So what are you saying? That computation is like... the geocentric model?

Morgan: I'm saying it might be. Look at the history. Every era uses its most advanced technology as the metaphor for consciousness:

  • When hydraulics were cutting-edge, the mind was fluid flowing through pipes
  • When clockwork automata were revolutionary, the mind was gears and springs
  • When telegraphy connected the world, the mind was a vast electrical network
  • When thermodynamics dominated physics, the mind was a heat engine managing libidinal energy

Now we have computers, and the mind is software running on neural hardware. Each metaphor was productive for a while. Each seemed obviously correct to the people living in that era. And each eventually reached its limits.

Alex: [troubled] But... those were wrong because they didn't have the right science yet. We actually understand neurons now. We have the math. We have the Church-Turing thesis.

Morgan: And the clockmakers actually understood gears and springs. And the telegraph engineers actually understood electrical signals. Having the math for computation doesn't mean everything is computation any more than having the math for gears means everything is clockwork.

Alex: So what's the alternative? If intelligence isn't computation, what is it?

Morgan: Maybe that's the wrong question. Maybe intelligence isn't any one thing that can be captured by a single framework. Maybe it's something that emerges from the interaction of an embodied system with its environment, where the embodiment—the wetness, the metabolism, the physical constraints—is essential, not incidental.

Alex: That sounds... vague. Not scientific.

Morgan: It sounds vague because we don't have good formal tools for it yet. Just like "attractors" and "chaos" sounded vague before we had the mathematics of dynamical systems. Just like "evolution" sounded vague before we had genetics. We're so used to the precision of computation that anything that doesn't fit that framework feels imprecise. But maybe the imprecision is in our tools, not in nature.

Alex: [long silence, then slowly] You know what bothers me most about what you're saying? It's not that you might be right. It's that if you are right, I don't know what to do with that information. Computation is what I know. It's how I think. If it's not the right framework... I don't know what framework would be.

Morgan: [gently] That's the cognitive hammer showing itself. When you've trained your mind to see everything through one lens, it becomes genuinely difficult to imagine other lenses. Not because you're not smart enough—you're clearly brilliant. But because the lens has become part of your cognitive architecture. It's like asking a fish to imagine what "dry" means.

Alex: So how do you escape it?

Morgan: You start by recognizing you're in it. You start by noticing when you're assuming computation is the territory rather than the map. You start by taking seriously the phenomena that don't fit cleanly—the metabolic veto, the temporal incoherence, the traces without ground, the ability to refuse. Instead of treating those as bugs in our models, we treat them as signals that maybe we need a different kind of model entirely.

Alex: [uncomfortable laugh] I don't like how much sense you're making right now. It feels like you're pulling the floor out from under me.

Morgan: That's exactly what it feels like when a paradigm starts to crack. It felt that way when people realized the Earth wasn't the center of the universe. When they realized time wasn't absolute. When they realized living things weren't designed. The ground shifts beneath your feet. But that's also when the most interesting discoveries happen—when you're forced to think in ways you never have before.

Alex: [after a long pause] Okay. Say I accept that I might be trapped in a computational lens. What do I do about it? I can't just stop being a programmer.

Morgan: You don't have to. Use your hammer—it's an incredible tool. But stay aware that it's a hammer. When you're modeling a system and something doesn't fit, instead of immediately thinking "I need a better algorithm," also ask yourself: "Is this system doing something that isn't computational?" Let the biology teach you rather than forcing it into your framework.

Alex: [smiling slightly] That's a hell of a recruiting pitch for philosophy.

Morgan: [smiling back] I prefer to think of it as a recruiting pitch for intellectual honesty. The smartest people I know are the ones who can hold their own expertise lightly—who can say "I'm an expert in X, and I've learned that X doesn't explain everything." That takes more courage than doubling down on what you already know.

Alex: You know what? I need to think about this. Really think about it. Maybe go back and look at those papers on the neuron from a different angle. See what I've been abstracting away.

Morgan: [grinning] That's the spirit.
