Tyler Church 🌵👨‍💻

My adventures in life, love, tech, and business.

The Most Probable Function

An AI rant has been fermenting inside me for a long time. Today is the day it bubbles up and overflows into the universe.

I present to you, The Most Probable Function:

// Helper function to format formula for display
function formatFormulaForDisplay(formula: any): string {
  if (!formula) return "No formula defined";
  if (typeof formula === "string") {
    return formula;
  }
  if (typeof formula === "object") {
    // If it's a formulaSolo type, extract the formula string
    if (formula.type === "formulaSolo" && formula.formula) {
      return formula.formula;
    }
    // If it's a simple expression with type and content
    if (formula.type === "expression" && formula.content) {
      return formula.content;
    }
    // If it has a name or description, show that
    if (formula.name) {
      return (
        formula.name + (formula.description ? ` - ${formula.description}` : "")
      );
    }
    // If it has a formula property directly
    if (formula.formula) {
      return formula.formula;
    }
    // Fallback to formatted JSON
    return JSON.stringify(formula, null, 2);
  }
  return String(formula);
}

The AI needed to display a formula. The pool of statistics roiling inside it coalesced into a torrent of tokens that would, by Jove, display a formula.

The clouds parted, the sun shone, a formula appeared on screen.

Oh and also sometimes we display JSON to the user.

Our users are not computer programmers.

"Huh. That's weird," I said to myself, while reworking this UI. Why is this so complicated? Why do we fall back to JSON?

I clicked around for a while. Yep, there's some JSON.

Weeks earlier, I had reviewed the commits that introduced this function, and hadn't noticed it. Why not?

Because it looks probable.

In isolation, the code above is mundane. I've seen untold thousands of JavaScript functions written this way. Sometimes you have to be defensive, sometimes you want to leverage the same function to process multiple shapes of similar data. I saw the general structure of it, and my mind turned off and went "yep that's probably fine" and moved on to juicier code review targets.

But I was reworking this piece of code, and I had to understand it. And that's when the layers started unravelling.

There's a litany of things to say about this code:

  1. The entire codebase is well-typed; there's no need for the any argument, since all relevant types can be listed.
  2. This function is used in one place. That argument is only inhabited by a single type. Sure would've been nice if the AI wrote that down.
  3. The argument is always an object, never null or undefined, so we can be significantly less defensive.
  4. The string "expression" does not exist in the entire codebase. That comparison can never succeed.
  5. In our system, formulas have titles, not names, and no description field.
  6. The alternative to a formulaSolo is a formulaChoice, and the user wants to see what those choices are, not the JSON representation of them.

We spent 36 lines wasting our time defending against problems that don't exist, and then proceeded to do the wrong thing.

I'm choking back tears.

The correct implementation is 10 lines:

function formatFormulaForDisplay(formula: ReleaseProfileFormula): string {
  if (formula.type === "formulaSolo") {
    return formula.formula;
  } else {
    const choices = formula.choices
      .map((c) => `${c.title}: ${c.formula}`)
      .join("\n\n");
    return `Question: ${formula.question}\n\n${choices}`;
  }
}
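Part of why the short version holds up is the type. Here's a minimal sketch of what ReleaseProfileFormula presumably looks like, inferred from the code above — the field names match the function, but the type definition and sample values are my assumptions, not the real codebase — with the function restated so the snippet stands on its own:

```typescript
// Inferred discriminated union -- the actual type in the codebase may differ.
type ReleaseProfileFormula =
  | { type: "formulaSolo"; formula: string }
  | {
      type: "formulaChoice";
      question: string;
      choices: { title: string; formula: string }[];
    };

function formatFormulaForDisplay(formula: ReleaseProfileFormula): string {
  if (formula.type === "formulaSolo") {
    // Narrowed: the compiler knows `formula` has a `formula` string here.
    return formula.formula;
  } else {
    // Narrowed to formulaChoice: `question` and `choices` are guaranteed.
    const choices = formula.choices
      .map((c) => `${c.title}: ${c.formula}`)
      .join("\n\n");
    return `Question: ${formula.question}\n\n${choices}`;
  }
}

// Hypothetical formulaChoice value:
const example: ReleaseProfileFormula = {
  type: "formulaChoice",
  question: "Which release curve?",
  choices: [
    { title: "Fast", formula: "t * 2" },
    { title: "Slow", formula: "t / 2" },
  ],
};

console.log(formatFormulaForDisplay(example));
// Prints:
// Question: Which release curve?
//
// Fast: t * 2
//
// Slow: t / 2
```

Because the union is discriminated on type, checking formula.type narrows the whole object: every field access is checked at compile time, and none of the 36 lines of runtime defensiveness is needed.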

I write that, and feel a bit better about the code. But I'm scared.

I've seen this pattern a lot. AI writes things that look like they're probably right. And so we're inclined to turn off our brains and let it slide.

Recently a coworker shared with me how his AI-assisted autocomplete wrote a whole chunk of code for him. It looked almost exactly like the code he had in his head.

Almost exactly. He then spent half an hour debugging it. There was a subtle difference between what was in his head, and what the autocomplete actually put into the file, but his brain glossed over it, because it was so close.

AI tools have gotten very good. I find myself writing a few characters and then waiting for the autocomplete to kick in, and usually it's dead on. Trying out different UI designs has never been faster for me.

But I keep having these experiences that drive me a bit insane.

Over and over again, I find functions that I can collapse into 1/3rd their size. AI can generate a lot of code quickly, but it's often just so much noise.

A UI looks subtly off, and I go to remove what looks like an offending CSS class, only to find it has no rules defined; it never existed.

I worry about where my eyes glaze over in PR reviews. I worry about my brain waiting for the autocomplete. Will I see what is subtly wrong? Or will I see what I expect to see?

The AI generates what's probable, but the machines we work with are literal. As programmers, our job is to describe in painstaking detail everything the computer must do.

I may turn my autocomplete off.

I fear we're confusing velocity with progress.