Central banks are obsessed with data. That’s nothing new. For decades, the Federal Reserve and the Bank of England have pored over spreadsheets, employment figures, and consumer price indices to figure out if they should hike rates or hold steady. But lately, there’s a louder push to let machines take the wheel. The argument is that algorithms can process information faster than any human committee ever could. They don’t get tired. They don’t have political biases.
I think that's a dangerous mistake.
Interest rate decisions aren't just math problems. They’re social contracts. When the Fed moves the needle, it’s not just tweaking a variable in a vacuum. It’s affecting whether a family can afford a mortgage or if a small business has to lay off half its staff. AI is great at spotting patterns in historical data, but it’s historically terrible at navigating "black swan" events or understanding the nuance of human panic. If we outsource the backbone of our economy to a black box, we lose the accountability that keeps the system from collapsing.
The hallucination of certainty in economic modeling
We’ve all seen AI get things wrong. Sometimes it’s a funny mistake in a chat window, but in the world of macroeconomics, a "hallucination" can trigger a multi-billion dollar sell-off. The problem with using large-scale models for interest rate decisions is that these models are backward-looking by design. They learn from what happened in 1970, 1990, and 2008.
But the world doesn't always repeat itself.
Look at the post-pandemic recovery. Standard economic models predicted a massive, sustained surge in unemployment that didn't materialize the way people expected. If an AI had been running the show based on 20th-century data, it might have kept rates at zero for too long or spiked them so hard it caused a needless depression. Machines love stability. They crave predictable inputs. Our current global economy is anything but predictable. We’re dealing with deglobalization, shifting energy markets, and demographic collapses. An algorithm looks for the "average" outcome. Real life happens at the extremes.
Why human judgment still beats the algorithm
The best central bankers aren't just statisticians. They’re part psychologists. They have to read the room. Jerome Powell and Christine Lagarde aren't just looking at the Consumer Price Index (CPI); they’re watching "inflation expectations." That’s a fancy way of saying they’re trying to guess what people think is going to happen.
If people believe prices will go up, they demand higher wages, and prices actually do go up. It’s a self-fulfilling prophecy. An AI can track the data after it happens, but it can't sit across from a union leader or a CEO and feel the tension in the air. It doesn’t understand the "vibecession"—that weird phenomenon where the data says the economy is great but everyone feels like they’re struggling. Humans can weigh the qualitative stuff. Machines can’t.
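To see how that loop feeds on itself, here’s a back-of-the-envelope sketch in Python. Every number and function name in it is invented; it’s just the expectations mechanism reduced to a few lines.

```python
# Illustrative only (all parameters invented): wages chase expected inflation,
# realized inflation chases wages, and expectations reset to whatever just happened.

def expectations_spiral(periods=8, expected=0.02, passthrough=0.8, shock=0.01):
    path = []
    for _ in range(periods):
        actual = passthrough * expected + shock  # wage demands push prices up
        expected = actual                        # people now expect the new rate
        path.append(round(actual, 4))
    return path

print(expectations_spiral())
# Drifts from about 2.6% toward shock / (1 - passthrough) = 5%, even though
# nothing "real" changed except what people believed would happen.
```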
The accountability gap
Who do you blame when an algorithm breaks the housing market?
Right now, if the Fed messes up, we know who to call on the carpet. Congressional hearings might be theater, but they serve a purpose. They force the decision-makers to justify their logic in plain English. If a machine makes the call, the explanation is buried in billions of weighted parameters. "The model said so" isn't a good enough reason when a million people lose their jobs. We need humans in the loop because humans can be held responsible.
The danger of feedback loops and flash crashes
Financial markets already move at light speed. High-frequency trading (HFT) is basically AI fighting other AI for fractions of a penny. When you introduce AI into the actual policy-making side, you risk creating a feedback loop that nobody can stop.
Imagine an AI central bank that reacts to market volatility by instantly adjusting rates. The market reacts to the rate change, the AI reacts to the market’s reaction, and suddenly you have a flash crash that wipes out retirement accounts in three minutes. Humans provide friction. Friction is usually seen as a bad thing in tech, but in the economy, friction is what prevents total chaos. We need that "wait a second" moment that only a group of people sitting around a table can provide.
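Here’s a toy version of that loop, with every parameter made up. The point isn’t the numbers; it’s that an instant, full-strength response makes policy and volatility chase each other, while a bit of gradualism lets things settle.

```python
# A toy sketch of the policy/market feedback loop described above.
# All numbers are invented; nothing here models a real central bank.

def run(steps=12, response=1.0):
    """response = fraction of the rule's suggested move taken each step.
    1.0 mimics an automated system acting instantly and in full;
    0.25 mimics a committee that moves gradually between meetings."""
    neutral, rate, vol = 0.05, 0.05, 0.02
    path = []
    for _ in range(steps):
        suggested = neutral + 2.0 * vol       # rule: hike when volatility rises
        move = response * (suggested - rate)  # how much of the move is taken now
        rate += move
        vol = 0.01 + 1.5 * abs(move)          # market whipsaws on policy surprises
        path.append(round(rate, 3))
    return path

print(run(response=1.0))   # full-speed loop: rates and volatility chase each other
print(run(response=0.25))  # gradualism ("friction"): the loop settles down
```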
Data quality is the ghost in the machine
The phrase "garbage in, garbage out" has never been more relevant. Economic data is notoriously messy. Numbers get revised months after they’re released. The initial "flash" GDP report is often way off from the final tally.
If you feed unverified or shifting data into an automated rate-setting tool, you get erratic policy. Humans know to take the "preliminary" jobs report with a grain of salt. An algorithm takes it as gospel unless you program it otherwise. And even then, you’re just programming a human bias into the code. There is no such thing as an "objective" AI. It’s just a reflection of the data it was fed and the goals its creators set.
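A quick, made-up illustration: run a mechanical "hike or cut" rule against a jobs number that gets revised twice, and you get three different answers for the same month.

```python
# Invented numbers: a preliminary payrolls print, then two revisions.
# A mechanical rule that treats each release as final flips its stance
# every time the statisticians update the figure.

releases = {"flash estimate": 150_000, "first revision": 40_000, "final": 210_000}

def stance(jobs_added, threshold=100_000):
    """Caricature of an automated rule: hike if the labor market looks hot."""
    return "hike" if jobs_added > threshold else "cut"

for name, jobs in releases.items():
    print(f"{name:>15}: {jobs:>7,} jobs -> {stance(jobs)}")
# flash estimate -> hike, first revision -> cut, final -> hike.
# Three answers for one month. A human waits; the rule reacts to every print.
```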
What happens when the model breaks
In 1998, a hedge fund called Long-Term Capital Management (LTCM) nearly blew up the world economy. They had Nobel Prize winners and the most sophisticated mathematical models of their time. They thought they had "solved" the market. Then Russia defaulted on its debt—something their model said was basically impossible.
The model didn't know how to handle the "impossible."
Central banking is the ultimate "impossible" job. You’re trying to balance price stability with maximum employment while navigating wars, pandemics, and political shifts. AI can be a tool. It’s great for organizing the massive amounts of data that central banks collect. It can help identify sub-trends in regional manufacturing that a human might miss. But it shouldn't be the one clicking the "increase" button.
Practical steps for the future of policy
We don't have to banish technology from the halls of the Fed. That would be just as stupid as giving it full control. The smart move is to treat AI as a junior analyst, not the Chief Investment Officer.
- Use AI for "nowcasting." It’s great at looking at real-time credit card swipes and shipping data to tell us what’s happening right now, rather than waiting for government reports that are six weeks old (a rough sketch of the idea follows this list).
- Keep the "Human-in-the-Loop" (HITL) framework. No policy change should happen without a majority vote from people who have to live in the economy they're managing.
- Demand transparency in the models. If a central bank uses an algorithm to help inform a decision, that code should be open to public audit.
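To make the "junior analyst" role concrete, here’s a minimal sketch of what nowcasting can look like in spirit: blend a few high-frequency proxies into one rough real-time growth signal that a human committee then interprets. The series names and weights are invented for illustration, and real nowcasting models are far more sophisticated.

```python
# Invented series and weights: a crude nowcast that blends high-frequency
# proxies into a single real-time growth signal for humans to argue over.

signals = {                   # hypothetical year-over-year % changes
    "card_spending":   2.4,   # daily credit card swipe data
    "shipping_volume": -1.1,  # container and freight activity
    "job_postings":    0.8,   # online listings
}
weights = {"card_spending": 0.5, "shipping_volume": 0.3, "job_postings": 0.2}

nowcast = sum(weights[k] * signals[k] for k in signals)
print(f"Rough real-time growth signal: {nowcast:+.2f}%")
# The output is a talking point for the meeting, not a trigger for a rate move.
```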
Don't let the allure of "efficiency" blind you to the necessity of human empathy in economics. Money isn't just numbers on a screen. It’s the way we value each other’s time and effort. You can’t automate that without losing something fundamental.
If you’re watching the markets, pay less attention to what the latest "AI sentiment tracker" says and more to what the actual board members are saying in their speeches. They’re the ones with their necks on the line. That’s where the real signal is. Stop looking for a magic machine to fix the economy. It doesn't exist.