The Unaccountability Machine - Once Again It’s Somehow All Milton Friedman’s Fault
Maybe the answer really is to be found in early-1970s Chilean socialism…
By: Dan Davies
Published: 2024
304 Pages
Briefly, what is this book about?
The development of accountability sinks: a construct used by governments, corporations, and really any large-scale organization to deflect responsibility (and potential punishment) away from individuals and into processes. As part of his critique, and his hoped-for solution, Davies leans heavily into management cybernetics and Stafford Beer. If neither of those rings a bell, perhaps you’ve encountered Beer’s most famous saying: “The purpose of a system is what it does.”
What’s the author’s angle?
Davies sits in a weird place ideologically. He’s a huge fan of Beer and spends lots of time talking about Beer’s partnership with Salvador Allende, the president of Chile in the early ’70s. They partnered to create Cybersyn, a cybernetic management system for the whole economy. Davies admits it wouldn’t have worked at the time, but seems to think that maybe, with AI, something like it might work now. On the other hand, in many places he seems to be channeling Taleb, and while I can’t find anything by Taleb directly commenting on Cybersyn, I’m confident he would not be a fan. Davies also levels significant criticism at Milton Friedman, which makes sense in the Chilean context, but feels out of character for a soberly written business book.
Who should read this book?
I read it as part of a Slate Star Codex/Astral Codex Ten book club. If that means anything to you, you’ll probably find the book interesting. Additionally, anyone looking for another way to describe the hidden brokenness of the world will probably enjoy the book.
What does the book have to say about the future?
Davies sees AI as something that could either exacerbate the problems of accountability sinks or solve them. AI might end up being the ultimate accountability sink, or it could be precisely the thing that notifies you when there’s an edge case that really does need human oversight.
Specific thoughts: The problem of focus and consequences
The book starts out by describing the concept of an accountability sink: a rule, a process, a metric, or even a diffusion of responsibility that results in no one being accountable for a mistake. The chief example of an accountability sink in action is the 2007–2008 financial crisis, when the world’s financial system blew up and no one went to jail.1 No one was charged with a crime. Most of the post-mortem discussion focused on failed accountability sinks—systems, policies, and incentives—rather than individual accountability. But perhaps if there had been more individual accountability, it wouldn’t have happened. Sure, there was political finger-pointing, but no one person at any financial institution said, “Yeah, I screwed up. I’m accountable.” There were a few mea culpas, but all of them were hedged with things like “No one could have seen this coming!”, “It was a systemic failure,” or “There was nothing we could have done.” And in fact, many of the people who seemed to be culpable, and who might have been expected to apologize, instead received very large financial rewards while all of it was going on.
So accountability sinks are bad and we should just get rid of them, right? That was Davies’ initial instinct, but then he realized that you can’t have unlimited accountability; it would crash the entire system. (Note that modern capitalism didn’t really get going until the invention of limited liability.) Large corporations engage in thousands of transactions, and the only way they can maintain that scale is through significant standardization. Edge cases, even significant ones, can’t end up requiring the input of the CEO.
When used properly, Davies calls this an accountability shield. But drawing the line between edge cases that don’t require intervention and those that do is extraordinarily difficult, and may only be obvious in hindsight. On top of the difficulty of doing it right, there are significant incentives to err on the side of tossing as many edge cases as possible behind your accountability shield. At which point it turns into a sink.
Davies alleges that this is particularly true when you’ve got a single overriding metric, one that sees accountability, at best, as orthogonal and, at worst, as something that undermines the metric. This is where he brings in his criticism of Friedman and his doctrine of maximizing shareholder value. This maximization project minimized all manner of other concerns. But the big thing it did, at least according to Davies, was cut off all forms of feedback that didn’t relate to the metric of shareholder value. So even if feedback did make it out of the accountability sink/shield, if it didn’t relate to shareholder value, it was ignored.
This takes us to the crux of Davies’ project. In order to keep accountability shields from turning into accountability sinks, feedback above a certain “intensity” needs to “make it out”. Davies asserts that old school cybernetics is designed to accomplish exactly this task through its prioritization of feedback loops and adaptive management.
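As a toy illustration (my own sketch, not anything from the book), you can think of an accountability shield as a filter that absorbs routine feedback into standard processes but escalates anything above an intensity threshold. The threshold value and the feedback items here are entirely made up; the point is that the whole design question lives in where you set that cutoff:

```python
# Toy model (my construction, not Davies'): an accountability shield
# absorbs routine feedback but escalates high-intensity signals to humans.
# Set the threshold too high and the shield quietly becomes a sink.

ESCALATION_THRESHOLD = 0.8  # arbitrary "intensity" cutoff

def route_feedback(items, threshold=ESCALATION_THRESHOLD):
    """Split feedback into (handled_by_process, escalated_to_humans)."""
    handled = [f for f in items if f["intensity"] < threshold]
    escalated = [f for f in items if f["intensity"] >= threshold]
    return handled, escalated

feedback = [
    {"msg": "late delivery", "intensity": 0.2},
    {"msg": "billing error", "intensity": 0.5},
    {"msg": "systemic mispricing of risk", "intensity": 0.95},
]

handled, escalated = route_feedback(feedback)
print(len(handled), "absorbed;", [f["msg"] for f in escalated], "escalated")
```

Note that nothing in the mechanism itself tells you whether 0.8 is the right cutoff; that judgment, as Davies points out, may only be obvious in hindsight.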
I’m a big fan of Taleb (the author, not the Twitter tyrant), and there is a lot of overlap between what Davies says about the lack of accountability and what Taleb refers to as “skin in the game”. But the latter strikes me as a better philosophy than cybernetics. (There’s also a lot of “skin in the game” inherent in Friedman’s recommendations.) Cybernetics appears to add additional complexity and more systematization, without necessarily adding any accountability. Skin in the game pulls things together much more tightly; accountability is automatically enforced just by the nature of where the skin is. Another issue I have with cybernetics is that Davies doesn’t offer any examples of a business adopting it and finding success thereby. On the other hand, it’s easy to find examples where “skin in the game” has worked: look at any successful small business. But where are the examples of cybernetics acting in a similar fashion? Davies does call cybernetics the road not taken, but why was it not taken?
Which brings us to AI. Do we now finally have a tool that can reliably extract the feedback signal from customer noise? In the future, will bank CEOs be unable to dodge accountability because there will be logs showing the AI told them about the dangerous precarity of the subprime market? Perhaps, but wouldn’t all of the AIs be giving everyone the same information? Is AI just going to stop us from making mistakes in general? Maybe? Perhaps AI will be more useful in cases where the problem is more localized? But what if the opposite happens? What if the AI tells you everything is fine, and things still collapse? Will AI have produced a system with even less accountability? It will be interesting to see how things play out, but I predict that AI will change a lot of things before it changes cybernetic management from a fringe discipline into the thing every business does.
I assume that in addition to the problem of too much feedback (separating the wheat from the chaff, as it were), there’s also the problem of too little feedback. How do AI and cybernetics solve that? No, really, I’m asking. I’d love to get more feedback. Wait… I think I’ve got it: I’ll use AI to create fake sycophantic feedback! Brilliant!
Wait, have I just reinvented heavenbanning and decided to inflict it upon myself? Well, surely I can’t be the only one.
For more bad ideas consider subscribing. I have a lot of them.
I understand it’s not literally no one. I’m aware of Kareem Serageldin and some other minor crisis-adjacent prosecutions. But to a first, or even second approximation, it’s no one.



I'm rather curious whether anyone has compared what "Cybersyn" would have done to the Fed's FRED system (https://fred.stlouisfed.org/), which lets you quickly graph just about any metric you want, from money supply to the price of a dozen eggs, over the last few decades.
A problem with a metric like "increase shareholder value" is that:
1. It's an effect highly removed from its causes.
2. It's driven by a lot of speculation. The present value of a company will change a lot if you just tweak future earnings up a bit or the time value of money (discount rate) down. This means it's not as simple as "make a healthy profit."
The resolution will end up being smaller metrics, selected and weighted to try to line up with the overall meta-metric. This leads to frustrating situations like an individual manager whose store hits the sales metrics corporate imposes on him, but who is then notified that the store is being shut down and he is being laid off, because some other metric says to eliminate stores with margins below a certain level.
Capitalism's success has never been due to efficiency. It's extremely inefficient: 80% of all new businesses fail within 5 years. Capitalism won because it was the least inefficient means of solving the limited-information problem of any economy. (Kind of like Churchill's quote about democracy.)
Noah Smith (here on Substack) has speculated that AI-monitored surveillance and central planning may alter this equation. Stalin's five-year plans were always doomed to fail, since gathering and analyzing the data was too labor-intensive. However, instant communication enables instant feedback (faster than a market), and machine learning provides the labor to analyze that feedback in real time. I wonder if Noah has read Davies' book?