4 Comments
The Sentient Dog Group

I'm rather curious whether anyone has compared what "Cybersyn" would have done against the Fed's FRED system (https://fred.stlouisfed.org/), which lets you quickly graph just about any metric you want, from the money supply to the price of a dozen eggs, over the last few decades.

A problem with a metric like "increase shareholder value" is that:

1. It's an effect highly removed from its causes.

2. It's driven by a lot of speculation. The present value of a company changes a lot if you just tweak projected future earnings up a bit or the time value of money (the discount rate) down, so it's not as simple as "make a healthy profit." (See the sketch below.)
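To make point 2 concrete, here's a back-of-the-envelope Python sketch (my own illustration with made-up numbers, not anything from the post): nudge the growth assumption up two points and the discount rate down two points, and a simple ten-year discounted-cash-flow valuation jumps by roughly 20%.

```python
# Hypothetical sketch: how sensitive a discounted-cash-flow valuation
# is to small tweaks in the growth and discount-rate assumptions.

def present_value(cash_flow, growth, discount_rate, years=10):
    """Sum of projected cash flows, discounted back to today."""
    return sum(
        cash_flow * (1 + growth) ** t / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

base = present_value(100, growth=0.03, discount_rate=0.10)
tweaked = present_value(100, growth=0.05, discount_rate=0.08)
print(f"base:    {base:,.0f}")    # ~709
print(f"tweaked: {tweaked:,.0f}")  # ~859, about 20% higher
```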

The resolution ends up being smaller metrics, selected and weighted to line up with the overall meta-metric. That leads to frustrating outcomes, like an individual manager whose store hits the sales targets corporate imposes on him, but who then gets notified that the store is shutting down and he's being laid off, because some other metric says to eliminate stores with margins below a certain level.

Brian Villanueva

Capitalism's success has never been due to efficiency. It's extremely inefficient: 80% of all new businesses fail within 5 years. Capitalism won because, of any economic system, it's the least inefficient at solving the limited-information problem. (Kind of like Churchill's quote about democracy.)

Noah Smith (here on Substack) has speculated that AI-powered surveillance and central planning may alter this equation. Stalin's five-year plans were always doomed to fail, since gathering and analyzing the data was too labor-intensive. Now, though, instant communication enables instant feedback (faster than a market), and machine learning provides the labor to analyze that feedback in real time. I wonder if Noah has read Davies' book?

R.W. Richey

More accurate information is good, but it doesn't necessarily solve the problem of incentives. The Enron executives knew exactly what was going on, yet that didn't stop them from continuing the charade. Perhaps AI would have allowed investors and regulators to know what was going on? Even so, I think "skin in the game" is the better model, because it solves the incentive problem. (And I understand there's the possibility of hybridizing the two approaches.)

Brian Villanueva

As you say in this post, our obsession with limiting liability started this. Given the past track record, the incentive would likely become "follow whatever the AI says to do." And the lawyers will make sure of that.
