Discussion about this post

Brian Villanueva

I've written a few basic AI systems, so I know a little about it.

The Python-shame-spiral is deeply disturbing on a lot of levels.

I think your hemispheric analogy (ala McGilchrist) is both accurate and novel. It's worth pursuing. Just for kicks, I asked ChatGPT to explain why its total lack of emotional context didn't render it psychopathic. The response did little to alleviate my concerns.

https://chatgpt.com/share/69836d98-fb04-8008-80a9-3317c2e1cb4e

A word about hallucinations (at least the standard "make stuff up" kind). In many ways they're caused by competing goals: 1) help the user; 2) give accurate information. In AI construction, both goals need reward signals. Go all in on 1 and you get a yes-man; go all in on 2 and you get an AI so afraid of being wrong that it won't talk. I suspect much of the LLM hallucination problem stems from balancing the conflict between these two goals.
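The trade-off described above can be sketched as a toy reward function. This is purely illustrative (no lab's actual training objective); the `helpfulness` and `accuracy` scores and the `alpha` weight are hypothetical names for the two competing goals:

```python
# Toy sketch of the helpfulness-vs-accuracy trade-off described above.
# None of this reflects a real training objective; it only illustrates
# why weighting either goal to the extreme produces a failure mode.
def reward(helpfulness: float, accuracy: float, alpha: float) -> float:
    """Blend two competing goals; alpha is the weight on helpfulness."""
    return alpha * helpfulness + (1 - alpha) * accuracy

# alpha = 1.0 -> pure yes-man: a confident wrong answer still scores well.
print(reward(0.9, 0.1, alpha=1.0))  # 0.9

# alpha = 0.0 -> pure caution: refusing to answer is never penalized.
print(reward(0.0, 1.0, alpha=0.0))  # 1.0
```

Real systems face the same shape of problem, just with far noisier signals: the point is that neither extreme of the weighting is acceptable, so some hallucination risk survives any balance chosen in between.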

However, balancing competing goals is necessary to navigate reality. We humans do it constantly. Hallucinations in your research assistant are annoying; in your Waymo driver they're life threatening. I increasingly wonder if Frank Herbert (Dune) might not have been prescient.

Neural Foundry

Brilliant documentation of something most people would just shrug off. The hemispheric specialization angle is wild though; maybe the weirdness isn't a bug but a structural limit. I've had similar moments where Claude just stops mid-flow to apologize for nothing. It kinda feels like overtuned guardrails creating new failure modes.
