Discussion about this post

Brian Villanueva

I've written a few basic AI systems, so I know a little about it.

The Python-shame-spiral is deeply disturbing on a lot of levels.

I think your hemispheric analogy (à la McGilchrist) is both accurate and novel. It's worth pursuing. Just for kicks, I asked ChatGPT to explain why its total lack of emotional context didn't render it psychopathic. The response did little to alleviate my concerns.

https://chatgpt.com/share/69836d98-fb04-8008-80a9-3317c2e1cb4e

A word about hallucinations (at least the standard "make stuff up" kind). In many ways they're caused by competing goals: 1) seek to help the user; 2) give accurate information. In AI construction, both goals must have reward systems. Go all in on 1 and you get a yes-man; go all in on 2 and you get an AI so afraid of being wrong that it won't talk. I suspect much of the LLM hallucination problem stems from balancing the conflict between these two goals.
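To make the tradeoff concrete, here is a toy sketch (not from the comment, all names and numbers invented) of how a single weight balancing a "helpfulness" reward against an "accuracy" reward can push a model toward either yes-man answering or excessive refusal:

```python
# Toy illustration: one weight w trades off helpfulness vs. accuracy rewards.
# Hypothetical reward values; real RLHF setups are far more involved.

def combined_reward(answered: bool, correct: bool, w: float) -> float:
    """w = weight on helpfulness; (1 - w) = weight on accuracy."""
    helpfulness = 1.0 if answered else 0.0  # rewarded merely for answering
    # under the accuracy term, abstaining is "safer" than answering wrongly
    accuracy = 1.0 if (answered and correct) else (0.5 if not answered else 0.0)
    return w * helpfulness + (1 - w) * accuracy

def best_policy(p_correct: float, w: float) -> str:
    """Given the model's chance of being right, is answering or abstaining worth more?"""
    answer_value = (p_correct * combined_reward(True, True, w)
                    + (1 - p_correct) * combined_reward(True, False, w))
    abstain_value = combined_reward(False, False, w)
    return "answer" if answer_value > abstain_value else "abstain"

# A model that is only 30% likely to be right on a hard question:
print(best_policy(0.3, w=0.9))  # "answer"  -> yes-man / hallucination regime
print(best_policy(0.3, w=0.1))  # "abstain" -> too afraid of being wrong to talk
```

With the helpfulness weight high, the expected reward for guessing beats the reward for staying silent even at low confidence; with it low, the model abstains almost everywhere. The balance point, not either extreme, is what a deployed system has to find.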

However, balancing competing goals is necessary to navigate reality. We humans do it constantly. Hallucinations in your research assistant are annoying; in your Waymo driver they're life-threatening. I increasingly wonder if Frank Herbert (Dune) might not have been prescient.

Randy M

That's an intriguing error. I suppose I should be absolutely certain LLMs have no 'real' feelings before finding it hilarious? I'll refrain for the sake of your pain, at least.

Perhaps the problem is something like running a training program designed for code writing on casual conversation and reams of AITA-style posts?

