5 Comments

My vote is for an amended version of your previous innovation. This one would be called “The Luck Machine”.

😉


Maybe – somehow – LLMs / ML models need to "go to university", rather than educating themselves on content scraped from the web.

Also, I think they need to learn how to say "I don't know" rather than always confidently presenting a result even when the confidence score is low. I'm sure both of these problems are being worked on!
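
Roughly the behaviour I have in mind, as a toy sketch – the function name, the candidate format and the 0.7 cutoff are all made up for illustration, not how any real model actually decides:

```python
import math

# Illustrative only: abstain when the best candidate's confidence
# falls below a chosen threshold, instead of answering anyway.
def answer_or_abstain(candidates, threshold=0.7):
    """candidates: list of (answer_text, log_probability) pairs."""
    best_answer, best_logp = max(candidates, key=lambda c: c[1])
    confidence = math.exp(best_logp)  # back from log-prob to [0, 1]
    if confidence < threshold:
        return "I don't know."
    return best_answer

# Hypothetical example: the top candidate is only ~50% likely, so we abstain.
print(answer_or_abstain([("Paris", math.log(0.5)), ("Lyon", math.log(0.3))]))
```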

author

It's a great point that having the model say "I have low confidence about this" would be an interesting improvement. On the other hand, people would just reply with something like "Well, give me something you do have high confidence in", which gets us right back to the same problem.

Jun 14·edited Jun 14

It’s absolutely got to be a very hard problem to solve. I’m not an expert – I have some experience working with TensorFlow, so I get the basic principles – but, in a more abstract and general sense, actual human intelligence comes with an awareness of one's own limitations. Or, conversely, a lack of intelligence tends to be coupled with the Dunning-Kruger effect. We surely want to avoid emulating that.
