LLMs show a "highly unreliable" capacity to describe their own internal processes arstechnica.com 3 points by pseudolus 11 hours ago
robocat 7 hours ago
Perhaps it's the training material! People confabulate their reasoning all the time (rationalisation, etc.).
I regularly laugh at the double standards we want to apply to AI. So often we seem to demand that AI beat a standard most people don't meet.