LLMs generate 'fluent nonsense' when reasoning outside their training zone venturebeat.com 7 points by cintusshied 14 hours ago
VivaTechnics 14 hours ago LLMs operate on numbers; they are trained on massive numerical vectors. Every request is therefore just a numerical transformation approximating learned patterns; without proper training, the output can be completely irrational.
I mean, define 'reasoning'.