There is a reason that #LLMs are called #LanguageModels and not #knowledge models, #reasoning models, or #intelligence models: they are trained, exclusively, to model linguistic form (which is neither knowledge about the world, reasoning, nor intelligence). Their ability to perform other tasks is purely epiphenomenal. The degree to which these epiphenomena are robust is still an open question.
Much of the debate about #LLMs, I argue, comes from a misunderstanding of what they are.
Booster: Our train will make airplanes obsolete.
Critic: Your train is pathetic—it can't even cross the Rhine.
@davidmortensen 100% with you on this one
More than this: these models are trained on "language represented as a raw sequence of characters".
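To make that concrete, here is a minimal sketch (not from this thread) of what "modeling linguistic form over raw character sequences" means mechanically: estimating p(next character | preceding characters) from text alone. The toy corpus and the function name `p_next` are my own illustrative choices; a toy bigram count table stands in for the transformer that a real LLM uses, but the objective is the same kind of thing.

```python
# Toy character-level language model: learn p(next char | previous char)
# from raw text by counting. Pure form -- no facts, no reasoning involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the dog sat on the log."  # hypothetical toy corpus

# Count how often each character follows each character.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev: str, nxt: str) -> float:
    """Estimated probability that `nxt` follows `prev` in this corpus."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(p_next("t", "h"))  # high, because "th" is a frequent pattern here
```

Swap the count table for a neural network and characters for subword tokens, and you are conceptually at an LLM: the training signal is still only "predict the next symbol," which is exactly why any knowledge or reasoning that emerges is epiphenomenal.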