@interfluidity I think the valid issue is “capability overhang”: LLMs were designed for translation, but they have proved capable across a surprising variety of tasks (programming, mathematical reasoning, etc.). Given “the bitter lesson” — that more data and more computation often trump the best-laid domain-specific strategies — the feeling is that we may shortly face superior competency across a huge surface area, and it’s reasonable to expect significant societal disruption.