Game over, I won. GPT is hitting a period of diminishing returns, just like I said it would.
Analysis and breaking news here: https://open.substack.com/pub/garymarcus/p/confirmed-llms-have-indeed-reached
@garymarcus ha I thought it said Onion
@albnelson @garymarcus it would make sense to call it Onion, either because they'll layer it with other strategies to try to hide its deficiencies, or more literally as an ingredient in a technology 'stone soup' (meaning roughly the same thing).
@garymarcus Saw it coming a mile off. Even armed with just some basic statistical mechanics, it seemed obvious GPT-4 was nearing the top of an S-curve.
Even more obvious with image generators like Midjourney, which peaked several versions ago. You can *see* it.
OpenAI, Midjourney etc. have pivoted to installing proverbial reclining seats and sunroofs because the car's performance is as good as it's gonna get
ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting! | Scientific American – https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/
@garymarcus To be fair, OpenAI themselves said as much in the 2020 neural scaling-laws work: power-law scaling means a fractional improvement in cross-entropy loss requires an order-of-magnitude increase in data and compute
@garymarcus it would be great if there were a link to the podcast. I’ve been avoiding that madness as much as possible, so even if I agree with the premise that LLMs are bound by math, I hate posts without sources