@tess
I definitely don’t trust anything run by OpenAI, but the one OpenAI part appears to be perfunctory and off to the side. The meat of this is all in-house, and much of it on-device (and increasingly so over time). I’m not quite sure how the profit structure works either, but my default assumption is that it’s a ploy to sell more of their high-margin hardware.
@inthehands @tess Yeah, if the models were local then it might help fuel the upgrade circus. But they aren't. It's all just cloud compute.
@drgroftehauge @tess
I don’t think that’s correct. They’re advertising local compute as a headline feature. It’s on-device for many/most tasks on newer devices with fancy chips, with cloud as a fallback. •If• they maintain that, then the percentage of new-enough devices (and thus the share of work happening locally) will increase over time, and cloud demand shrinks. Who knows, future etc etc, but I can imagine how this does make profit sense for them.
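Roughly the shape of it, as a hypothetical sketch (every name here is made up, not Apple’s actual API): the device runs the task locally when it can, and only falls back to the cloud when it can’t.

```swift
import Foundation

// Hypothetical sketch of on-device-first routing. None of these types or
// checks are Apple's real API; they only illustrate the fallback logic.
enum InferenceTarget {
    case onDevice   // handled locally, no data leaves the device
    case cloud      // fallback for hardware that can't run the model
}

struct DeviceCapabilities {
    let hasCapableNPU: Bool      // assumption: the "fancy chip" check
    let modelFitsInMemory: Bool  // assumption: a per-task resource check
}

// Prefer local compute; use the cloud only when the device can't cope.
func routeInference(_ caps: DeviceCapabilities) -> InferenceTarget {
    (caps.hasCapableNPU && caps.modelFitsInMemory) ? .onDevice : .cloud
}
```

As more of the installed base passes that first check, the cloud branch gets hit less and less, which is the profit logic I mean.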
@inthehands @tess Okay, I just saw it was OpenAI. Then it makes sense for them, even if it doesn't for the consumer. I agree with you then.
@drgroftehauge @tess
More here, upthread, and in linked posts: https://hachyderm.io/@inthehands/112599704497874927
@inthehands @tess Okay, that sounds like they are embracing federated learning instead of differential privacy. Good.
And they are adding a chatbot so they have AI that people can recognise as AI. And since they pretty much own the enterprise smartphone market, they can give IT managers a central switch to turn off the chatbot.