Narrow (domain-specific) #AI is powerful, broadly applicable, and potentially socially beneficial.
Broad (domain-agnostic) AI, like most generative AI, has in contrast very few actual use cases (i.e. applications sustainable without VC subsidies) and is by and large socially harmful.
@festal
I disagree: Generative models are great enablers.
I use LLMs to summarize and extract information. To quickly explore new topics. To generate boilerplate code. To draft boilerplate emails. To improve wording and spelling. Translation seems largely solved by now.
I have also used text to image models to generate personalized greeting cards and fillers for presentations.
Just don't expect the output to be perfect. Validate and use it as a starting point for your own work.
@vrld The applications you mention are like using an SUV to drive to the grocery store one block down the road. Yes, you can do it, but it's a hugely inefficient way of doing things. And it seems convenient to you only because someone else (VC investors, right now) is subsidizing them. Remember when Uber was cheap?
Translation is, in my view, a good example of narrow AI. It's great and useful. Yes, you can do it with ChatGPT, but why accept that overhead?
If you could get 1000 Google searches for the same price as one ChatGPT prompt (which seems to be roughly the cost ratio for the companies), which would you take?
@festal
You don't need GPT-4 for any of those. There are plenty of good open-access / open-source models that you can run on your gaming rig (or an M2 Mac), like LLaMA, Mistral, WizardCoder, etc.
With LCMs (latent consistency models) you can even generate images on your CPU. I expect we'll see a lot more small, cheap expert models in the future.
@mrshll Interesting. Could you share the source of that information? Thank you.
@mrshll Fair point, but also moving the goalposts. The original argument was against driving SUVs, and I pointed out that bicycles exist.
I'd also argue that, unlike with certain distributed write-only databases, the energy spent on training DOES provide downstream value. My (actual) bicycle also required energy to build and maintain, but it provides me with value.
I don't expect to change your mind about this issue, but I think that there is a middle ground.
@vrld I'm very much a middle-grounder (I run a company that does the original post's "narrow AI" to forecast hydrology, and I find that framing great).
I agree with the OP's contrast and have struggled to find applications that don't feel gimmicky and that justify their resource inputs.
@mrshll Then I've misread your intention, sorry. I think the main issue is that "broad AI" and "narrow AI" are ill-defined, hyped-up marketing terms. Are translation, multimodal search using embeddings, speech-to-text, and services like DeepL Write narrow AI, even though the underlying model is a transformer? Where do we draw the line?