Today Nomic released version 3.0 of GPT4All, the easiest and best way to run LLMs on your own computer without the cloud. Fully FOSS. If you've never run a private local LLM -- or even if it's been six months -- try some of the Llama 3 or Mistral derivatives and chat locally with your own documents. It's remarkable how good open-source local quantized models have gotten, even as commercial models have been stuck at GPT-4 level for a year.
@benmschmidt Do you prefer it to Ollama?
@fotis_jannidis I mean, my company makes it, so yes… But I'd also say the differences are:
1. It's better especially for non-technical users -- we have a really good GUI, so it's actually reasonable for non-programmers
2. We've got some neat local indexing stuff built in for RAG on private documents
3. We support more models (all the community finetunes on Hugging Face that run on llama.cpp)
OTOH, Ollama tends to have the latest and greatest models faster than we do.
@benmschmidt Thanks for the explanation. I've been using Ollama in a seminar, and some not-so-tech-savvy participants found the setup -- especially combined with that of the webui -- a bit challenging.