I'm a little late to the party, but have recently been deep-diving into running LLMs locally via #langchain and experimenting with semantic embeddings and retrieval-augmented generation (RAG).
It's really fun to "ask" your personal and business document cache questions and get reasonable answers out. Endless applications.
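The core loop behind that "ask your documents" trick is simple: embed the documents, embed the question, retrieve the closest match, and stuff it into the prompt. Here's a framework-free toy sketch of that idea — the `embed` function is just a bag-of-words stand-in for a real embedding model (which LangChain would normally provide), and the document cache and question are made-up examples:

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real pipeline would call an embedding model here instead."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = sqrt(sum(c * c for c in a.values()))
    norm_b = sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query; return the top k."""
    qv = embed(query)
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

# Hypothetical mini "document cache" for illustration.
docs = [
    "Invoices are due within 30 days of receipt.",
    "The office is closed on public holidays.",
    "Expense reports must be filed by the 5th of each month.",
]

question = "When are invoices due?"
context = retrieve(question, docs, k=1)[0]
# The retrieved snippet gets stuffed into the prompt sent to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Swap the toy `embed` for a real embedding model and the list for a vector store, and you have the skeleton of what LangChain wires up under the hood.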