@gabek @mitch @obsidianmd i don't either; i have other ways of doing what the paid version supports. i use both cloud foundation models and local ones; my embeddings backends are always ollama, LM Studio, and/or AnythingLLM.
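those backends all expose simple local HTTP APIs, so swapping them is easy. a minimal sketch of calling ollama's embeddings endpoint (assumes a local ollama instance on its default port 11434; the model name here is just a placeholder, use whatever embedding model you've pulled):

```python
import json
import urllib.request

# ollama's default local embeddings endpoint
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_embedding_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for ollama's /api/embeddings endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def embed(model: str, prompt: str) -> list[float]:
    """Send the request and return the embedding vector from the response."""
    req = build_embedding_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

the same shape works against LM Studio's OpenAI-compatible server with a different URL and payload, which is part of why these backends are interchangeable for vault indexing.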
#anythingLLM has an easily deployed docker release and a desktop application. it's not as capable at managing and cross-threading conversations as LM Studio (Msty really does that best), but #aLLM has a nice setup for agents and RAG.
@gabek @mitch @obsidianmd some of the small models i like using with obsidian vaults locally are deepseek+llama distills, plus MoE models for every occasion: fiction and creative writing, classification, and vision. there are a few 8x merged models that are extremely fun for d&d.
i have a speech-operated adventure, like #Zork, that uses a 6x MoE and can be really surreal.
there's a phi2-ee model on hf that's small and fast at electrical engineering work; i use it for a radio and electronics project vault!