this post was submitted on 07 Jul 2025
3 points (71.4% liked)
ObsidianMD
4659 readers
11 users here now
Unofficial Lemmy community for https://obsidian.md/
founded 2 years ago
@mitch @obsidianmd does that use the AI Providers extension? LocalGPT is an extension that does and doesn't have that "premium tier" thing going on.
do you configure the embeddings for local vectors and also local inference? can you compare it to like the RAG function in AnythingLLM or open-webui or something similar?
i use a variety of methods but haven't got a favorite yet.
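for anyone curious what "local vectors" retrieval means here, below is a toy sketch of the retrieval step in a RAG setup like the ones AnythingLLM or open-webui do. none of it is from any of those tools' actual code; the embed() function is a hypothetical stand-in (a bag-of-words counter) for a real local embedding model, which you'd swap in via whatever provider you configure.

```python
# Toy sketch of RAG retrieval over vault notes: embed the query and
# each note, rank notes by cosine similarity, return the top k.
import math
from collections import Counter

def embed(text):
    # Stand-in for a local embedding model: bag-of-words token counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, notes, k=2):
    # Rank notes by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

notes = [
    "antenna impedance matching notes",
    "dungeon map for the campaign",
    "fine-tune results on electronics questions",
]
print(retrieve("electronics antenna", notes, k=1))
```

a real local setup replaces embed() with calls to an embedding model served by something like Ollama and stores the vectors in a proper index instead of re-embedding every note per query.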
@emory @obsidianmd i will be honest, that is a question you might be more equipped to answer than i am, but here are the links if you wanna check it out. i see it has a folder named LLMProviders, but i am not sure if that is what you mean or not.
obsidian://show-plugin?id=copilot
https://github.com/logancyang/obsidian-copilot
https://github.com/logancyang/obsidian-copilot/tree/master/src/LLMProviders
@mitch @emory @obsidianmd Do you pay for it?
@gabek @mitch @obsidianmd some of the small models i like using with obsidian vaults locally are deepseek+llama distills, plus MoE models for every occasion: fiction and creative writing, classification, and vision. there's a few 8x merged models that are extremely fun for d&d.
i have a speech-operated adventure like #Zork that uses a 6x MoE that can be really surreal.
there's a phi2-ee model on hf that is small and fast at electrical eng work, i use that for a radio and electronics project vault!