I mainly use Llama-3-8B abliterated for everyday questions, and DeepSeek-Coder-V2-Lite for programming/Linux stuff.
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members. I.e., no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms from <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
Using DeepSeek-Coder-V2-Lite now; it's awesome!
I find that for the purposes of my projects (narrative building, tabletop RPG simulation), gemma3:14b (with low temperature) works perfectly for creating consistent psychological overviews.
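If anyone wants to reproduce the low-temperature setup, here's a minimal sketch against the Ollama REST API (I'm assuming Ollama from the gemma3:14b tag; the prompt and the 0.3 value are purely illustrative, tune to taste):

```python
import json
import urllib.request

# Minimal sketch: query a local Ollama server with a low temperature so the
# model's character write-ups stay consistent between runs.
# Assumptions: default Ollama endpoint; the prompt and 0.3 are illustrative.
payload = {
    "model": "gemma3:14b",
    "prompt": "Write a short psychological overview of a cautious dwarven merchant.",
    "stream": False,
    "options": {"temperature": 0.3},  # lower = less random sampling, more repeatable
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```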
I have been using DeepHermes daily. I think CoT reasoning is so awesome and such a game changer! It really helps the model give better answers, especially for hard logical problems. But I don't want it all the time, especially on an already slow model. Being able to turn it on and off without switching models is awesome. Mistral 24B DeepHermes is relatively uncensored, powerful, and not painfully slow on my hardware. A high quant of Llama 3.1 8B DeepHermes is able to fit entirely in my 8GB of VRAM.
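For anyone curious how the toggle works: DeepHermes switches into long-CoT mode when you send it the dedicated deep-thinking system prompt, and answers normally without it. Here's a rough sketch against an OpenAI-compatible server (llama.cpp server, LM Studio, etc.); the system prompt below is paraphrased from memory, so grab the exact wording from the model card:

```python
import json
import urllib.request

# Sketch: toggle DeepHermes CoT reasoning per-request via the system prompt.
# Works against any OpenAI-compatible endpoint (llama.cpp server, LM Studio...).
# The prompt text below is a paraphrase -- use the exact one from the model card.
THINKING_PROMPT = (
    "You are a deep thinking AI. You may use long chains of thought to "
    "consider the problem, enclosing your internal monologue in "
    "<think></think> tags before giving your final answer."
)

def ask(question: str, reasoning: bool) -> str:
    messages = [{"role": "system", "content": THINKING_PROMPT}] if reasoning else []
    messages.append({"role": "user", "content": question})
    body = json.dumps({"model": "deephermes", "messages": messages}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # llama.cpp server default port
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Reasoning on for the hard stuff, off when I just want a quick answer.
print(ask("If all Bloops are Razzies and some Razzies are Lazzies, "
          "must some Bloops be Lazzies?", reasoning=True))
print(ask("What year did Apollo 11 land on the moon?", reasoning=False))
```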
QwQ-32B for most questions, Llama-3.1-8B for agents. I'm looking for new models to replace them though, especially the agent one.
Want to test the new GLM models, but I'd rather wait for llama.cpp to properly fix the bugs with them first.
GLM? I feel like every other day there is a new abbreviation :(
> Want to test the new GLM models
Which models are you referring to? These: https://github.com/THUDM/GLM-4 ?
Those are the ones, the 0414 release.
Fallen Gemma. The writing style is really good and it can keep relatively persistent personalities. On the other hand, it's stupid af compared to other recent models and even vanilla Gemma 3.