this post was submitted on 16 Jun 2025
116 points (95.3% liked)

Selfhosted


I've tried coding with them, and every one I've tried fails at anything beyond the really, really basic small functions you write as a newbie, compared to, say, 4o mini, which can spit out more sensible stuff that actually works.

I've tried asking for explanations, and they just regurgitate sentences that are irrelevant or wrong, or they get stuck in a loop.

So, what can I actually use a small LLM for, and which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner; 8B runs at about 1 t/s!

top 50 comments
[–] [email protected] 41 points 2 weeks ago* (last edited 2 weeks ago)

Sorry, I'm just gonna dump some links from my bookmarks that were related and interesting to read, because I'm traveling and have to get up in a minute, but I've been interested in this topic for a while. All of the links discuss at least some use cases. For some reason Microsoft is really into tiny models and has made big breakthroughs there.

https://reddit.com/r/LocalLLaMA/comments/1cdrw7p/what_are_the_potential_uses_of_small_less_than_3b/

https://github.com/microsoft/BitNet

https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential/

https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090

[–] [email protected] 25 points 2 weeks ago (1 children)

Converting free text to standardized formats such as JSON.

[–] [email protected] 5 points 2 weeks ago (1 children)

Oh—do you happen to have any recommendations for that?

[–] [email protected] 16 points 2 weeks ago

DeepSeek-R1-Distill-Qwen-1.5B
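To illustrate the text-to-JSON idea, here's a minimal Python sketch (the prompt wording and field names are made up for illustration). The defensive parsing helper matters most, since small models often wrap their JSON in stray text:

```python
import json

def extraction_prompt(text, fields):
    # Small models need very explicit instructions to emit JSON only.
    return (
        f"Extract these fields as a JSON object: {', '.join(fields)}. "
        f"Reply with JSON only, no prose.\n\nText: {text}"
    )

def parse_reply(reply):
    # Keep only the first {...} span, in case the model adds chatter around it.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in reply")
    return json.loads(reply[start:end + 1])
```

If you run the model under Ollama, you can also pass `"format": "json"` to `/api/generate` to constrain the output to valid JSON; the defensive parse is still useful with other runners.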

[–] [email protected] 19 points 2 weeks ago

I installed Llama. I've not found any use for it. I mean, I've asked it for a recipe because recipe websites suck, but that's about it.

[–] [email protected] 16 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I've integrated mine into Home Assistant, which makes it easier to use voice commands.

I haven't done a ton with it yet besides set it up, though, since I'm still getting proxmox configured on my gaming rig.

[–] [email protected] 3 points 2 weeks ago (1 children)

What are you using for voice integration? I really don't want to buy and assemble their solution if I don't have to

[–] [email protected] 3 points 2 weeks ago (1 children)

I just use the companion app for now. But I am designing a HAL9000 system for my home.

[–] [email protected] 2 points 2 weeks ago (1 children)

[ A DIM SCREEN WITH ORANGE TEXT ]

Objective: optimize electrical bill during off hours.

... USER STATUS: UNCONSCIOUS 
... LIGHTING SYSTEM: DISABLED
... AUDIO/VISUAL SYSTEM: DISABLED 
... CLIMATE SYSTEM: ECO MODE ENABLED
... SURVEILLANCE SYSTEM: ENABLED 
... DOOR LOCKS: ENGAGED
... CELLULAR DATA: DISABLED
... WIRELESS ACCESS POINTS: DISABLED
... SMOKE ALARMS: DISABLED
... CO2 ALARMS: DISABLED
... FURNACE: SET TO DIAGNOSTIC MODE
... FURNACE_PILOT: DISABLED
... FURNACE_GAS: ENABLED

WARN: Furnace gas has been enabled without a Furnace pilot. Please consult the user manual to ensure proper installation procedure.

... FURNACE: POWERED OFF

Objective realized. Entering low power mode.

[ Cut to OP, motionless in bed ]

[–] [email protected] 3 points 2 weeks ago (1 children)

Luckily my entire neighborhood doesn't have gas and I have a heat pump.

But rest assured, I'm designing the system with 20% less mental illness

[–] [email protected] 3 points 2 weeks ago (1 children)

All systems need a little mental illness.

[–] [email protected] 3 points 2 weeks ago

It's what keeps things fun! I don't want a system that I don't have to troubleshoot every once in a while.

[–] [email protected] 13 points 2 weeks ago (1 children)

Have you tried RAG? I believe small models are actually pretty good at searching and compiling content when used with RAG.

So in theory you could have it connect to all of your local documents and use it for quick questions. Or maybe connect it to your Signal/WhatsApp/SMS chat history to ask questions about past conversations.

[–] [email protected] 5 points 2 weeks ago (1 children)

No, what is it? How do I try it?

[–] [email protected] 13 points 2 weeks ago (1 children)

RAG is basically like telling an LLM "look here for more info before you answer," so it can check your local documents to give an answer that is more relevant to you.

Just search "open web ui rag" and you'll find plenty of explanations and tutorials.
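As a toy sketch of the retrieval step: score documents against the question, then prepend the best match to the prompt. Scoring here is plain word overlap for illustration; real setups like Open WebUI use embedding models and vector search.

```python
def retrieve(question, docs):
    # Toy scoring: count shared lowercase words between question and document.
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question, docs):
    # Prepend the most relevant document so the LLM answers from local context.
    context = retrieve(question, docs)
    return f"Use this context to answer:\n{context}\n\nQuestion: {question}"
```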

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I think RAG will be surpassed by LLMs in a loop with tool calling (aka agents), with search being one of the tools.
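That loop can be sketched like this; the "model" is any callable that replies with JSON naming either a tool call or a final answer. The tool set and reply format here are made up for illustration, not any particular framework's API:

```python
import json

# Hypothetical tool registry; a real agent might expose web search, file read, etc.
TOOLS = {"search": lambda q: f"top result for {q!r}"}

def agent_loop(model, question, max_steps=3):
    # Feed tool results back into the transcript until the model answers.
    history = question
    for _ in range(max_steps):
        reply = json.loads(model(history))
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["arg"])
        history += f"\n[tool result] {result}"
    return None
```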

[–] [email protected] 5 points 2 weeks ago

LLMs that train LoRAs on the fly, then query themselves with the LoRA applied.

[–] [email protected] 9 points 2 weeks ago (2 children)

It'll work for quick bash scripts and one-off things like that. But there usually isn't enough context window unless you're using a 24 GB GPU or similar.

[–] [email protected] 4 points 2 weeks ago

Yeah shell scripts are one of those things that you never remember how to do something and have to always look it up!

[–] [email protected] 3 points 2 weeks ago (1 children)

Snippets are a great use.

I use StableCode on my phone as a programming tutor for learning Python. It is outstanding in both speed and accuracy for this task. I have it generate definitions, which I copy and paste into Anki, the flashcard app. Whenever I'm on a bus or airplane I just start studying. I wish it could also quiz me interactively.

[–] [email protected] 6 points 2 weeks ago (1 children)

Please be very careful. The Python code it spits out will most likely be outdated and not work as well as it should (the code isn't "thought out" the way a human's would be).

If you want to learn, dive in, set yourself tasks, get stuck, and f around.

[–] [email protected] 4 points 2 weeks ago

I know what you mean. All the code generated with AI was loaded with problems; specifically, it kept hard-coding my API keys instead of using environment variables. But for basic coding concepts it has so far been perfect. Even a 3B model seemingly generates great definitions.

[–] [email protected] 9 points 2 weeks ago (1 children)

I've used smollm2:135m for projects in DBeaver, building larger queries. The box it runs on has Intel HD 530 graphics and an old i5-6500T processor. It doesn't seem to really stress the CPU.

UPDATE: I apologize to the downvoter for not masochistically wanting to build a 1000 line bulk insert statement by hand.

[–] [email protected] 2 points 2 weeks ago (1 children)

How, exactly, do you have Intel HD graphics, found on Intel APUs, on a Ryzen AMD system?

[–] [email protected] 2 points 2 weeks ago

Sorry, I was trying to find parts for my daughter's machine while doing this (cheap Minecraft build). I corrected my comment.

[–] [email protected] 8 points 2 weeks ago

As cool and neato as I find AI to be, I haven't really found a good use case for it in the selfhosting/homelabbing arena. Most of my equipment is ancient and lacking the GPU necessary to drive that bus.

[–] [email protected] 5 points 2 weeks ago (2 children)

I have it roleplay scenarios with me and sometimes I verbally abuse it for fun.

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

Currently I've been using local AI (a couple of different kinds). The first takes the audio from a Twitch stream and converts it to text, so there's context about the conversation. A second AI, an LLM fed the first AI's transcript plus Twitch chat, then stores 'facts' about specific users so they can be referenced quickly by a streamer who has ADHD and wants to be more personable.

That way, the guy can ask User X how their mother's surgery went. Or he can remember that User K has a birthday coming up. Or remember that User G's son just got a PS5 for Christmas, and wants a specific game.

It allows him to be more personable because he has issues remembering details about his users. It's still kind of a big alpha test at the moment, because we don't know the best way to display the 'data', but it functions as an aid.
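Once the LLM has distilled the transcript into short facts, the per-user store doesn't need to be anything fancy; a keyed list with de-duplication covers it. Names and structure here are illustrative, not the actual project's code:

```python
import json
from collections import defaultdict

class FactStore:
    # Facts keyed by chat username, e.g. facts["UserX"] = ["mother had surgery"].
    def __init__(self):
        self.facts = defaultdict(list)

    def remember(self, user, fact):
        # Skip exact duplicates so repeated mentions don't pile up.
        if fact not in self.facts[user]:
            self.facts[user].append(fact)

    def recall(self, user):
        return self.facts[user]

    def save(self, path):
        # Persist across streams as plain JSON.
        with open(path, "w") as f:
            json.dump(self.facts, f)
```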

[–] [email protected] 8 points 2 weeks ago (23 children)

Hey, you're treating that data with the respect it demands, right? And you definitely collected consent from those chat participants before you Hoover'd up their [re-reads example] extremely Personal Identification Information AND Personal Health Information, right? Because if you didn't, you're in violation of a bunch of laws and the Twitch TOS.

[–] [email protected] 4 points 2 weeks ago (1 children)

For coding tasks you need web search and RAG. It's not the size of the model that matters, since even the largest models look up solutions online.

[–] [email protected] 2 points 2 weeks ago (3 children)

Any suggestions for solutions?

[–] [email protected] 4 points 2 weeks ago (2 children)

I've run a few models that I could on my GPU. I don't think the smaller models are really good enough. They can do stuff, sure, but to get anything out of it, I think you need the larger models.

They can be used for basic things, though. There are coder-specific models you can look at. DeepSeek and Qwen Coder are some popular ones.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

I think that's a size where it's a bit more than a good autocomplete. Could be part of a chain for retrieval augmented generation. Maybe some specific tasks. And there are small machine learning models that can do translation or sentiment analysis, though I don't think those are your regular LLM chatbots... And well, you can ask basic questions and write dialogue. Something like "What is an Alpaca?" will work. But they don't have much knowledge under 8B parameters and they regularly struggle to apply their knowledge to a given task at smaller sizes. At least that's my experience. They've become way better at smaller sizes during the last year or so. But they're very limited.

I'm not sure what you intend to do. If you have some specific thing you'd like an LLM to do, you need to pick the correct one. If you don't have any use-case... just run an arbitrary one and tinker around?
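On the sentiment-analysis point: the classic small-model tasks don't even need an LLM. A toy lexicon scorer shows the shape of the task (the word lists are obviously made up; a real setup would use a small distilled classifier):

```python
# Tiny illustrative lexicons; a real system would use a trained model instead.
POSITIVE = {"good", "great", "love", "works"}
NEGATIVE = {"bad", "broken", "hate", "fails"}

def sentiment(text):
    # Net count of positive vs. negative words decides the label.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```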
