this post was submitted on 09 Jun 2025
6 points (75.0% liked)

Selfhosted


Hello,

I have a little homelab with a 3-node k3s cluster that I'm pretty happy with, but I have some questions about ingress.

Right now I use nginx as the ingress controller, and I have the IP of one of the nodes defined under externalIPs. All the nodes sit behind the router my ISP gave me, so nothing special: in that router I forward port 443 to port 443 of that IP. This all works as expected, and I'm able to access the ingress resources I want.

But I want to make some improvements to this setup, and I'm honestly not sure how to implement them.

  1. Highly available ingress. When the node that holds the ingress controller's IP goes down, I can't reach my cluster's ingress, since my router can't forward the traffic anywhere else. What's the best way to configure all 3 nodes so they can receive ingress traffic? (If needed, I can put the cluster behind something like OpenWrt or OPNsense, but I'm not sure that's necessary.)
  2. Some ingress resources I only want to expose on my local network. I read online that I can use nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24, but this doesn't work, I think because the ingress doesn't receive the client's actual IP; it sees an internal k3s IP instead. Or is there another way to only allow certain IPs to access an ingress resource?
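
For reference, here's roughly the kind of manifest I mean for point 2; the names, host, and backend are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app               # placeholder name
  annotations:
    # only allow clients from my LAN
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
spec:
  ingressClassName: nginx
  rules:
    - host: app.home.example       # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80
```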

Could someone point me in the right direction for these improvements? If you need more information, just ask!

Thanks for your time and have a great day!

[–] [email protected] 2 points 1 week ago (4 children)

You’ll want to look into “keepalived” to set up a shared IP across all worker nodes in the cluster, and then either forward directly, or set up haproxy on each node to forward traffic from that keepalived IP to the ingresses.
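
A minimal sketch of the keepalived side; the interface name, VIP, priority, and password are examples you'd adjust for your network:

```conf
# /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other nodes
    interface eth0            # your LAN interface
    virtual_router_id 51      # must match on all nodes
    priority 150              # lower value on the backup nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme    # placeholder
    }
    virtual_ipaddress {
        192.168.0.100/24      # the shared VIP your router forwards to
    }
}
```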

I’m running 6 kube nodes (running Talos) in a 3-node Proxmox cluster. Both haproxy and keepalived run on the 3 Proxmox nodes to manage the IP and route traffic to the appropriate backend. haproxy just lets me migrate nodes and still have traffic hit an ingress kube node.
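
The haproxy part looks roughly like this (IPs and names are examples): it listens for HTTPS and TCP-forwards to whichever kube nodes are healthy, so traffic still lands on an ingress node after a migration:

```conf
# /etc/haproxy/haproxy.cfg (fragment)
frontend https_in
    bind *:443                  # reachable via the keepalived VIP
    mode tcp
    default_backend ingress_nodes

backend ingress_nodes
    mode tcp
    balance roundrobin
    option tcp-check            # drop nodes that stop answering
    server node1 192.168.0.11:443 check
    server node2 192.168.0.12:443 check
    server node3 192.168.0.13:443 check
```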

keepalived decides which node is the active one (and therefore listens on the IP) based on backend communication and a simple local script that catches when a node can’t serve traffic.
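
In keepalived terms, that “simple local script” is a vrrp_script; the script path and check command here are illustrative:

```conf
# /etc/keepalived/keepalived.conf (fragment)
vrrp_script chk_ingress {
    script "/usr/local/bin/check-ingress.sh"  # e.g. curl -fsk https://localhost/healthz
    interval 2     # run every 2 seconds
    fall 3         # 3 failures -> node considered down
    rise 2         # 2 successes -> node considered up again
}

vrrp_instance VI_1 {
    # ... existing instance config ...
    track_script {
        chk_ingress    # failover the VIP when the check fails
    }
}
```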

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (3 children)

Thanks for your response!

I haven't used keepalived or haproxy before, but I took a quick look at them. Do you mean I should set up 2 new VMs that run keepalived and haproxy?

While looking at keepalived I remembered reading about kube-vip (https://kube-vip.io/). Couldn't this also solve my issue? It also uses a VIP: one node gets elected, and it's able to inform the network which node that is.
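
From my quick look at the docs, kube-vip can generate its own DaemonSet manifest for ARP mode; something like the following, though the flags and IP are from a skim, so treat it as a sketch:

```shell
# Generate a kube-vip DaemonSet manifest in ARP mode (VIP is an example)
kube-vip manifest daemonset \
  --interface eth0 \
  --address 192.168.0.100 \
  --services \
  --arp \
  --leaderElection > kube-vip.yaml

kubectl apply -f kube-vip.yaml
```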

[–] [email protected] 2 points 1 week ago (1 children)

Honestly, that sounds like a keepalived replacement or equivalent. I went with keepalived because I’m also using the IP for the proxmox cluster itself so it had to be outside kube, but the idea is the same. If all you’re using the IP for is kube, go with kube-vip! But let us know how it works!

[–] [email protected] 2 points 1 week ago

Currently I only need it for k8s, so kube-vip will do the job for now.
