dont

joined 1 year ago
[–] [email protected] 5 points 1 month ago

... have a look at all those happy little tickets ...

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (1 children)

I love the simplicity of this, I really do, but I don't consider this SSO. It might be if you're a single user, but even then, many of the things I'm hosting have their own authentication layer and only allow offloading it to some OIDC/OAuth or LDAP provider.

[–] [email protected] 3 points 2 months ago

Deployment of NC on Kubernetes/Docker (and maintenance thereof) is super scary. They copy config files around in the Dockerfile, for example; it's a hell of a mess. (And not just Docker: I have one instance running on old-fashioned webhosting with only FTP access, and I have to manually edit the .ini and Apache config after each update since they get overwritten.) As the documentation of OCIS grows and it gains more features, I might actually migrate even the larger instances, but for now I have to consider it not feature complete (since people have expectations from Nextcloud that aren't met by OCIS and its extensions). Moreover, I have more trust in the long-term openness of Nextcloud as opposed to ownCloud, for historical reasons.

[–] [email protected] 6 points 2 months ago

I've spent a few minutes with it so far; I'm still on the lookout for some documentation. I couldn't find a way to mount storage in either direction so far. It could perhaps be done with an NFS or SMB share on the VM, accessed from the phone with an appropriate app. Also, I couldn't pass through USB, which is a real bummer...

[–] [email protected] 11 points 2 months ago (2 children)

No Doubt on the happy side? Have you listened to the lyrics?

[–] [email protected] 1 points 3 months ago (1 children)

I use GrapheneOS because it works better for me, but considering their personnel and how they operate, I would still trust Calyx as well. It was clearly the best choice for my Fairphone back then.

[–] [email protected] 11 points 4 months ago

This was on The X-Files, quarantine those people! 👻

[–] [email protected] 5 points 5 months ago

For those of you who speak German: Hast du einen Scherzkeks gefrühstückt? ("Did you eat a joke cookie for breakfast?")

[–] [email protected] 3 points 5 months ago

Kubrick's version of The Shining. Most likely I would feel differently had I not read the novel first, but the reduction of the story to a Nicholson show pisses me off to the point where I cannot enjoy it for what it is. I'd rather endure the four-plus hours of the less brilliant screenplay of the 1997 version.

[–] [email protected] 1 points 5 months ago

The soundtrack though... (Makes the BS tolerable for me)

 

I'm afraid this is going to attract the "why use podman when docker exists" folks, so let me put this under the supposition that you're already sold on (or at least considering) using podman for whatever reason. (For me, it has been the existence of pods, to be used in situations where pods make sense, but in a non-redundant, single-node setup.)

Now, I was trying to understand the purpose of quadlets and, frankly, I don't get it. It seems to me that as soon as I want a pod with more than one container, what I'll be writing is effectively a Kubernetes configuration plus some systemd-unit-like file, whereas with podman compose I just have the (arguably) simpler compose file and a systemd file (which works for all pod setups).
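
For reference, this is roughly what I mean (file names and paths are made-up placeholders, written from memory, so double-check against the podman-systemd.unit docs): for a multi-container pod I'd still have to write a plain Kubernetes-style YAML for the pod itself, plus a .kube quadlet unit that merely points at it.

```
# ~/.config/containers/systemd/mypod.kube  (rootless; /etc/containers/systemd/ for rootful)
[Unit]
Description=Example pod run via quadlet

[Kube]
# The actual pod definition still lives in a separate Kubernetes-style YAML
Yaml=/home/user/pods/mypod.yaml

[Install]
WantedBy=default.target
```

So the Kubernetes YAML doesn't go away; the quadlet file only replaces the hand-written systemd service around it.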

I do get that it's sort of simpler, more streamlined and possibly more stable to use quadlets to let systemd manage single containers, instead of putting podman run commands into systemd service files. Is that all there is to it, or do people use quadlets as a kind of lightweight almost-Kubernetes distro that leverages systemd in a supposedly reasonable way? (Why would you want to do that when lightweight, fully compliant Kubernetes distros are a thing nowadays?)
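
For the single-container case, what I mean looks something like this (a minimal sketch; image, port and file name are arbitrary placeholders):

```
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example single container via quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` this shows up as a generated `whoami.service`, which is admittedly nicer than wrapping `podman run` in a hand-written unit.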

Am I missing or misunderstanding something?

 

I have just ordered a CCR2004-1G-2XS-PCIe to be used as the firewall of a single server (and its IPMI) that's going to end up in a data center for colocation. I would appreciate a sanity check and perhaps some hints, as I haven't had any prior experience with MikroTik and, of course, no experience at all with such a wild thing as a computer in a computer over PCIe.

My plan is to manage the router over SSH over the internet with certificates, and then open the API / web configurator / perhaps the Windows thingy (WinBox) only on localhost. Moreover, I was planning to use it as an SSH proxy for managing the server as well as for accessing the server's IPMI.
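
In concrete terms I was thinking of something along these lines (from memory and untested on the CCR2004, so treat it as a sketch and double-check the current RouterOS docs; as far as I can tell RouterOS restricts services by allowed source address rather than a literal localhost bind, and the subnet below is a placeholder):

```
# disable everything I don't want reachable at all
/ip service disable telnet,ftp,www,www-ssl,api,api-ssl,winbox
# keep ssh, restricted to trusted source addresses
/ip service set ssh address=203.0.113.0/24
# key-based login for the admin user (key file uploaded beforehand)
/user ssh-keys import public-key-file=admin.pub user=admin
```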

I intend to use the PCIe connection for the communication between the server and the router, and to just connect the IPMI to either of the physical ports.

I have a (hopefully compatible) 1.25G RJ45 transceiver. Since the transceiver is a potential point of failure and losing IPMI is worse than losing the only online connection, I guess it makes more sense to connect to the data center via the RJ45 port and the server's IPMI via the transceiver. (The data center connection is gigabit copper.) Does that make sense? Or is there something about the RJ45 port that should be considered?

I plan to manually forward ports to the server as needed. I do not intend to use the router as some sort of reverse proxy; the server will deal with that.
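
By "manually forward ports" I mean plain dstnat rules, roughly like this (interface name and addresses are placeholders for whatever the PCIe-facing setup ends up being):

```
# forward HTTPS from the uplink to the server behind the router
/ip firewall nat add chain=dstnat in-interface=ether1 protocol=tcp dst-port=443 action=dst-nat to-addresses=192.168.88.10 to-ports=443 comment="https -> server"
```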

Moreover, I want to set up a site-to-site WireGuard VPN connection to my homelab, which would also let me manage the router and the server without the SSH jump.
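
The WireGuard part should be straightforward on RouterOS 7; I'm picturing something like this on the colo side, with the homelab as the peer (keys, hostnames and subnets are placeholders):

```
/interface wireguard add name=wg-home listen-port=13231
/ip address add address=10.10.10.1/30 interface=wg-home
/interface wireguard peers add interface=wg-home public-key="<homelab public key>" endpoint-address=home.example.org endpoint-port=13231 allowed-address=10.10.10.2/32,192.168.0.0/24 persistent-keepalive=25s
```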

Are there any obstacles I am overlooking, or is this plan sound? Is there anything more to consider, or does anyone have further suggestions or a better idea?
