ramielrowe

joined 2 years ago
[–] [email protected] 7 points 3 weeks ago

If you're considering video transcoding, I'd give Intel a look. Quick Sync is pretty well supported across all of the media platforms. I do think Jellyfin is on a much more modern ffmpeg than Plex, and it actually supports AMD, but I don't have any experience with that, only Nvidia and Intel. You really don't need a powerful CPU either. I've got my Plex server on a little i5 NUC, and it can do 4K transcodes no problem.
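
For what it's worth, if you go the Intel route with Docker, passing the iGPU through is just a device mapping. A minimal compose sketch, assuming a Linux host that exposes the iGPU at /dev/dri and Jellyfin as the example container (image tag, paths, and port are placeholders, not from my setup):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri   # expose the Intel iGPU so Quick Sync is available inside the container
    volumes:
      - /srv/jellyfin/config:/config   # placeholder config path
      - /mnt/media:/media:ro           # placeholder media path
    ports:
      - "8096:8096"
```

Hardware transcoding still has to be switched on in the app's transcode settings afterwards.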

[–] [email protected] 26 points 3 weeks ago (7 children)

You really don't need an AIO with a 5600X. Just grab a reasonably sized tower cooler and call it a day. There's less to fail, and less risk of water damage if it fails catastrophically. I've found Thermalright to be exceptionally good for how well priced they are. Not as quiet as Noctua, but damn near the same cooling performance.

Another thing to consider is that a 5600X doesn't have built-in graphics. For that you'd need an AM4 APU like the 5600G, or a jump up to AM5 and a 7600X.

[–] [email protected] 24 points 2 months ago

A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the directory listings and file contents on the fly.

[–] [email protected] 6 points 3 months ago (3 children)

Honestly, I don't mind them adding ads. They've got a business to support. But, calling them "quests" and treating them as "rewards" for their users is just so tone-deaf and disingenuous. Likewise, if I've boosted even a single server, I shouldn't see this crap anywhere, let alone on the server I've boosted.

[–] [email protected] 13 points 3 months ago (3 children)

After repeated failures to pass a test, I do not think it is unreasonable for the business to stop paying for your attempts at a certification, whether that's directly via training sessions and testing fees, or indirectly via your working hours.

[–] [email protected] 17 points 3 months ago (1 children)

In the US, salaried engineers are exempt from overtime pay regulations. He is telling them to work 20 extra hours, with no extra pay.

[–] [email protected] 3 points 6 months ago

Commentary from someone quite trusted in the historical gun community and who's actually shot multiple Welrods/VP9s: https://www.youtube.com/shorts/POubd0SoCQ8

It's not a VP9. Even at the very start of the video, on the first shot before the shooter even manually cycles the gun, gas is ejected backwards out of the action rather than forward out of the suppressor.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

In general, on bare metal, I mount below /mnt. For a long time, I just mounted in from host mounts I'd set up ahead of time. But I use Kubernetes, and you can directly specify an NFS mount there, so I eventually migrated everything to that as I made other updates. I don't think it's horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that's one less thing to set up if you need to re-provision your docker host.
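
For reference, compose does support this via the local volume driver. A sketch of what it looks like; the server address, export path, and the stand-in service are all placeholders:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw,nfsvers=4   # placeholder NFS server address
      device: ":/export/appdata"          # placeholder export path (leading colon is required)

services:
  app:
    image: nginx:alpine   # stand-in image just to show the volume being consumed
    volumes:
      - appdata:/data
```

Docker mounts the export itself when a container using the volume starts, so there's nothing extra to put in fstab on the host.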

(quick edit) I don't think docker compose reads and re-reads compose files. They're read when you invoke docker compose but that's it. So...

If you're simply invoking docker compose to interact with things, then I'd say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that to your docker host(s). I would also consider version controlling your compose files. If you're concerned about secrets, store them in encrypted env files. Something like SOPS can help with this.
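
As a rough sketch of that pattern (image name and file names are placeholders): keep the encrypted file in version control, decrypt it right before bringing the stack up, and have the compose file reference only the decrypted copy.

```yaml
# Decrypt first, e.g.: sops -d secrets.enc.env > secrets.env
# Keep the plaintext secrets.env out of version control.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    env_file:
      - ./secrets.env
```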

As long as the user invoking docker compose can read the compose files, you're good. When it comes to mounting data into containers from NFS... yes, permissions will matter, and it can be a pain depending on how flexible the container you're using is about user and filesystem permissions.
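
If the image supports it, the least painful approach is usually to run the container as a UID/GID that actually owns the files on the export. A sketch, with 1000:1000 and the paths as placeholders:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    user: "1000:1000"                   # match the owner of the files on the NFS export
    volumes:
      - /mnt/nfs/appdata:/data          # placeholder host-side NFS mount
```

A lot of images (the linuxserver.io ones, for example) expose the same idea through PUID/PGID environment variables instead.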

[–] [email protected] 1 points 7 months ago

See Docker's documentation on supported backing filesystems for container filesystems.

In general, you should treat your container root filesystems as completely ephemeral, but you will generally want them local and low latency. If you move most of your data to NFS, you can hopefully just keep a minimal local disk for images/containers.

As for your data volumes, it's likely going to be very application specific. I've got Postgres databases running off remote NFS that are totally happy. I don't fully understand why Plex struggles to run its database/config dir from NFS. Disappointingly, I generally have to host it on a filesystem and disk local to my docker host.

[–] [email protected] 5 points 7 months ago (3 children)

In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that; otherwise, you're going to need to manually copy data out of the containers. Personally, if all you're talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers pointing at those locations.

All this said, though, some applications really don't like their data stored on NFS. I know Plex really doesn't function well when its database is on NFS. But the Plex media directories are fine to host from NFS.
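
A sketch of that split, with placeholder paths and the official Plex image as an example:

```yaml
services:
  plex:
    image: plexinc/pms-docker:latest
    volumes:
      - /srv/plex/config:/config        # local disk: Plex's database lives under /config
      - /mnt/nfs/media:/data/media:ro   # NFS is fine for the media files themselves
    ports:
      - "32400:32400"
```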

[–] [email protected] 11 points 7 months ago (1 children)

I mean, if you get hit by something, that tends to happen suddenly.

[–] [email protected] 6 points 8 months ago (1 children)

Realistically, probably not. If your workload is highly memory bound, and sensitive to latency, you would be leaving a little performance on the table. But, I wouldn't stress over it. It's certainly not going to bottleneck your CPU.
