vluz

joined 2 years ago
[–] vluz@kbin.social 5 points 1 year ago

Messing around with the system Python/pip and newly installed versions till everything was broken, and then looking at the documentation.
This was way back in the '00s and I'm still ashamed of how fast and how completely I messed it up.

[–] vluz@kbin.social 2 points 1 year ago (1 children)

Just figured out there are 10 places called Lisbon dotted around the US, according to the search.

[–] vluz@kbin.social 3 points 1 year ago

Got one more for you: https://gossip.ink/
I use it via a docker/podman container I've made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general

[–] vluz@kbin.social 3 points 2 years ago (4 children)

I got cancelled too and chose Hetzner instead. Will not do business with a company that can't get their filters working decently.

[–] vluz@kbin.social 3 points 2 years ago (1 children)

Not close enough for V.A.T.S.

[–] vluz@kbin.social 7 points 2 years ago (1 children)

Lovely! I'll go read the code as soon as I have some coffee.

[–] vluz@kbin.social 2 points 2 years ago

That's much better. It is a very interesting problem, as you put it.

[–] vluz@kbin.social 2 points 2 years ago (2 children)

We know remarkably little about how AI systems work

Every single time I see this argument used, I stop reading.

[–] vluz@kbin.social 3 points 2 years ago

I do SDXL generation in 4GB, at an extreme expense of speed, by using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.

SDXL takes around 3 to 4 minutes per generation, including the refiner, but it works within those constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4GB and a 1060 with 3GB of VRAM.

I have an implementation running on the 3GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.

I'm working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I've looked into this news; SSD-1B might be a good candidate for my dumb experiments.
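For a sense of what "a number of memory optimizations" means in practice, the usual low-VRAM knobs in the Hugging Face diffusers library look roughly like this. This is a hedged sketch, not the author's actual script: the helper function name is made up, and it only toggles whichever of the (real) `enable_*` pipeline methods happen to exist on the object it is given.

```python
# Sketch (not the exact setup described above): toggle the common diffusers
# memory optimizations on an SDXL pipeline. The helper name is hypothetical;
# the enable_* methods are real diffusers pipeline methods.
def apply_low_vram_optimizations(pipe):
    """Enable whichever low-VRAM options this pipeline exposes."""
    applied = []
    for name in (
        "enable_attention_slicing",       # compute attention in chunks
        "enable_vae_slicing",             # decode latents slice by slice
        "enable_sequential_cpu_offload",  # keep weights in RAM, stream to GPU
    ):
        method = getattr(pipe, name, None)
        if callable(method):
            method()  # all three can be called with no arguments
            applied.append(name)
    return applied
```

With a real pipeline this would be something like `apply_low_vram_optimizations(StableDiffusionXLPipeline.from_pretrained(...))`. Sequential CPU offload is the big lever for squeezing SDXL into 3-4GB of VRAM, and it is also the main reason generations take minutes instead of seconds.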

[–] vluz@kbin.social 6 points 2 years ago (1 children)

Goddammit! Don't tell that one, I use it to impress random people at parties.

[–] vluz@kbin.social 2 points 2 years ago

Not joking, although I understand it seems very silly at face value.
Dark Souls 3 PvP, specifically SL60+6 at gank town (after Pontiff).
It used to be my go-to wind down after a work day.
It made me smile and actually relaxed me enough to go to bed and sleep, especially after a hard day.

 

Not broken at all...

 

Hi,

For a media project, I need to create dark-fantasy-themed backgrounds.
They will be heavily inpainted to meet each background's needs.

I'm looking for models, loras, styles, examples, tutorials, etc.
Anything you can think around dark fantasy is valuable, please feel free to suggest linked or related subjects or resources.

Thanks in advance for any help.

 

Hi,

This is not exactly my area and I'm lost in a sea of solutions, so I need help.
There are so many out there, and I don't know which of them are still maintained, whether they offer a full solution, how long it takes to bring an instance up, etc.

The problem is simple to describe.
I want to set up access to GPU instances in order to run any Python code the project devs have built.
The hardware consists of several servers with GPUs that support vGPU, NVIDIA's GPU virtualization solution.

I'm looking for something similar to https://www.runpod.io/

What open source software can be used to spawn the client machines from the existing hardware pool?

I'm looking into Kubernetes for automation and MAAS from Canonical for the rest. Am I missing something important?

Any help or insight would be appreciated.
