Puttybrain

joined 2 years ago
[–] [email protected] 2 points 2 years ago

I've been using uncensored models in Koboldcpp to generate whatever I want, but you'd need enough RAM to run the models.

I generated this using Wizard-Vicuna-7B-Uncensored-GGML, but I'd suggest using at least the 13B version.

It's a basic reply but it's not refusing
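For anyone wanting to try the same setup, a minimal launch command might look like the sketch below. The exact model filename and quantization are assumptions (the commenter only named the model family), and the flags shown are typical Koboldcpp options, not something posted above:

```shell
# Hypothetical sketch: launch Koboldcpp with a quantized GGML model.
# Download the .bin you want first; the filename here is an assumption.
python koboldcpp.py \
  --model Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin \
  --contextsize 2048 \
  --port 5001
# Then open http://localhost:5001 in a browser to chat with it.
```

A 7B model quantized to 4 bits needs roughly 4-6 GB of RAM, which is why the RAM caveat above matters, especially on a phone.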

[–] [email protected] 1 points 2 years ago (1 children)

Not to throw out an unwanted suggestion, but would an OVHcloud Eco server work for this?

I'm not an expert in any way, but I've had no issues with my instance, and it was a lot cheaper than anything else I could find.

[–] [email protected] 1 points 2 years ago

It's Wizard-Vicuna-7B-Uncensored-GGML

Been running it on my phone through Koboldcpp

Rule (beehaw.org)
 
[–] [email protected] 9 points 2 years ago

196 isn't a place, it's a people

[–] [email protected] 1 points 2 years ago

Congrats to Iceland!

[–] [email protected] 0 points 2 years ago (6 children)

According to the Apollo dev, the app was taking in about $500,000 a year ($10/year from 50,000 subscriptions). I don't know if anyone else has revealed any figures.

Source
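The arithmetic behind that figure is just the quoted per-subscription price times the subscriber count; a quick sanity check (the $10/year rate and 50,000 count are as stated above):

```python
# Sanity check of the quoted Apollo revenue figure:
# 50,000 subscriptions at $10/year each.
subscriptions = 50_000
price_per_year_usd = 10
annual_revenue = subscriptions * price_per_year_usd
print(annual_revenue)  # 500000
```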