this post was submitted on 26 Jul 2023
8 points (100.0% liked)

LocalLLaMA

3372 readers
7 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks against community members. I.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
top 5 comments
[–] [email protected] 1 points 2 years ago (1 children)

I have a feeling this is going to go similarly to Stable Diffusion's big 2.0 flop. SD built its limits in through the training data; Meta put its limits in via the terms and conditions. The end result for both will still be that the community gravitates toward whatever is usable with the most freedom attached to it. The most annoying part of the TOS is that you can't use the output to improve other models.

Fuck you Meta, I wanna make a zillion baby specialist models.

[–] [email protected] 3 points 2 years ago (1 children)

Well, I've had other arguments about OpenAI prohibiting the use of its output to improve other models... I'm not sure. It contradicts my sense of what's right and wrong for Meta or OpenAI to use copyrighted content to train their models, then claim copyright and ban me from using the output for the same purpose.

[–] [email protected] 2 points 2 years ago

Good point. I think I'll do whatever I want with it and just keep my trap shut. Good luck proving anything, Zuck.

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago)

I used it and was not impressed... I found Wizard LM to be far superior.

Also, I agree with @wagesj45 up there about training other models... but how would they detect that you're training other models with it? I think one of the best things you can do with a large model is to train a small specialist model.
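
To sketch what I mean (a rough, untested outline; the model names, prompts, and hyperparameters are just placeholders, and whether the licence even allows step 2 is the whole argument above): have the big model generate domain-specific completions, then fine-tune a small model on them with an ordinary causal-LM loss.

```python
# Rough sketch (untested): distil a big local model into a small specialist.
# Everything here is a placeholder -- model names, prompts, hyperparameters.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

TEACHER = "meta-llama/Llama-2-13b-chat-hf"  # big generalist (assumed downloaded)
STUDENT = "EleutherAI/pythia-410m"          # small model to specialise

# 1) Generate synthetic training examples with the teacher.
teacher_tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(
    TEACHER, torch_dtype=torch.float16, device_map="auto")

prompts = ["Summarize this changelog: ...", "Summarize this bug report: ..."]
records = []
for prompt in prompts:
    inputs = teacher_tok(prompt, return_tensors="pt").to(teacher.device)
    output = teacher.generate(**inputs, max_new_tokens=128, do_sample=False)
    completion = teacher_tok.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    records.append({"text": prompt + completion})

# 2) Fine-tune the student on the teacher's outputs with a plain causal-LM loss.
student_tok = AutoTokenizer.from_pretrained(STUDENT)
student_tok.pad_token = student_tok.eos_token
student = AutoModelForCausalLM.from_pretrained(STUDENT)

def tokenize(batch):
    enc = student_tok(batch["text"], truncation=True,
                      max_length=512, padding="max_length")
    enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
    return enc

train_ds = Dataset.from_list(records).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="specialist", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=train_ds,
)
trainer.train()
```

On modest hardware you'd probably swap the full fine-tune for a LoRA, but the shape of the pipeline stays the same.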

[–] [email protected] 1 points 2 years ago

People may not love the model or its outputs, but it's hard to deny the impact that releases like this have on the open-source community. Such a positive bonus, and I'm really happy they're continuing.
