You realize LLMs are deliberately built not to self-improve, right? It’s totally possible and has been tried; it just usually doesn’t end well when they do.
Tay is yet another example of AI lacking comprehension and intelligence; it produced racist and antisemitic content because it had no comprehension of ethics or morality, and so it just responded to the input given to it. It's a display of "intelligence" on the same level as a slime mold seeking out the biggest nearby source of food--the input Tay received was largely racist/antisemitic, so its output became racist/antisemitic.
And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain an LLM on new information. It’s up to the human guiding the AI to steer it toward something that isn’t copyright infringement.
And the way humans do that is by not using copyrighted material in the training dataset. Using copyrighted material to produce an AI model infringes on the rights of the people who created that material, the vast majority of whom are small-time authors, artists, and open-source projects made up of individuals contributing their own time and effort. Full stop.
Also, you say “right” and “probable” are no different, yet once again you bring something into the conversation that can only be “right”: code. You cannot write incorrect code and have it still work. Text and creative works cannot be wrong in that way; they can only be judged by opinion, not by a rule book that says “it works” or “it doesn’t”.
Then why does ChatGPT invent PowerShell cmdlets out of whole cloth, ones that don't exist yet supposedly accomplish the exact task the prompter asked it to do?
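To make that concrete, here's a rough sketch of the kind of thing I mean. The task, the path, and the first cmdlet are all made up for illustration; only the second snippet uses cmdlets that actually ship with PowerShell.

```powershell
# Hypothetical hallucinated answer: "Remove-DuplicateFiles" is NOT a real cmdlet,
# it just looks like one and happens to match the prompt exactly.
Remove-DuplicateFiles -Path "C:\Photos" -Recurse -KeepNewest

# Doing the same job with cmdlets that really exist takes actual plumbing:
Get-ChildItem -Path "C:\Photos" -Recurse -File |
    Get-FileHash |                                   # hash each file's contents
    Group-Object -Property Hash |                    # group identical files
    Where-Object { $_.Count -gt 1 } |                # keep only duplicate groups
    ForEach-Object { $_.Group | Select-Object -Skip 1 } |  # keep one copy per group
    Remove-Item -WhatIf                              # -WhatIf previews the deletions
```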
The last line is just a bit strange, honestly. The biggest users of AI are creative minds, which is why it’s important that AI models remain open source, so all creative minds can use them.
The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists' labor is anywhere near equivalent to the literal collective millions of man-hours artists have spent honing their skills to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents...
Frankly, I'm absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations that take data produced en masse by individuals, without so much as the most cursory attribution (to say nothing of consent or compensation), and use it for their own profit. It's no different, morally or ethically, from Meta hoovering up all of our personal data and reselling it to advertisers.
That's part of it, yes, but nowhere near the whole issue.
I think someone else summarized my issue with AI elsewhere in this thread--AI as it currently stands is fundamentally plagiaristic, because it cannot be anything more than the average of its inputs, nor greater than their sum. If you ask ChatGPT to summarize the plot of The Matrix and write a brief analysis of its themes along with its own opinion, ChatGPT doesn't watch the movie, do its own analysis, and give you its own summary. Instead, it pulls up the parts of its training data that relate to "The Matrix," "movie summaries," and "movie analysis"--likely an article written by Roger Ebert, maybe some scholarly articles, maybe some Metacritic reviews--and spits out a response that stitches those pieces together into something that sounds relatively coherent.
Another issue, in my opinion, is that ChatGPT can't take general concepts and extend them further. To go back to the movie summary example: if you asked a regular layperson to analyze the themes in The Matrix, they would likely focus on the cool gun battles and neat special effects. If that same layperson attended a four-year college, earned a bachelor's in media studies, and was then asked to do the exact same analysis of The Matrix, their answer would be drastically different, even if their entire degree never discussed The Matrix once. That's because a person is (or at least should be) capable of taking generalized concepts and applying them to specific scenarios--in other words, they can take the media analysis concepts learned over that four-year degree and apply them to a specific work, even if those concepts were never explicitly applied to that work in class.

AI, as it currently stands, is incapable of this. As another example, say a brand-new programming language came out tomorrow that was entirely unrelated to any existing language. AI would be nigh-useless at analyzing or helping produce code in that language--even if it were dead simple to use and understand--until enough humans had published code samples that could be fed into its training data.