Eccitaze

joined 2 years ago
[–] [email protected] 4 points 1 year ago

After reading this article that got posted on Lemmy a few days ago, I honestly think we're approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding it more data, but that's not really feasible. We've already scraped pretty much the entire internet to get to where we are now, and it's nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.

We also can't ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don't have AI explicitly curate its own dataset, it's highly likely to become a problem in the near future thanks to the tide of AI-generated spam. I have a feeling that companies like Reddit signing licensing deals with AI companies are going to find that those companies mostly want data from 2022 and earlier, similar to manufacturers looking for low-background steel to make particle detectors.

We also can't just throw more processing power at it, because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it's just being masked by VC money subsidizing the cost). Even if cost weren't an issue, we're also starting to run up against hard physical limits, like waste heat, on how much faster we can run current technology.

So we already have a pretty good idea what the answer to "how good AI will get" is, and it's "not very." At best, it'll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It's better than the old memes about "I trained an AI on X episodes of this show and asked it to make a script," but not by much.

As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough--something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI; or 2) the AI companies hit a brick wall, and the flow of VC money gradually slows down, forcing the companies to raise prices and cut costs, resulting in a product that's even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general--the only people still using it at scale will be the ones trying to push disinfo (either in politics or in Google rankings), along with the odd person playing with image generation.

In the meantime, I'm worried for the people working for idiot CEOs who buy into the hype, but most of all I'm worried for artists doing professional graphic design or video production--they're going to have their lunch eaten by Stable Diffusion and Midjourney taking all the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I've heard that pays well~

[–] [email protected] 13 points 1 year ago (2 children)

This article is excellent, and raises a point that's been lingering in the back of my head--what happens if the promises don't materialize? What happens when the market gets tired of stories about AI chatbots telling landlords to break the law, or suburban moms complaining about their face being plastered onto a topless model, or any of the other myriad stories of AI making glaring mistakes that would get any human immediately fired?

We've poured hundreds of billions of dollars into this, and what has it gotten us? What is the upside that makes up for all the lawsuits, lost jobs, disinformation, carbon footprint, and deluge of valueless slop flooding our search results? So far as I can tell, its primary use seems to be in creating things that someone is too lazy to do properly themselves, like cover letters or memes, and inserting Godzilla into increasingly ridiculous situations. There's something there, perhaps, but is it worth using enough energy to power a small country?

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Well, I've tried using it for the following:

  • Asking questions and looking up information in my job's internal knowledge base, using an LLM designed and trained specifically on our public and internal knowledge bases. It repeatedly gave me confidently incorrect answers and linked to nonexistent articles.

  • Deducing a bit of Morse code that didn't have any spaces in it, creating an ambiguous word. I figured it could iterate through the possible solutions easily enough (something like the brute-force sketch below), saving me the time of doing it myself. I gave up in frustration after it repeatedly gave answers that were incorrect from the very first letter.
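
To be clear about how mechanical the task is, here's roughly the enumeration I had in mind--a quick, hypothetical Python sketch (not anything the LLM produced) that brute-forces every valid letter segmentation of a spaceless Morse string:

```python
# Hypothetical brute-force decoder: list every way a spaceless Morse string
# can be split into valid letters (A-Z only, so each letter is 1-4 symbols).
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}
DECODE = {pattern: letter for letter, pattern in MORSE.items()}

def decodings(code):
    """Return every possible reading of a Morse string with no letter breaks."""
    if not code:
        return ['']
    results = []
    for length in range(1, 5):  # letters are 1 to 4 symbols long
        prefix, rest = code[:length], code[length:]
        if prefix in DECODE:
            results += [DECODE[prefix] + tail for tail in decodings(rest)]
    return results

# '....' alone has eight readings: H, II, ES, SE, EEI, EIE, IEE, EEEE
print(decodings('....'))
```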

If I ever get serious about looking for a new job, I'll probably try and have it type up the first draft of a cover letter for me. With my luck, it'll probably claim I was a combat veteran or some shit, even though I'm a fat 40-something who's never even talked to a recruitment officer in my life.

Oh, funny story--some of my coworkers at the job got the brilliant idea to use the company LLM to write responses to users for them. Needless to say, the users were NOT pleased to get messages signed "Company ChatGPT LLM." Management immediately put their foot down, declared it a fireable offense, and made it clear that every request sent to our chatbot was tracked.

[–] [email protected] 1 points 1 year ago

The flipside is that individual landlords aren't necessarily any better than larger corporate landlords--for every individual landlord who rents out their Nan's home at cost and keeps rent increases below inflation, there's probably at least one other who jacks rent up year over year, drags their feet on maintenance, and tries to screw you out of your deposit when you move out. (The ones who do this usually tend to leverage their income into more property and turn into slumlords, though, so the rule of thumb of 'don't make it your only job' still largely applies.)

The real core of the issue is that we haven't built any new public housing in a good two decades now, and the market has decided that the only new housing we should build is million-dollar McMansions squeezed onto lots that would previously have held a much smaller house with a decent yard.

What should be done is a massive investment in public housing at all levels of government to meet the unmet demand for low-cost housing, but we've been so collectively conditioned by four decades of Reagan-era "government is not the solution, it is the problem" neoliberal thinking that the odds of this ever happening are roughly on par with McConnell agreeing to expand the Supreme Court and eliminate the Electoral College.

[–] [email protected] 13 points 1 year ago (1 children)

I think that's just called informally splitting a mortgage, homie

[–] [email protected] 34 points 1 year ago (12 children)

We're making our last payment on our EV this month, and a few weeks ago I brought up the idea of maybe trading it in for a newer EV, since our current one was starting to show signs of possible battery degradation and it's a Leaf that's stuck with CHAdeMO charging instead of CCS/NACS charging. My husband asked me what car we'd consider replacing it with, and the instant I floated maybe looking at a used Tesla, my husband barked back "Absolutely NOT!" And the thing was, I couldn't find myself disagreeing, either.

I know that my husband and I are far from the only ones who think the same way.

[–] [email protected] 1 points 1 year ago (1 children)

I always thought it originated from comic books where nerds would rank powers of various heroes and the S tier was named for Superman because he was in a tier of his own.

[–] [email protected] 7 points 1 year ago (1 children)

Yeah, happy to help. Sealioning really fucking sucks, because the only ways to counter it are:

  • Insult the troll until they go away

  • Refuse to play their game and give short, pithy responses without doing any research (or not linking the research you did)

  • Ignore the troll entirely

  • Copy your response and paste it whenever you see the troll asking the same question (which someone is doing in this very thread)

  • Create and maintain a collection of ready-to-go arguments with citations that you can copy/paste at the drop of a hat, which is a fair bit of work in and of itself

In case it's not obvious, most of the counters for sealioning look almost exactly like trolling itself, and at first glance it's almost impossible to tell a sealion apart from someone looking for a legitimate discussion--short of keeping track of individual usernames and watching them across multiple threads, the only way to know for sure that someone is a sealion is for at least one person to feed the troll at least one good response. It's what makes sealioning such an insidious technique: fighting a sealion almost always results in a lower quality of discussion, giving the sealion another kind of victory.

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (3 children)

It's a specific form of trolling/bad-faith argument named after this comic. The idea behind sealioning is that you feign politeness and badger someone with seemingly-simple questions (that in reality require a sizable amount of time to answer) to goad them into debating you. This can take the form of asking someone to elaborate on a point, or to provide citations to support a claim. If the victim takes the bait and responds legitimately, the troll ignores most of the message and claims any citations are invalid for some reason (biased source, misrepresenting what the article says, or just pretending it doesn't exist). The troll then cherry-picks a few statements and asks more questions about those, continuing the cycle. If the victim refers to previous posts, the troll pretends it either didn't happen or didn't actually answer their question (it did). If the victim refers to previously linked articles, the troll dismisses them and insists the victim provide "better" articles (which the troll will also dismiss out of hand). If the victim ever tells the troll to fuck off, the troll claims the moral high ground and says they just "want a civil discussion" and "reasoned debate" over the topic.

The goal is something like a reverse Gish gallop. Where a Gish gallop aims to overwhelm the victim with more arguments than can be quickly addressed, in the hope that your opponent can't or won't take the time to respond and walks away, allowing you to claim victory, sealioning aims to trick the victim into spending hours writing a message that you can respond to in under a minute with a few simple questions, creating a kind of denial-of-service attack.

[–] [email protected] 3 points 1 year ago (2 children)

🙄 And right on cue, here come the techbros with the exact same arguments I've heard dozens of times...

The problem with the "AI as a tool" theory is that it abstracts away so much of the work of creating something that what little meaning the AI "author" puts into the work is drowned out by the AI itself. The author puts in a sentence, maybe a few words, and magically gets multiple paragraphs, or an image that would take them hours to make on their own (assuming they had the skill). Even if they spend hours learning how to "engineer" a prompt, the effort they put in to generate a result that's similar (but still inferior) to what actual artists can make is infinitesimal--a matter of a few days at most, versus the multiple years an artist will spend, along with the literal thousands of practice drawings an artist will create to improve their skill.

The entire point of LLMs and generative AI is to reduce the work you put in to get a result to a trivial level; if using AI required as much effort as creating something yourself, nobody would ever bother using it and we wouldn't be having this discussion in the first place. But the drawback of reducing the amount of effort you put in is that you reduce the amount of control you have over the result. So-called "AI artists" have no ability to control the output of an image at the level of the brush or stroke style; they can only control the result of their "work" on the macro level. In much the same way that Steve Jobs claimed credit for creating the iPhone when it was really the hundreds of talented engineers working at Apple who did the work, AI "artists" claim credit for something they had no hand in creating beyond vague directions.

This also creates a conundrum where there's little-to-no ability to demonstrate skill in AI art--from an external viewer, there's very little real difference between the quality of a one-sentence image prompt and one fine-tuned over several hours. The only "skill" in creating AI art is in being able to cajole the LLM to create something that more closely matches what you were thinking of, and it's impossible for a neutral observer to discern the difference between the creator's vision and the actual result, because that would require reading the creator's mind. And since AI "artists," by the nature of how AI art works, have precious little control over how something is composed, AI "art" has no rules or conventions--and this means that one cannot choose to deliberately break those rules or conventions to make a statement and add more meaning to their work. Even photographers, the favorite poster-child of AI techbros grasping at straws to defend the pink slime they call "art," can play with things like focus, shutter speed, exposure length, color saturation, and overall photo composition to alter an image and add meaning to an otherwise ordinary shot.

And all of the above assumes the best-case scenario of someone who actually takes the time to fine-tune the AI's output, fix all the glaring errors and melting hands, and correct the hallucinations and falsehoods. In practice, 99% of what an AI creates goes straight onto the Internet without any editing or oversight, because the intent behind the human telling the AI to create something isn't to contribute something meaningful--it's to make money by farming clicks and follows for ad dollars, driving traffic from Google search results using SEO, and scamming gullible people.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (4 children)

This, more than anything else, is what really worries me about AI. Ignoring all the other myriad worries about AI models, such as using people's works without consent, the ethics of deepfakes and the trivial creation of misinformation, the devastation of creative professions, and the massive carbon footprint, the fundamental truth is that all the creative output of humanity means something. Everything has some form of purpose, some intention behind it, even if that intention is trivial. AI-generated material has no such meaning behind it, just what its own probability table says should come next. In other words, this lack of meaning in AI content arises because AI has no understanding of the world around it--it has no ability to perform deductive reasoning. That flaw has untold implications:

  • It cannot say "no" of its own volition, because it has no understanding of the concept. This results in behavior where, if you tell it "don't use emojis" or "don't call me that name," the LLM basically ignores the "don't," its probability table just processes "use emojis" or "use this name," and it starts flooding its responses with the very thing you told it not to do. This flaw gets misinterpreted as the LLM "bullying" the user.

  • It has no ability to determine right from wrong, or true from false. This is why, absent guardrails, an LLM will happily create CSAM or disinformation, cite nonexistent cases in legal briefs, invent nonexistent functions in code, or exhibit any of the myriad behaviors we collectively refer to as hallucinations. It's also why all the attempts by OpenAI and other companies to fix these issues are fatally flawed--without a foundation of deductive reasoning and the associated understanding, any attempt to prevent this behavior results in a cat-and-mouse game where bad actors find loophole after loophole, each one solved by more and more patches. I also suspect that these tweaks are gradually degrading the performance of chatbots over time, producing an effect similar to RoboCop 2, when OCP overwrites RoboCop's three directives with 90+ focus-grouped rules, producing a wholly toothless and ineffective automaton.

  • Related to the above, and as discussed in the linked article, LLMs are effectively useless at determining the accuracy of any statement that falls outside their training data. (Hell, I would argue that they're also suspect for the purpose of summarizing text, and anything they summarize should be double-checked by a human, at which point you're saving so little time you may as well do it yourself.) This makes the use of AI in scientific review particularly horrifying--while human peer review is far from perfect, it at least has some ability to catch flawed science.

  • Finally, AI has no volition of its own, no will. Because LLMs lack deductive reasoning, they cannot act without being directed to do so by a human. An LLM cannot spontaneously decide to write a story, or strike up a conversation, or post to social media, or write a song, or make a video. That spontaneity--that desire to make a statement--is the essence of meaning.

The most telling sign of this flaw is that generative AI has no real null state. When you ask an AI to do something, it will do it, even if the output is completely nonsensical (ignoring the edge cases where you run afoul of the guardrails, and even that is more the LLM saying yes to its safeguard than it is saying no to you). Theoretically, AI is just a tool for human use, and it's on the human using AI to verify the value of the AI's output. In practice, nobody really does this outside of the "prompt engineers" making AI images (I refuse to call it art), because it runs headfirst into the wider problem of it taking too damn long to review by hand.

The end result is that this flood of AI content is meaningless, and it's overwhelming the output of actual humans making actual statements with intention and meaning behind them. Even the 90% of human-made content that Sturgeon's Law says is worthless has more meaning and value than the dreck AI is flooding the world with.

AI is robbing our collective culture of meaning. It's absorbing all of the stuff we've made, and using it to drown out everything of value, crowding out actual creative human beings and forcing them out of our collective culture. It's destroying the last shreds of shared truth that our culture had remaining. The deluge of AI content is grinding the system we built up over the last century to share new scientific research to a halt, like oil sludge in an automobile engine. It's accelerating an already prevalent problem I've observed of cultural freeze, where new, original material cannot compete with established properties, resulting in pop culture being largely composed of remakes of older material and riffs on existing stories; for, if in a few years, 99% of creative work is AI-generated trash, and humans cannot compete against the flood of meaningless dreck and automated, AI-driven content theft, why would anyone make anything new, or pay attention to anything made after 2023?

The worst part of all this is that I cannot fathom a way to fix it, short of the invention of AGI and the literal end of the value of human labor. While it would be nice if humanity collectively woke up and realized AI is a dangerous scam, the odds of that are practically nil, and there will always be someone willing to abuse AI.

There's only two possible outcomes at this point: either the complete collapse of our collective culture under the weight of trash AI content, or the utopia of self-directed, coherent, meaningful AI content.

...Inb4 the AI techbros flood this thread with "nuh-uh" responses.

[–] [email protected] 0 points 1 year ago (1 children)

Compared to how much effort it takes to learn how to draw yourself? The effort is trivial. It's like entering a Toyota Camry into a marathon and then bragging about how well you did and how hard it was to drive the course.
