I didn't ignore it; you took the third paragraph as the only point and ignored everything I highlighted.
Cowbee
Quoting and bolding your own reference seems to be an easy way to counter here:
Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.
The article deliberately rails against mystifying AI and attributing human cognition to it, but it fully acknowledges that AI, as it presently exists, has uses. Treating those uses as distinct from human cognition, not as a replacement for it, is what's important, as is not fetishizing AI the way some AI dogmatists do.
Isn't that argument fundamentally based on the user misanalyzing the use-case of AI and what it can and cannot do? The article you linked argued for clearly understanding AI, its limits, and so on, not for rejecting it dogmatically in all cases.
As for pixel-perfect recreation, AI is improving, and will continue to improve whether or not you or I approve. The hypothetical is important because it reveals something about use-value.
If two images are pixel-for-pixel the same, then their use is the same. I don't appreciate being called stupid just because I don't believe there is a metaphysical quality to a .png taken with a camera that looks exactly the same as an AI-generated output, especially if it's for something as mundane as getting across an idea like "office worker eats corncob while laughing." Plus, as I already told you, I don't personally use it because I don't have a use for it.
I'm not a fan of the style of argument that consists of "go read this thing," but I read the whole thing. It doesn't contradict me; it contradicts you. The article argues that machines cannot replace cognition, can't replace art, for example, but acknowledges right at the beginning that AI has uses, and that drawing clear lines between what's damaging and what's not, between what LLMs can and cannot do, is the task at hand. Your argument is that the use of AI, in all circumstances, is cognitively damaging for the individual. That is an entirely distinct argument that your article doesn't back up.
These are distinct hypotheticals.
In the first case, the question is: if the output is equivalent, does the use-value change? The answer is no.
In the second case, the question is "if we can tell, does it matter?" And the answer is yes in some cases, no in others. If the reason we want a painting is its artisanal creation, but it turns out it was AI-generated, then it fundamentally cannot satisfy the use of an image appreciated for its artisanal creation. If the reason we want an image is to convey an idea, such that it would be faster, easier, and higher quality than an amateur sketch, but in no way needs to be appreciated for its artisanal creation, then it does not matter whether we can tell or not.
Another way of looking at it is a mass-produced chair vs. a hand-crafted one. If I want a chair that lets me sit, then it doesn't matter which chair I have; both are equivalent in that they satisfy the same need. If I have a specific vision and desire the chair as it exists artisanally, say, by being created in a historical way, then they cannot be equivalent use-values for me.
How is AI cognitively damaging under all circumstances? You just left this hanging like it's a fact, but that requires incredible effort to prove. Is using a calculator cognitively damaging? What about a search engine? What is it about using AI that makes it cognitively damaging?
"Every random idea" means AI can take the place of some stock photos, not all: we don't need the traditional stock-photo process for every random idea, because AI can replace some of them. As for the quality of the output, that varies from case to case. Further, the idea isn't to replace human art in general, but to exist alongside it in instances where a human artisanally producing the image isn't the purpose, just the traditional means to an end. Therefore, it doesn't actually matter whether we can tell or not; the goal isn't to deceive, though even that line is getting blurrier and blurrier as AI improves.
Essentially, if an AI image can fulfill the same purpose as a stock image, then the act of creating the stock image through traditional means is just unnecessary expenditure of effort. We don't traditionally appreciate stock images for their artistic merit, but for a visual function, be it to convey information or otherwise, not because our goal is to appreciate and understand the artistic process a human went through to create it.
The core of your argument seems to be that using AI, under all circumstances, is cognitively damaging. You also call it a process and not a tool, but all tools have an associated process, including correct and incorrect processes. A hammer can be misgripped, causing strain on muscles and thus pain. You can also use a hammer for the wrong purpose, like driving a screw rather than a nail; you can kinda do it, but it's less efficient at best and harmful at worst. AI is similar.
I have never said that a process has no effect on the person performing a process. You still aren't adhering to materialism fully, even if you have improved. It's not about being bad-faith, I've been good faith this entire time even as you've openly mocked me.
Well, up front, it's nice that you at least cleared up that you don't consider Marxism to be socialist. I disagree with that, of course, but now that we've established that your definition of socialism is exclusionary of Marxism, then that does at least mean we can have a consistent conversation.
As for delegates vs. representatives, the PRC's democracy extends beyond simply voting for candidates and representatives. I already explained that each rung makes decisions about what its area needs and elects from among its members delegates whom it can recall. People's integration into politics isn't relegated to simple elections, but includes consensus building, feedback, drafts of policy, etc.
As for ownership, your argument was that politicians are literally owners of publicly owned industry, which isn't how public ownership works anywhere. Even if the PRC centrally plans the majority of its large firms and key industries, that doesn't mean those large firms and key industries are run for profit, operated for the personal enrichment of capitalists, exposed to market competition, etc. There's nothing at all resembling capitalism there, so "state capitalism" is an absurdity. I gave clear examples of capitalist systems with heavy state involvement, like Singapore, that better fit "state capitalism."
Either way, this will be my last comment too. Have a good one!
"We" in this moment is you, right now. If the end product is the same, then it is the same. If the process is the use-value, then it matters; if not, it doesn't.
Ideas and symbols matter, sure, but not because of any metaphysical value you ascribe to them; they matter because of the ideas they convey.