theluddite

joined 2 years ago
[–] [email protected] 2 points 1 month ago

I get your point and it's funny, but it's different in important ways that are directly relevant to the OP article. The parent comment uses the instrumental theory of technology to dismiss the article, which roughly argues that antidemocracy is a property of AI. I'm saying not only that that's a valid argument, but that these kinds of properties are important, cumulative, and can fundamentally reshape our society.

[–] [email protected] 6 points 1 month ago (2 children)

I don’t like this way of thinking about technology, which philosophers of tech call the "instrumental" theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don't help us explain. Similarly, society shapes the way that we make technology.

In making technology, engineers and designers are constrained by the rules of the physical world, but that is an underconstraint. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, which are cumulative in time. To return to the example of mass transit vs cars, these obviously have different embedded values within them, which then go on to shape the world that we make around them. We wouldn't even be fighting about self-driving cars had we made different technological choices a while back.

That said, on the other side, just because technology is more than just a tool, and does have values embedded within it, doesn't mean that the use of a technology is deterministic. People find subversive ways to use technologies in ways that go against the values that are built into it.

If this topic interests you, Andrew Feenberg's book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.

[–] [email protected] 1 points 2 months ago

Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum's book. It'd save me a lot of talking!

[–] [email protected] 17 points 2 months ago

I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."

[–] [email protected] 8 points 2 months ago

These articles frustrate the shit out of me. They accept both the company's own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.

[–] [email protected] 52 points 2 months ago* (last edited 2 months ago) (10 children)

I am once again begging journalists to be more critical ~~of tech companies~~.

But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

[...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not to normal people who drive almost exclusively during their commutes (which is probably the most dangerous time to drive, since it's precisely when everyone else is on the road too).

We also need to know how often Waymo intervenes in the supposedly autonomous operations. The latest data we have on this, leaked a while back, showed that Cruise (a different company) cars were actually less autonomous than taxis, requiring more than one employee per car.

edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own publishing without knowing how the sausage is made, they can spin their data however they want.

edit2: Updated to say that journalists should be more critical in general, not just about tech companies.

[–] [email protected] 10 points 3 months ago

David Graeber's Debt: The First 5000 Years. We all take debt for granted. It's fascinating to learn how differently we've thought about it over the millennia and how much of our modern world makes more sense when understood through its lens.

[–] [email protected] 2 points 3 months ago

No need to apologize for length with me basically ever!

I was thinking of it the way you did in the second paragraph, but even more stripped down. The algorithm has N content buckets to choose from; once it chooses, success is how much of the video the user watched. For simplicity, the user's only choices are to keep watching or log off. For small N, I think that @[email protected] is right that it's the multi-armed bandit problem, if we assume that user preferences are static. If we introduce the complexity that users prefer familiar things, which I think is pretty fair (so users are more likely to keep watching from a bucket they've seen a lot of), I'd expect exploration to get heavily disincentivized and exhibit some pretty weird behavior, while exploitation becomes much more favorable. What I like about this is that, with only a small deviation from a classic problem, it would help explain what you also describe: getting stuck in corners.
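Here's a toy sketch of that stripped-down version, with every number invented for illustration: an epsilon-greedy agent over N buckets, where the chance the user keeps watching grows with how often they've already been shown that bucket (the self-dependency). With the familiarity term on, whichever bucket gets exploited early snowballs, and most buckets get starved:

```python
import random

def run(n_buckets=10, steps=5000, eps=0.1, familiarity_boost=0.05, seed=0):
    rng = random.Random(seed)
    # each bucket's intrinsic appeal, fixed for the whole run
    base = [rng.random() * 0.5 for _ in range(n_buckets)]
    seen = [0] * n_buckets    # how often the user has been shown each bucket
    est = [0.0] * n_buckets   # agent's running estimate of each bucket's reward
    pulls = [0] * n_buckets
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_buckets)                     # explore
        else:
            arm = max(range(n_buckets), key=lambda i: est[i])  # exploit
        # self-dependency: familiarity raises the chance the user keeps watching
        p = min(1.0, base[arm] + familiarity_boost * seen[arm] ** 0.5)
        reward = 1.0 if rng.random() < p else 0.0
        seen[arm] += 1
        pulls[arm] += 1
        est[arm] += (reward - est[arm]) / pulls[arm]  # incremental mean
    return pulls

pulls = run()
# pulls piles up on a few buckets; the starved buckets only ever
# get the occasional exploration step
```

Setting `familiarity_boost=0` recovers the ordinary static-preference bandit, which makes for a direct comparison of how much the self-reinforcement distorts exploration.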

Once you allow user choice beyond consume/log off, I think your way of thinking about it, as a turn based game, is exactly right, and your point about bin refinement is great and I hadn't thought of that.

[–] [email protected] 4 points 3 months ago

Yeah I really couldn't agree more. I really harped on the importance of other properties of the medium, like brevity, when I reviewed the book #HashtagActivism, and how those too are structurally right wing. There's a lot of scholars doing these kinds of network studies and imo they way too often emphasize user-user dynamics and de-emphasize, if not totally omit, the fact that all these interactions are heavily mediated. Just this week I watched a talk that I thought had many of these same problems.

[–] [email protected] 1 points 3 months ago

I knew you were the person to call :)

[–] [email protected] 3 points 3 months ago (4 children)

Thanks!

I feel enlightened now that you called out the self-reinforcing nature of the algorithms. It makes sense that an RL agent solving the bandits problem would create its own bubbles out of laziness.

You're totally right that it's like a multi-armed bandit problem, but maybe with so many possibilities that searching is prohibitively expensive, since the space of options to search is much bigger than the rate that humans can consume content. In other ways, though, there's a dissimilarity because the agent's reward depends on its past choices (people watch more of what they're recommended). It would be really interesting to know if anyone has modeled a multi-armed bandit problem with this kind of self-dependency. I bet that, in that case, the exploration behavior is pretty chaotic. @[email protected] this seems like something you might just know off the top of your head!

Maybe we can take advantage of that laziness to incept critical thinking back into social media, or at least have it eat itself.

If you have any ideas for how to turn social media against itself, I'd love to hear them. I worked on this post unusually long for a lot of reasons, but one of them was trying to think of a counter strategy. I came up with nothing though!

[–] [email protected] 9 points 3 months ago (1 children)

Yup. Silicon-washing genocidal intention is almost certainly the most profitable use of AI we've come up with so far.

 

Though wrapped in the aesthetic of science, this paper is a pure expression of the AI hype's ideology, including its reliance on invisible, alienated labor. Its data was manufactured to spec to support the authors' pre-existing beliefs, and its conclusions are nothing but a re-articulation of their arrogance and ideological impoverishment.

 

#HashtagActivism is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context, but its analysis overlooks that Twitter is actually a heavy-handed intermediary. It imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

 

It's so slow that I had time to take my phone out and take this video after I typed all the letters. How is this even possible?
