TinyTimmyTokyo

joined 2 years ago
[–] TinyTimmyTokyo@awful.systems 7 points 1 week ago (1 children)

What I don't understand is how these people thought they wouldn't be caught, with potentially career-ending consequences. What series of steps leads someone to do this, and how stupid do you need to be?

[–] TinyTimmyTokyo@awful.systems 19 points 1 week ago (3 children)

What makes this worse than the financial crisis of 2008 is that you can't live in a GPU once the crash happens.

[–] TinyTimmyTokyo@awful.systems 17 points 2 weeks ago (1 children)

Apparently the NYT hit-piece's author, Benjamin Ryan, is a subscriber to Jordan Lasker's (Cremieux's) substack.

[–] TinyTimmyTokyo@awful.systems 5 points 2 weeks ago

When this was first posted I too was curious about the book series. It appears that nearly every book in the series is authored by academics affiliated with Indian universities. Modi's government has promoted and invested heavily in AI.

[–] TinyTimmyTokyo@awful.systems 8 points 2 weeks ago

I call bullshit on Daniel K. That backtracking is so obviously ex-post-facto cover-your-ass woopsie-doopsie. Expect more of it as we get closer to whatever new "median" he has suddenly claimed. It's going to be fun to watch.

[–] TinyTimmyTokyo@awful.systems 24 points 2 weeks ago

I have no doubt that a chatbot would be just as effective at doing Liuson's job, if not more so. Not because chatbots are good, but because Liuson is so bad at her job.

[–] TinyTimmyTokyo@awful.systems 9 points 2 weeks ago

That thread is wild. Nate proposes techniques to get his kooky beliefs taken more seriously. Others point out that those very same techniques counterproductively pushed people into the e/acc camp. Nate deletes those other people's comments. How rationalist of him!

[–] TinyTimmyTokyo@awful.systems 18 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.

Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of "self-experimentation" that exposes us to psychological risks we aren't even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

[–] TinyTimmyTokyo@awful.systems 9 points 2 months ago (1 children)

ChatGPT tells prompter that he's brilliant for his literal "shit on a stick" business plan.

[–] TinyTimmyTokyo@awful.systems 5 points 2 months ago

Not surprised to find Sabine in the comments. She's been totally infected by the YouTube algorithm and captured by her new culture-war-mongering audience. Kinda sad, really.

 

Not 7.5% or 8%. 8.5%. Numbers are important.

 

Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox Media and of Vox's EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

 

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.

 

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

 

[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.

 

Excerpt:

Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

He's also a big eugenics supporter:

“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight.com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

(Reminds me a lot of the things Scott Siskind has written in the past.)

Some people who have been friendly with Hanania:

  • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
  • Hamish McKenzie, CEO of Substack
  • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
  • Tyler Cowen, libertarian econ blogger and George Mason University prof
  • J.D. Vance, US Senator from Ohio
  • Steve Sailer, race (pseudo)science promoter and all-around bigot
  • Amy Wax, racist law professor at UPenn
  • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
 

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
