gerikson

joined 2 years ago
[–] [email protected] 9 points 23 hours ago (9 children)

Oh FFS, that couple have managed to break into Sweden's public broadcasting site

https://www.svt.se/nyheter/utrikes/har-ar-familjen-som-vill-foda-elitbarn-for-att-radda-manskligheten

[–] [email protected] 19 points 4 days ago (5 children)

Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

My Empathy Is Rarely Kind

I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

"why do people keep saying we sound like fascists? I don't get it!"

[–] [email protected] 6 points 4 days ago

The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.

[–] [email protected] 12 points 4 days ago

Oh, but LW has the comeback for you in the very first paragraph

Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.

Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!

[–] [email protected] 16 points 5 days ago (10 children)

LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a GenZ hit), and "Slaughterbots", which I've never heard of.

Listen to the plot summary:

  • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
  • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
  • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
  • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

OK so what should be shown in the film?

compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.

[–] [email protected] 10 points 5 days ago

At this point in time, having a substack is in itself a red flag.

[–] [email protected] 9 points 6 days ago

The targets are informed, via a grammatically invalid sentence.

Sam Kriss (author of the ‘Laurentius Clung’ piece) has posted a critique. I don’t think it’s good, but I do think it’s representative of a view that I ever encounter in the wild but haven’t really seen written up.

FWIW the search term 'Laurentius Clung' gets no hits on LW, so I'm left to assume everyone there is also Extremely Online on Xitter and instantly knows the reference.

https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=MzkAjd8EWqosiePMf

[–] [email protected] 10 points 6 days ago (6 children)

This was a good read. I also read the post/story/essay that got the rats upset and it's good too.

https://samkriss.substack.com/p/the-law-that-can-be-named-is-not

Yud's sputtering reaction can be read among the comments here

https://xcancel.com/captgouda24/status/1946398995556819149

[–] [email protected] 4 points 6 days ago

Remember FizzBuzz? That was originally a simple filter exercise some person recruiting programmers came up with to weed out everyone with multi-year CS degrees but zero actual programming experience.
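For anyone who missed that era: the exercise itself is tiny, which is the whole point. A minimal Python sketch of the classic version (function name mine):

```python
def fizzbuzz(n: int) -> str:
    """Classic FizzBuzz: multiples of 3 -> 'Fizz', of 5 -> 'Buzz', of both -> 'FizzBuzz'."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The interview version: print the results for 1..100.
for i in range(1, 101):
    print(fizzbuzz(i))
```

The filter works precisely because anyone who has written any code at all can do this in a couple of minutes, and a surprising number of credentialed applicants couldn't.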

[–] [email protected] 9 points 1 week ago

The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better.

[–] [email protected] 17 points 1 week ago* (last edited 1 week ago) (9 children)

The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around

https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards

I'll consider that when the Yanks abandon middle-endian date formatting.

Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.
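For reference, "middle-endian" is exactly what US date formatting does: the middle-significance field comes first. A quick sketch contrasting the three orderings (the example date is arbitrary):

```python
from datetime import date

d = date(2025, 7, 21)  # arbitrary example date

big_endian    = d.strftime("%Y-%m-%d")  # ISO 8601: most significant field first
middle_endian = d.strftime("%m/%d/%Y")  # US style: month, then day, then year
little_endian = d.strftime("%d/%m/%Y")  # least significant field first, common elsewhere

print(big_endian, middle_endian, little_endian)
```

Written numbers are big-endian in the same sense ISO dates are; the LW proposal amounts to asking for the third row.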

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago)

So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of things like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having been unable to replicate its claims of millions of resulting deaths, basically concluding it's not so bad?

https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because of heredity; I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

 

current difficulties

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
 

The previous thread has fallen off the front page; feel free to use this one for discussions of the current problems.

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain: the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.
