[–] [email protected] 49 points 1 day ago* (last edited 1 day ago) (24 children)

Let’s do the math.

Let’s take an SDXL porn model with no 4-step speed-up tricks, no hand-written quantization/optimization schemes like SVDQuant, nothing like that: just an early, raw, inefficient implementation:

https://www.baseten.co/blog/40-faster-stable-diffusion-xl-inference-with-nvidia-tensorrt/#sdxl-with-tensorrt-in-production

So: 2.5 seconds on an A100 for a single image. Let’s batch it (because that’s what’s done in production) and run it on the now-popular H100 instead, and very conservatively assume 1.5 seconds per image (though it’s likely much faster).

That’s on a 700W SXM Nvidia H100. Usually in a server box with 7 others, so let’s say 1000W including its share of the CPU and everything else. Let’s say 1400W for networking, idle time, whatever else is going on.

That’s about 2.1 kJ, or roughly 0.6 watt-hours, per image.
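
If you want to check the arithmetic, here’s a quick Python sketch. The 1.5 s and 1400 W figures are the conservative assumptions from above, not measurements:

```python
# Back-of-the-envelope energy cost per generated image.
seconds_per_image = 1.5     # batched SDXL on an H100 (conservative assumption)
power_draw_watts = 1400.0   # GPU + CPU share + networking/idle overhead

energy_joules = power_draw_watts * seconds_per_image  # 2100 J
energy_wh = energy_joules / 3600.0                    # ~0.58 Wh

print(f"{energy_joules / 1000:.1f} kJ per image")   # 2.1 kJ
print(f"{energy_wh:.2f} Wh per image")              # 0.58 Wh
```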

…Or about the energy of browsing Lemmy for 30-60 seconds. And again, this is a high estimate, and still only a fraction of a second of a home AC system’s power draw.
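
Sanity-checking those two comparisons (the ~50 W all-in figure for browsing, covering the device plus its network and server share, is my own rough assumption, as is the ~3.5 kW draw of a typical central home AC):

```python
energy_joules = 2100.0   # per image, from above

browsing_watts = 50.0    # device + network + server share (rough assumption)
ac_watts = 3500.0        # typical central home AC while running (assumption)

print(f"Browsing equivalent: {energy_joules / browsing_watts:.0f} s")  # ~42 s
print(f"AC equivalent:       {energy_joules / ac_watts:.1f} s")        # ~0.6 s
```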


…So yeah, booby pictures take very little energy, and the per-image cost is dropping dramatically as implementations improve.

Training lightweight, open models like DeepSeek, Qwen, or SDXL takes relatively little energy, as does running them. The GPU farms they use are tiny, dwarfed by something like an aluminum smelter.
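
For a sense of scale, here’s a rough comparison; the 1000-GPU cluster is a hypothetical, and the smelter figure is a ballpark (large aluminum smelters draw on the order of hundreds of MW, continuously):

```python
# Rough scale comparison: a sizeable training cluster vs. an aluminum smelter.
cluster_gpus = 1000                     # hypothetical cluster size
watts_per_gpu_all_in = 1400.0           # same all-in figure as above

cluster_mw = cluster_gpus * watts_per_gpu_all_in / 1e6  # 1.4 MW
smelter_mw = 500.0                      # ballpark for a large smelter

print(f"Cluster: {cluster_mw:.1f} MW vs. smelter: {smelter_mw:.0f} MW")
print(f"The smelter draws ~{smelter_mw / cluster_mw:.0f}x more power")  # ~357x
```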

What slurps energy is AI Bros like Musk or Altman trying to brute-force their way to a decent model by scaling out instead of improving efficiency, and mostly they’re blowing that out of proportion to hype the market and convince investors that AI will be expensive and grow infinitely (so people will keep giving them money).

That isn’t going to work for very long. Small on-device models are going to be too cheap to compete with.

https://escholarship.org/uc/item/2kc978dg

So yes, this is shit, and they should be turning off AI farms too, but your porn images are a drop in the bucket compared to AC costs.


TL;DR: There are a bazillion things to flame AI Bros about, but inference for small models (like porn models) is objectively not one of them.

The problem is billionaires.
