Well, I had to look up the lyrics for that song... I guess it's better than I would expect from an average country musician but still
drhead
There's usually going to be a hegemonic style for AI art, since most people making this stuff just put in some vague keywords for the direction of the style, then stuff the rest of the prompt with quality keywords. Oftentimes hosted inference services will actually do the quality keyword stuffing for you, or train in a house style. Whatever you don't specify is going to be filled in with essentially the model average (which is, of course, not going to be a representative average image; it's going to be the average of the "preferred" set from their preference optimization training). Practically nobody asks for mediocre images (because why would you), and model makers, especially on hosted services, often effectively won't let you.
Think of what you'd expect to get from requesting an image of "a beautiful woman". There are certainly a lot of different ideas people have of which women are beautiful and what traits make a woman beautiful, across different individuals and especially across different cultures and time periods. But if you take the set of every picture that someone thought of as having a beautiful woman in it, and look at the mode of that distribution, it's going to settle on conventionally attractive by the standards of whatever group is labeling the images. And the same thing will happen with an AI model: training on those images labeled as "a beautiful woman" will shift its output towards conventionally attractive women. If you consider it as a set of traits contributing to conventional attractiveness, then it's also fairly likely that every "a beautiful woman" image will end up looking like a flawless supermodel, since the mode will be a woman with all of the most common traits in the "a beautiful woman" dataset. That often won't look natural, because we're not used to seeing flawless supermodels all of the time.
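To make the "mode of the distribution" point concrete, here's a toy sketch (all the numbers are invented for illustration, not measured from any dataset): if each of ten hypothetical binary "traits" independently shows up in 60% of the labeled images, the single most likely combination is all ten traits at once, even though almost no real image in the set actually has all ten:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 10 binary "traits", each independently present in 60%
# of the images labeled "a beautiful woman". (Numbers are made up.)
n_traits, p = 10, 0.6
samples = rng.random((100_000, n_traits)) < p

# The mode of this distribution is the image with ALL ten common traits,
# but such an image is rare among actual samples:
frac_all = samples.all(axis=1).mean()
print(f"P(all 10 traits) ~ {frac_all:.4f}")  # ~ 0.6**10 ~ 0.006

# A typical sample only has about 6 of the 10 traits:
print(f"mean traits per sample ~ {samples.sum(axis=1).mean():.2f}")
```

So a model pushed toward the mode produces "every trait at once" images that are statistically unlike any typical image people actually see.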
That's more or less what is happening when people make these AI images, but with the whole image and its style. The set of images labeled as "high quality" (or whatever quality keyword), or that are in the preference optimization set, have attributes that are more common there than they are in other images. Those attributes end up becoming dominant, and a lot of them will show up at once in a generated image stuffed with quality keywords or on a heavily DPO-tuned model, which can look unnatural when a typical good-looking natural image may have only a few of those traits. The problem is exacerbated by each model having its own default flavor, by people heavily reusing the same sets of quality keywords, and, I'd fully expect, partly by how some text encoders work (CLIP's embeddings are hard to separate distinct concepts from, and this does manifest in how images are generated, but a lot of recent popular models don't use CLIP, so this doesn't necessarily always apply).
Well, it was true for the first big models. The most recent generation of models doesn't have this problem.
Earlier models like Stable Diffusion 1.5 worked on noise (ϵ) prediction. All diffusion models work by training to predict where the noise is in an image, given images with differing levels of noise in them; you can then sample from the model using a solver to get a coherent image in a smaller number of steps. So, using ϵ as the prediction target, you're obviously not going to learn anything by trying to predict what part of pure noise is noise, because the entire image is noise. During sampling, the model will (correctly) predict on the first step that the pure noise input is pure noise, and remove the noise, giving you a black image. To prevent this, people trained models with a non-zero SNR at the highest noise timestep. That way, they are telling the model that there is something actually meaningful in the random noise we're giving it. But since the noise we're giving it is always uniform, it ends up biasing the model towards making images with average brightness. The parts of the initial noise that it retains (since remember, we're no longer asking it to remove all of the noise; we're lying to it and telling it some of it is actually signal) usually also end up causing unusual artifacting. An easy test for these issues is to try to prompt "a solid black background" -- early models will usually output neutral gray squares or grayscale geometric patterns.
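You can see the non-zero terminal SNR numerically. A minimal sketch, using DDPM's linear beta schedule as a stand-in (not the exact schedule any particular Stable Diffusion release shipped with): under the usual variance-preserving process, x_t = sqrt(abar_t)·x0 + sqrt(1−abar_t)·ϵ, and the SNR at the last timestep is small but not zero, so training always leaks a faint trace of the real image (including its mean brightness) into what is supposed to be "pure noise":

```python
import numpy as np

# DDPM-style linear beta schedule (illustrative values, not exact SD configs)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

# SNR(t) = abar_t / (1 - abar_t); at t = T it should ideally be zero,
# but with this schedule it isn't:
snr = alphas_bar / (1.0 - alphas_bar)
print(f"SNR at final timestep: {snr[-1]:.2e}")  # small but NOT zero

# Nonzero terminal SNR means the model is always shown x_T that still
# contains sqrt(abar_T) worth of real signal during training. At
# inference we hand it zero-mean pure noise instead, so its habit of
# preserving part of "the signal" pulls outputs toward average brightness.
residual_signal = np.sqrt(alphas_bar[-1])
print(f"signal weight sqrt(abar_T) = {residual_signal:.5f}")
```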
One of the early hacks for the average brightness issue was training with a random channelwise offset added to the noise, and models like Stable Diffusion XL used this method. This allowed models to make very dark and light images, but also often made images end up too dark or too light; it's possible that you saw some of these about a year into the AI craze, when this was the latest fad. The proper solution came with ByteDance's paper ( https://arxiv.org/pdf/2305.08891 ) showing a method that allows training with an SNR of zero at the highest noise timestep. The main change is that instead of predicting noise (ϵ), the model needs to predict velocity (v), which is a weighted combination of predicting the noise and predicting the original sample x~0~. With that, at the highest noise timestep the model will predict the dataset mean (which will manifest as an incredibly blurry mess in the vague shape of whatever you're trying to make an image of). ~~People didn't actually implement this as-is for any new foundation model, most of what I saw of it was independent researchers running finetune projects, apparently because it was taking too much trial and error for larger companies to make it work well.~~ actually this isn't entirely true, people working on video models ended up adopting it more quickly, because the artifacts from residual noise get very bad when you add a time dimension. A couple of groups made SDXL clones using this method.
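A small sketch of why v-prediction makes the zero-SNR endpoint well-defined (toy vectors standing in for images; the weighting follows the standard v = sqrt(abar)·ϵ − sqrt(1−abar)·x0 form): when abar hits exactly zero at the terminal timestep, the target collapses to −x0, so the model is being asked for the image itself rather than for noise hidden in noise, and the best constant answer is the dataset mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=128)   # stand-in for a clean image
eps = rng.normal(size=128)  # the noise mixed into it

def v_target(x0, eps, alpha_bar):
    """v-prediction target: v = sqrt(abar)*eps - sqrt(1-abar)*x0."""
    a, s = np.sqrt(alpha_bar), np.sqrt(1.0 - alpha_bar)
    return a * eps - s * x0

# Mid-schedule, the target is a mix of noise and image:
v_mid = v_target(x0, eps, alpha_bar=0.5)

# At the terminal timestep with SNR forced to zero (alpha_bar = 0),
# the target is exactly -x0: predicting from pure noise is now
# well-posed, and an uncertain model predicts the dataset mean
# (the "blurry mess" described above).
v_T = v_target(x0, eps, alpha_bar=0.0)
assert np.allclose(v_T, -x0)
```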
The latest fad is using rectified flow, which is a very different process from diffusion. The diffusion process is described by a stochastic differential equation (SDE), which adds some randomness and essentially follows a meandering path from input noise to the resulting image. The rectified flow process is an ordinary differential equation (ODE), which (ideally) follows a straight-line path from the input noise to the image, and can actually be run either forwards or backwards (since it's an ODE). Flux (the model used with Twitter's AI stuff) and Stable Diffusion 3/3.5 both use rectified flow. They don't have the average brightness issue at all, because it makes zero mathematical or practical sense to have the end point be anything but pure noise. I've also heard people say that rectified flow doesn't typically show the same uniform level of detail that a few people in this thread have mentioned; I haven't really looked into that myself at all, but I would be cautious about using uniform detail as a litmus test for that reason.
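A toy sketch of the straight-line idea (vectors standing in for images; this is the standard rectified-flow interpolant, not any specific model's training code): the path is x_t = (1−t)·x0 + t·x1 with constant velocity x1 − x0, the t=1 endpoint is exactly pure noise with no SNR fudging possible, and the ODE runs cleanly in either direction:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=128)  # clean sample
x1 = rng.normal(size=128)  # pure noise

# Rectified flow trains on straight-line interpolants:
#   x_t = (1 - t) * x0 + t * x1
# with a constant velocity target v = x1 - x0 along the whole path.
def interp(t):
    return (1.0 - t) * x0 + t * x1

v = x1 - x0

# The endpoint at t = 1 is exactly pure noise:
assert np.allclose(interp(1.0), x1)

# Because it's an ODE with a straight path, one Euler step backwards
# from the noise endpoint with the true velocity recovers the clean
# sample exactly on this toy path:
x_rec = interp(1.0) - 1.0 * v
assert np.allclose(x_rec, x0)
```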
I'm fairly certain the thought process is exactly:
"Project 2025 needs to be stopped. It would be bad for..."
*checks opinion polls for what issue voters prioritize right now*
"...the economy!"
I see it as a comical artifact of how hyper-optimized towards polling and focus groups party messaging has become. An inevitable consequence of liberal democracy in the information age.
because his party-line successor, whoever that may be,
So they're not going to mention that the actual person next in the line of succession for him is a pro-Palestine democratic socialist?
If we're talking about FTL, might as well mention Multiverse: https://subsetgames.com/forum/viewtopic.php?t=35332
I'm pretty sure this outright has more new content than the base game did.
I remember looking over those at the time. The images seemed a bit beyond then-current image generation technology, and there never really seemed to be a compelling explanation of why "some RFA source went through great effort to fabricate images for this story" is a more likely explanation than "some RFA source is misrepresenting pictures of what is actually mostly just boring, normal prison stuff".
@PorkrollPosadist@hexbear.net