Those numbers will go down once everyone is driving 4wd EV Suburbans with half-inch steel plate armor (you know, just so they feel their kids are safe) :)
The Stable Diffusion algorithm is strange, and I’m surprised someone thought of it, and surprised it works.
IIRC it works like this: Stable Diffusion starts with an image of pure random noise. The setup treats that noise as if it had been added to a real image matching the text prompt. So, given the text, the model tries to "predict" what the image would look like with a little bit of that noise removed. It repeats this until the image is fully denoised.
So, it’s very easy for the algorithm to make a “mistake” in one iteration by coloring the wrong pixels black, and it’s unable to correct that mistake in later denoising iterations. It also can’t really “plan” ahead of time; it can only do one denoising operation at a time.
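Very roughly, the loop looks something like this (a heavily simplified sketch: the real thing conditions a UNet on text embeddings and follows a proper noise schedule and sampler, which I'm omitting, and the denoiser below is just a dummy stand-in so the script runs):

```python
import torch

def fake_denoiser(noisy_image, text_embedding, t):
    # Stand-in for the real model, which predicts the noise present in the
    # image conditioned on the text. Returns zeros here just so this runs.
    return torch.zeros_like(noisy_image)

def generate(text_embedding, steps=50, shape=(3, 64, 64)):
    image = torch.randn(shape)            # start from pure random noise
    for t in reversed(range(steps)):      # most-noisy step down to least-noisy
        predicted_noise = fake_denoiser(image, text_embedding, t)
        image = image - predicted_noise / steps   # strip away a small slice of noise
        # each step is greedy: a wrong "prediction" here just gets carried forward
    return image

img = generate(text_embedding=torch.zeros(77, 768))  # dummy text embedding
```

The point is that each step only sees the current noisy image; there's no backtracking.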
I use ChatGPT premium almost every day, mostly for coding, rarely for image generation. $20/month. It can write/refactor decent (not great) code, and it’s a net win whenever typing out what I want is faster than just writing the code myself. Dalle-3 through ChatGPT produces pretty good images and seems to understand prompts better than SD (ChatGPT actually writes the prompt for you, so that might have something to do with it). It’s much better than Dalle-2, but they’ve put guardrails on it, so you can’t ask it to do things like create images in the style of a modern artist.
I’ve messed around with Automatic1111 and SD a little bit. ControlNet is very nice for when you need control over the output. I would draw shitty outlines in Inkscape, then use SD+ControlNet to kind of fill everything else in. Free and open-source model and software. Ran it on an RTX 3090, which cost me $800 a year ago.
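I used the Automatic1111 web UI for this, but for reference, the same scribble-to-image workflow looks roughly like this with the diffusers library (the prompt and file paths are just placeholders):

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Scribble ControlNet conditions generation on a rough line drawing
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("my_inkscape_outline.png")  # placeholder path to the outline
image = pipe(
    "detailed illustration of a cozy cabin in a snowy forest",  # placeholder prompt
    image=scribble,
    num_inference_steps=30,
).images[0]
image.save("out.png")
```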
Messed around with DeepFloyd IF on replicate.ai for a while, which was very nice. It seemed to understand the prompts much better than SD. I think it was $2/hr, with each image generation using something like 30s of GPU time. Cold starts can take minutes though, which is annoying.
I use OpenAI’s API in a prototype application; both GPT-4 and Dalle-3. GPT-4 is by far the most well-behaved and “knowledgeable” LLM, but all the guardrails put on it can be annoying. Dalle-3 is pretty good, but I’m not sure if it’s the best. The cost isn’t significant yet while prototyping.
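For reference, the calls are roughly like this with the official openai Python client (v1.x); the prompts here are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat completion with GPT-4
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Refactor this function to be more readable: ..."}],
)
print(chat.choices[0].message.content)

# Image generation with DALL-E 3
img = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor painting of a lighthouse at dusk",
    size="1024x1024",
    n=1,
)
print(img.data[0].url)
```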
I get ads, news, and video recommendations served to me, which probably use some kind of multi-armed bandit AI algorithm. Costs me my privacy. I don’t like it; I rate it 0/10.
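By “multi-armed bandit” I mean something like the explore/exploit loop below (a toy epsilon-greedy version; real ad systems are contextual and far more elaborate, and the click rates here are made up):

```python
import random

def epsilon_greedy(true_click_rates, rounds=10_000, epsilon=0.1):
    n_arms = len(true_click_rates)
    pulls = [0] * n_arms
    rewards = [0.0] * n_arms
    for _ in range(rounds):
        if random.random() < epsilon:       # explore: show a random item
            arm = random.randrange(n_arms)
        else:                               # exploit: best click-rate estimate so far
            arm = max(range(n_arms),
                      key=lambda a: rewards[a] / pulls[a] if pulls[a] else 0.0)
        clicked = random.random() < true_click_rates[arm]   # simulated user
        pulls[arm] += 1
        rewards[arm] += clicked
    return pulls

print(epsilon_greedy([0.02, 0.05, 0.11]))  # the 0.11 "ad" ends up shown the most
```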
I don’t have a console, but I’ve hooked up a Kill-A-Watt to my crazy gaming PC with a TDP > 600w. When working, browsing, listening to music, watching videos, etc, it only uses around 60w, or the same as a single incandescent light bulb. When playing a modern AAA game, it uses around 250w. Not great considering the power consumption of a Switch or Steam Deck, but orders of magnitude less than typical U.S. household heating and cooling. I’d guess AI and crypto BS uses more energy than all PCs combined. Though I guess we all indirectly use AI (or rather, get used by AI).
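Rough monthly math behind that comparison (the hours per day and the AC wattage/duty cycle are my guesses, not measurements):

```python
# All figures in kWh per month; 30-day month assumed.
gaming  = 250 / 1000 * 3 * 30    # 250 W while gaming, ~3 h/day (assumed)
desktop = 60 / 1000 * 8 * 30     # 60 W for work/browsing, ~8 h/day (assumed)
cooling = 3500 / 1000 * 8 * 30   # ~3.5 kW central AC running ~8 h/day (assumed)

print(f"gaming:  {gaming:.0f} kWh/mo")    # ~22
print(f"desktop: {desktop:.0f} kWh/mo")   # ~14
print(f"cooling: {cooling:.0f} kWh/mo")   # ~840
```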
This kinda lines up with propaganda I’ve been seeing the past couple of years (from the likes of Peter Thiel and Alex Epstein). They argue that we should be extracting and using fossil fuels as fast as possible. The (stupid, fucked up, wishful-thinking) idea is that cheap energy drives human development and technological solutions to climate change.