Brin’s “We definitely messed up”, at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.
What? Gemini has an image gen tool? That fucker told me it didn’t when I asked! Dumbass AI don’t even know what it can do… SMH
They switched off image generation after these issues, so it (correctly) said that it couldn’t generate images at the time.
It’s not just historical. I’m a white male and I prompted Gemini to create images of a middle-aged white man building a Lego set, etc. Only one image was of a white male; two of the others were an Indian man and a Black man. Why, when I asked for a white male? It was an image I wanted to share with my family. Why would Gemini go off the prompt? I did not ask for diversity, nor was it expected for that purpose, and I got no other options for images which I could consider, so it was a fail.
Could you elaborate on the use case you’re describing? You were trying to make an image of a middle aged white man building Lego for your family?
Yes, but does it really matter what the rest of the prompt detail was? The point was, it was supposed to be an image of me doing an activity. I’d clearly prompted for a white man, but it gave me two other images that were completely not that. Why was Gemini deviating from specific prompts like that? It seems like the identical issue to the case with the Nazis, just introducing variations completely of its own.
Yeah yeah sure sure but why were you generating an image of a middle aged white man building Lego for your family? I’m baffled.
That is really just not relevant at all to the discussion here, but to satisfy your curiosity: I’m busy building a Lego model that a family member sent me, so the generated AI photo was supposed to depict someone who looked vaguely like me building such a Lego model. I used Bing in the past, and it usually delivered 4 usable choices. The fact that Google gave me something that was distinctly NOT what I asked for means it is messing with the specifics that are asked for.
Why use an AI? Just like… take a selfie
So, what you’re saying is that white people shouldn’t use AI?
It would appear that is exactly what I’m saying, provided the reader lacks any reading comprehension skills.
I’m not the lego person, but I am not taking that selfie because: 1) I don’t want to clean the house to make it look all nice before judgey relatives critique the pic, 2) my phone is old and all its pics are kinda fish-eyed, 3) I don’t actually want to spend the time doing the task right now when AI can get me an image in seconds.
A while back, one of the image generation AIs (midjourney?) caught flak because the majority of the images it generated only contained white people. Like… over 90% of all images. And worse, if you asked for a “pretty girl” it generated uniformly white girls, but if you asked for an “ugly girl” you got a more racially diverse sample. Wince.
But then their reaction was to just literally tack “…but diverse!” onto the end of prompts, or something like that. They literally just inserted stuff into the text of the prompt. That solved the immediate problem, and the resulting images were definitely more diverse… but it led straight to the sort of problems that Google is running into now.
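To make the failure mode concrete: the “just insert stuff into the prompt” fix amounts to something like the sketch below. This is purely illustrative – the function and the suffix wording are made up, not anyone’s actual pipeline code.

```python
# Hypothetical sketch of the naive "prompt patching" approach described
# above. The suffix and function name are invented for illustration.

DIVERSITY_SUFFIX = ", diverse group of people of varied ethnicities"

def patch_prompt(user_prompt: str) -> str:
    """Blindly append diversity wording to every image prompt."""
    return user_prompt + DIVERSITY_SUFFIX

# Fine for generic prompts:
print(patch_prompt("a pretty girl"))
# But it mangles prompts where the demographics ARE the request:
print(patch_prompt("a 1940s German soldier"))
```

Because the suffix is appended unconditionally, a prompt that already pins down who should appear gets overridden, which is exactly the multiracial-Nazis outcome people were posting about.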
was it really offensive or was it just “Target selling Pride clothes during Pride month” offensive?
I don’t know that “offensive” is the right word. More just “shitty” and “lazy”.
Like, they took the time out to teach it “diversity” but couldn’t be bothered to train it past “diversity = people who are not white”, or to acknowledge when the user is asking specifically for a white person, or for a different region or time period.
I, for one, welcome Japanese George Washington, Indian Hitler and Inuit Gandhi to our historical database.
Jojo Rabbit featured Jewish Maori Hitler and was very well received.
I think the lesson here is that political correctness isn’t very machine learnable. Human history and modern social concerns are very complex in a precise way and really should be addressed with conventional rules and algorithms. Or manually, but that’s obviously not scalable at all.
Why is that a problem? These things happened and why shouldn’t the “ai” get images of it?
The issue is not that it can generate the images; it’s that the filtering / pre-prompt for Gemini was coercing forced diversity into the generations. So asking for a 1940s German soldier would give you multiracial Nazis, even though that obviously doesn’t make sense and is explicitly not what was asked for.
Lmao
It is a pretty silly scenario lol, I personally don’t really care but I can understand why they implemented the safeguard but also why it’s overly aggressive and needs to be tuned more.
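One way the “tune it more” point could look in practice: only inject diversity wording when the prompt hasn’t already specified demographics or a concrete historical setting. This is an entirely made-up illustration – the term list, function names and suffix are assumptions, not Gemini’s actual safeguard.

```python
# Hypothetical tuned guardrail: skip the diversity injection when the
# user has already pinned down demographics or a historical context.
# All names and the keyword list are invented for illustration.

EXPLICIT_TERMS = {"white", "black", "asian", "indian",
                  "german", "nazi", "1940s", "pope", "founding fathers"}

def should_diversify(prompt: str) -> bool:
    """True only if the prompt leaves demographics unspecified."""
    lowered = prompt.lower()
    return not any(term in lowered for term in EXPLICIT_TERMS)

def build_prompt(user_prompt: str) -> str:
    if should_diversify(user_prompt):
        return user_prompt + ", people of varied ethnicities"
    return user_prompt  # respect the user's explicit specifics

print(build_prompt("a doctor talking to a patient"))  # gets the suffix
print(build_prompt("a 1940s German soldier"))         # left untouched
```

A keyword list this crude would never ship, of course – the point is just that the safeguard needs *some* notion of when the user has already answered the question it’s trying to force open.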
Why even put a safeguard in place? Nobody needs it anyway.
Corporations making AI tools available to the general public are under a ton of scrutiny right now and are kinda in a “damned if you do, damned if you don’t” situation. At the other extreme, if they completely uncensored it, the big controversial story would be that pedophiles are generating images of child porn or some other equally heinous shit.
These are the inevitable growing pains of a new industry with a ton of hype and PR behind it.
TBH it’s just a byproduct of the “everything is a service, nothing is a product” age of the industry. Google is responsible for what random people do with their products.
If you create an image generator that always returns clean-cut white men whenever you ask it to produce a “doctor” or a “businessman”, but only ever spits out black men when you ask for a picture of someone cleaning, your PR department is going to have a bad time.
And even worse, it actually reinforces those stereotypes in users.