This feels like clickbait to me, as the fundamental problem clearly isn’t AI. At least to me it isn’t. The title would have worked just as well without AI in it. The fact that the images are AI generated isn’t even that relevant. What is worrying is that the peer review process, at least for this journal, is clearly faulty, as no actual review of the material took place.
If we do want to talk about AI: I am impressed by how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.
Modern AI image generators are pretty good at creating text (and hands). You’re right that that’s very recent, though (like the last 6 months); they used to be bad at it.
American classic car,1935 Ford Pickup, poster ,retro ,illustrator, vibrant colors,Idaho color view landscape, with words "Idaho"

Oh huh, you are right. I threw that exact prompt into DALL-E and indeed got legible letters.
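For anyone who wants to reproduce that test themselves, here is a minimal sketch using the OpenAI Python client; the model name and parameters are assumptions, and any current image generator would do for the comparison:

```python
# Minimal sketch: send the prompt above to an image model and check whether the
# word "Idaho" comes out legible. Assumes the `openai` package is installed and
# the OPENAI_API_KEY environment variable is set; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    "American classic car, 1935 Ford Pickup, poster, retro, illustrator, "
    'vibrant colors, Idaho color view landscape, with words "Idaho"'
)

result = client.images.generate(
    model="dall-e-3",  # text rendering quality varies a lot between models and versions
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# Print the URL of the generated image so it can be inspected manually.
print(result.data[0].url)
```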
Simplifying this down to an issue of just the review process flattens out the problem that generative AI does not think in the same way a human creator does. There are additional considerations to make when using generative AI, namely that it does not have a body of knowledge to pull from in order to keep certain ideas in check, such as how large an object should appear, and it doesn’t have the ability to fact-check an object’s relevance against the other objects in the image.
We need to think about these issues in depth because we are introducing a non-human, specific kind of bias into the literature. If we don’t think about it systematically, we can’t create a process that intends to limit or reduce the amount of bias introduced by allowing this kind of content. Yes, the review process can and should already catch a lot of this, but I’m not convinced that waving our hands and saying that review is enough is adequate to fully address the biases we may be introducing.
I think there’s a much higher chance of introducing bias or false information in highly specialized fields, where the knowledge necessary to determine whether something that was generated is actually correct is scarce, since generative AI does not draw upon facts or fact-check. Reviewers are not perfect, and may miss things. If we then draw upon this knowledge in the future to direct additional studies, we might create a house of cards which becomes very difficult to undo. We already have countless examples of this in science where a study with falsified data or poor methodology breeds a whole field of research which struggles to validate the original study, which eventually needs to be retracted. We could potentially have situations in which the study is validated but an image influences how we even think a process should work (or whether we can acquire funding for it). Strong protections, such as requiring that AI-generated images be clearly labelled as such, can help to mitigate these kinds of issues.
Rather the opposite: simplifying this down to an issue of just an AI introducing some BS flattens out the problem that grifter journals don’t follow a proper peer review process.
introducing bias or false information in highly specialized fields
Reviewers are not perfect, and may miss things
It’s called a “peer review” process for a reason. If there are not enough peers in a highly specialized field to conduct a proper review, then the article should stay on arXiv or some other preprint server until enough peers can be found.
Journals that charge for “reviewing” BS, no matter if it is AI-generated or made by a donkey with a brush tied to its tail, should be named and shamed.
We already have countless examples of this in science where a study with falsified data or poor methodology breeds a whole field of research which struggles to validate the original study, which eventually needs to be retracted.
…and no AI was needed. Goes to show how AI is the red herring here.
I totally see why you are worried about all the aspects AI introduces, especially regarding bias and the authenticity of generated content. My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin. It’s not about AI per se. It is about ensuring the integrity of scholarly work. Because realistically speaking, how much of the paper itself is actually good or valid? Even more interesting, and this would bring AI back into the picture: is the entire paper even written by a human, or is the entire thing fake? Or maybe that is also not interesting at all, as there are already tons of papers published with other fake data in them.
People who actually don’t give a shit about the academic process and just care about getting their names published somewhere have likely already employed other methods as well. I wouldn’t be surprised if there is a paper out there with equally bogus images created by an actual human for pennies on Fiverr.
The crux of the matter is the robustness of the review process, which should safeguard against any form of dubious content, AI-generated or otherwise. Which is what I also said in my initial reply: I am most certainly not waving hands and saying that review is enough. I am saying that it is much more likely that the review process has already failed miserably, and most likely has been failing for a while. Which, again to me, seems like the bigger issue.
My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin.
The crux of the matter is the robustness of the review process
The pace at which AI can generate bullshit not only currently vastly outstrips the ability of individual humans to vet it, but is actually accelerating. We cannot manually solve this by saying “people just need to catch it.” Look at YouTube with CSAM or other federal violations - they literally can’t keep up with the content coming in despite having armies of people (with insane turnover, I might add) trying to do it. So the bar has been changed from “you can’t have any of this stuff” to “you must put in reasonable effort to minimize it,” because we’ve simply accepted it can’t be done with humans - and that’s with the assistance of their current algorithms constantly scouring their content for red flags. Bear in mind this is an international, massive company with resources these journals can’t even dream of, and almost all of this content has been generated and uploaded by individual people.
These people, I’m sure, are perfectly capable of catching AI-generated nonsense most of the time. But as the content gets more sophisticated and voluminous, the problem is only going to get worse. Stuff is going to get through. So we are at a crossroads where we either throw up our hands and say “well, there’s not much we can do, good luck separating the wheat from the chaff,” or we get creative. And this isn’t just in academic journals either. This is crossing into more and more industries, in particular anything that requires writing. Someone(s) is throwing money and resources at getting AI to do it faster and cheaper than people can.
I feel like two different problems are conflated into one though.
1. The academic review process is broken.
2. AI-generated bullshit is going to cause all sorts of issues.
Point 2 can contribute to point 1, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as my understanding of how peer-review processes are supposed to go, it is something along the lines of:
1. A researcher submits their manuscript to a journal.
2. An editor of that journal validates that the paper fits within the scope and aims of the journal. It might get rejected here, or it gets sent out for review.
3. The paper gets sent out for review to several experts in the field, the actual peer reviewers. These are supposed to be knowledgeable about the specific topic the paper is about. They then read the paper closely and evaluate things like methodology, results, (lack of) data, and conclusions.
4. Feedback goes to the editor, who then makes a call about the paper: it either gets accepted, revisions are required, or it gets rejected.
If at point 3 people don’t do the things I highlighted in bold, then to me it seems a bit silly to make this about AI.
If at point 4 the editor ignores most of the feedback from the peer reviewers, then it again has very little to do with AI and everything to do with a base process being broken.
To summarize: yes, AI is going to fuck up a lot of information; it already has. But by just shouting “AI is at it again with its antics!” at every turn instead of looking further at other core issues, we will only make things worse.
Edit:
To be clear, I am not even saying that peer reviewers or editors should “just do their job already”. But fake papers have increasingly been an issue for well over a decade, as far as I am aware. The way the current peer review process works simply doesn’t seem to scale to where we are today. And yes, AI is not going to help with that, but it is still building upon something that was already broken before AI was used to abuse it.
I feel like this is the third time people are selectively reading into what I have said.
I specifically acknowledge that AI is already causing all sorts of issues. I am also saying that there is another issue at play, one that might be exacerbated by the use of AI but at its root isn’t caused by AI.
In fact, in this very thread people have pointed out that, in this case, the journal in question is simply the issue: https://beehaw.org/comment/2416937
In fact, the only reason people likely noticed is, ironically, the fact that AI was being used.
And again, I fully agree: AI is causing massive issues already and disrupting a lot of things in destructive ways. But that doesn’t mean all bullshit out there is caused by AI, even if AI is tangibly involved.
If that still, in your view, somehow makes me sound like a defensive AI evangelist, then I don’t know what to tell you…
If you feel several people are selectively reading what you’re writing then you should consider what about your writing is perhaps contributing to the misinterpretation/selective reading. It’s not like we are working in concert.
but that doesn’t mean all bullshit out there is caused by AI
Again, you are mischaracterizing what I and others have said. No one asserted that. Quote where I said anything remotely like that.
The only irony I’m seeing is you seemingly engaging in the behavior you’re decrying.

The fact that you specifically respond to this one highly specific thing while I clearly have written more is exactly what I mean.

shrugs
I am most certainly not waving hands and saying that review is enough
Apologies, that’s what it sounded like to me. You said it’s clickbait. You said the title would work without AI in the title. You also said that AI generation isn’t relevant. That felt like diminishing the conversation - focusing in on what you’re most concerned about, and dismissing all other discussions. I don’t think that helps discussion happen. It discourages it. It says that we shouldn’t talk about the problems present here which exist outside the realm of just the review process.
For example, both of the figures do have a description, but neither of them has any kind of attribution. The review process, when it is followed, might ensure a figure is factual and still let through material like what you’ve laid out above which doesn’t involve AI - like hiring someone off of Fiverr. One way to solve this would be with image attribution. As I mentioned above, requiring that an image explain where it came from - attribution of the artist who created the figure, attribution of the software used, perhaps even the full prompt for generated images - would help ensure scientific rigor (and accurate attribution), both helping the review process catch problematic material and cueing readers in to key information about the figures present in research.
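To make that suggestion concrete, here is a hypothetical sketch of what a per-figure attribution record could look like; the field names and structure are purely illustrative, not drawn from any existing journal policy:

```python
# Hypothetical per-figure attribution record (illustrative only, not a real standard).
from dataclasses import dataclass
from typing import Optional

@dataclass
class FigureAttribution:
    figure_id: str                   # e.g. "Figure 1"
    source_type: str                 # "original artwork", "photograph", "ai-generated", ...
    creator: Optional[str] = None    # the human artist or author, if any
    software: Optional[str] = None   # generator name and version, if software was used
    prompt: Optional[str] = None     # full generation prompt, if the figure was AI-generated
    license: Optional[str] = None    # reuse terms

# Example of what an AI-generated figure would have to declare under such a policy.
example = FigureAttribution(
    figure_id="Figure 2",
    source_type="ai-generated",
    software="(generator name and version)",
    prompt="(full prompt used to generate the figure)",
    license="CC BY 4.0",
)
```

Even a minimal record like this would give reviewers and readers something concrete to check a figure against.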