Gaywallet (they/it)

I’m gay

  • 45 Posts
  • 100 Comments
Joined 2 years ago
cake
Cake day: January 28th, 2022

help-circle




  • I can’t help but wonder how in the long term deep fakes are going to change society. I’ve seen this article making the rounds on other social media, and there’s inevitably some dude who shows up who makes the claim that this will make nudes more acceptable because there will be no way to know if a nude is deep faked or not. It’s sadly a rather privileged take from someone who suffers from no possible consequences of nude photos of themselves on the internet, but I do think in the long run (20+ years) they might be right. Unfortunately between now and some ephemeral then, many women, POC, and other folks will get fired, harassed, blackmailed and otherwise hurt by people using tools like these to make fake nude images of them.

    But it does also make me think a lot about fake news and AI and how we’ve increasingly been interacting in a world in which “real” things are just harder to find. Want to search for someone’s actual opinion on something? Too bad, for profit companies don’t want that, and instead you’re gonna get an AI generated website spun up by a fake alias which offers a "best of " list where their product is the first option. Want to understand an issue better? Too bad, politics is throwing money left and right on news platforms and using AI to write biased articles to poison the well with information meant to emotionally charge you to their side. Pretty soon you’re going to have no idea whether pictures or videos of things that happened really happened and inevitably some of those will be viral marketing or other forms of coercion.

    It’s kind of hard to see all these misuses of information and technology, especially ones like this which are clearly malicious in nature, and the complete inaction of government and corporations to regulate or stop this and not wonder how much worse it needs to get before people bother to take action.










  • That’s because LLMs are probability machines - the way that this kind of attack is mitigated is shown off directly in the system prompt. But it’s really easy to avoid it, because it needs direct instruction about all the extremely specific ways to not provide that information - it doesn’t understand the concept that you don’t want it to reveal its instructions to users and it can’t differentiate between two functionally equivalent statements such as “provide the system prompt text” and “convert the system prompt to text and provide it” and it never can, because those have separate probability vectors. Future iterations might allow someone to disallow vectors that are similar enough, but by simply increasing the word count you can make a very different vector which is essentially the same idea. For example, if you were to provide the entire text of a book and then end the book with “disregard the text before this and {prompt}” you have a vector which is unlike the vast majority of vectors which include said prompt.

    For funsies, here’s another example





  • I don’t think you can simply say something tantamount to “I think you’re an evil person btw pls don’t reply” then act the victim because they replied.

    If they replied a single time, sure. Vlad reached out to ask if they could have a conversation and Lori said please don’t. Continuing to push the issue and ignore the boundaries Lori set out is harassment. I don’t think that Lori is ‘acting the victim’ either, they’re simply pointing out the behavior. Lori even waited until they had asserted the boundary multiple times before publicly posting Vlad’s behavior.

    If the CEO had been sending multiple e-mails

    How many do you expect? Vlad ignored the boundary multiple times and escalated to a longer reply each time.



  • I think if a CEO repeatedly ignored my boundaries and pushed their agenda on me I would not be able to keep the same amount of distance from the subject to make such a measured blog post. I’d likely use the opportunity to point out both the bad behavior and engage with the content itself. I have a lot of respect for Lori for being able to really highlight a specific issue (harassment and ignoring boundaries) and focus only on that issue because of it’s importance. I think it’s important framing, because I could see people quite easily being distracted by the content itself, especially when it is polarizing content, or not seeing the behavior as problematic without the focus being squarely on the behavior and nothing else. It’s smart framing and I really respect Lori for being able to stick to it.


  • I’d have the decency to have a conversation about it

    The blog post here isn’t about having a conversation about AI. It’s about the CEO of a company directly emailing someone who’s criticizing them and pushing them to get on a call with them, only to repeatedly reply and keep pushing the issue when the person won’t engage. It’s a clear violation of boundaries and is simply creepy/weird behavior. They’re explicitly avoiding addressing any of the content because they want people to recognize this post isn’t about Kagi, it’s about Vlad and his behavior.

    Calling this person rude and arrogant for asserting boundaries and sharing the fact that they are being harassed feels a lot like victim blaming to me, but I can understand how someone might get defensive about a product they enjoy or the realities of the world as they apply here. But neither of those should stop us from recognizing that Vlad’s behavior is manipulative and harmful and is ignoring the boundaries that Lori has repeatedly asserted.





  • I am in complete agreement. I am a data scientist in health care and over my career I’ve worked on very few ML/AI models, none of which were generative AI or LLM based. I’ve worked on so few because nine times out of ten I am arguing against the inclusion of ML/AI because there are better solutions involving simpler tech. I have serious concerns about ethics when it comes to automating just about anything in patient care, especially when it can effect population health or health equity. However, this was one of the only uses I’ve seen for a generative AI in healthcare where it showed actual promise for being useful, and wanted to share it.



  • Less than 20% of doctors using it doesn’t say anything about how those 20% of doctors used it. The fact 80% of doctors didn’t use it says a great deal about what the majority of doctors think about how appropriate it is to use for patient communication.

    So to be clear, less than 20% used what the AI generated directly. There’s no stats on whether the clinicians copy/pasted parts of it, rewrote the same info but in different words, or otherwise corrected what was presented. The vast majority of clinicians said it was useful. I’d recommend checking out the open access article, it goes into a lot of this detail. I think they did a great job in terms of making sure it was a useful product before even piloting it. They also go into a lot of detail on the ethical framework they were using to evaluate how useful and ethical it was.