• thingsiplay@beehaw.org · 5 months ago

    How did he calculate the 70% chance? Without an explanation, this opinion is no more important than a Reddit post. It’s just marketing fluff talk: get people talking about AI, and in return a small percentage are converted into people interested in AI. Let’s call it clickbait talk.

    First he talks about a high chance that humans will be destroyed by AI, then follows with a prediction that AGI will be achieved in 2027 (only 3 years from now). No. Just no. There is a long way to go to general intelligence. But isn’t he trying to sell you on why AI is great? He follows with:

    “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,”

    Ah yes, he does.

      • MagicShel@programming.dev · 5 months ago

        ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think it’s onto us…

        (I kid. I attribute no sentience or intelligence to ChatGPT.)

    • eveninghere@beehaw.org · 5 months ago

      This is a horoscope trick. They can always say AI destroyed humanity.

      Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!

      Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!

      Whatever. You can point at any catastrophe and there is always AI behind it, because since around 2014 AI has been a basic technology used everywhere.

    • chicken@lemmy.dbzer0.com · 5 months ago

      The person who predicted a 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because it was not taking this risk seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo; this is all explicit in the article. The idea that this guy is motivated by doing marketing for OpenAI is just wrong: the article links to some of his extensive commentary, where he advocates for more government oversight specifically of OpenAI and other big companies, instead of the favorable regulations that company is pushing for. The idea that his belief in existential risk is disingenuous also doesn’t make sense; it’s clear that he and other people concerned about this take it very seriously.