• Zaktor@sopuli.xyz · 4 months ago

    de Vries, who now works for the Netherlands’ central bank, estimated that if Google were to integrate generative A.I. into every search, its electricity use would rise to something like twenty-nine billion kilowatt-hours per year. This is more than is consumed by many countries, including Kenya, Guatemala, and Croatia.

    Why on earth would they do that? Just cache the common questions.

    It’s been estimated that ChatGPT is responding to something like two hundred million requests per day, and, in so doing, is consuming more than half a million kilowatt-hours of electricity. (For comparison’s sake, the average U.S. household consumes twenty-nine kilowatt-hours a day.)

    Ok, so the actual real-world estimate is on the order of half a million kilowatt-hours per day, for the entire globe. Even if we assume that’s all US traffic, there are about 125M US households, so that’s 4 watt-hours per household per day. An LED light bulb draws about 8 watts; turn one of those off for half an hour and you’ve balanced out one household’s worth of ChatGPT energy use.

    This feels very much like the “turn off your lights to do your part for climate change” distraction from industry and air travel. They’ve mixed and matched units in their comparisons to make this seem like a massive amount of electricity, but it’s basically irrelevant. Even the big AI-in-every-search number only works out to 0.6 kWh/day (again, assuming all search is done by Americans), which isn’t great, but is still on the order of not spending hours watching a big-screen TV or playing on a gaming computer, and compares against the 29 kWh a household already uses each day.

    Math, because this result is so irrelevant it feels like I’ve done something wrong:

    • 500,000 kWh/day / 125,000,000 US households = 0.004 kWh/household/day
    • 29,000,000,000 kWh/yr / 365 days/yr / 125,000,000 households ≈ 0.6 kWh/household/day, compared to the 29 kWh baseline
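    As a sanity check, here’s the same arithmetic as a small Python sketch; the 125M-household count is the rough assumption above, and the usage figures are the ones quoted from the article:

    ```python
    # Sanity check of the per-household numbers quoted above (figures from the article, household count assumed).
    CHATGPT_KWH_PER_DAY = 500_000             # "more than half a million kilowatt-hours" per day
    SEARCH_AI_KWH_PER_YEAR = 29_000_000_000   # de Vries' estimate for AI in every Google search
    US_HOUSEHOLDS = 125_000_000               # rough US household count (assumption)
    HOUSEHOLD_KWH_PER_DAY = 29                # average US household consumption, per the article

    chatgpt_per_household = CHATGPT_KWH_PER_DAY / US_HOUSEHOLDS
    search_per_household = SEARCH_AI_KWH_PER_YEAR / 365 / US_HOUSEHOLDS

    print(f"ChatGPT: {chatgpt_per_household * 1000:.1f} Wh/household/day")        # ~4.0 Wh
    print(f"AI-in-every-search: {search_per_household:.2f} kWh/household/day")    # ~0.64 kWh
    print(f"Baseline household use: {HOUSEHOLD_KWH_PER_DAY} kWh/day")
    ```
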
    • frezik@midwest.social · 4 months ago

      Just cache the common questions.

      AI models work in a feedback loop. The fact that you’re asking the question becomes part of the response next time. They could cache it, but the model is worse off for it.

      Also, they are Google/Microsoft/OpenAI. They will do it because they can and nobody is stopping them.
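
      (For what it’s worth, “cache the common questions” would look roughly like the sketch below; the names and the TTL are made up, and choosing that TTL is exactly the stale-answer problem, which also loses the feedback described above.)

      ```python
      import hashlib
      import time

      # Hypothetical sketch of caching common questions in front of a model.
      CACHE = {}                # normalized question hash -> (answer, timestamp)
      TTL_SECONDS = 24 * 3600   # made-up freshness window; picking this is the hard part

      def normalize(question: str) -> str:
          return " ".join(question.lower().split())

      def cached_answer(question: str, ask_model) -> str:
          key = hashlib.sha256(normalize(question).encode()).hexdigest()
          hit = CACHE.get(key)
          if hit and time.time() - hit[1] < TTL_SECONDS:
              return hit[0]                  # serve the cached answer, no model call
          answer = ask_model(question)       # fall through to the expensive model call
          CACHE[key] = (answer, time.time())
          return answer
      ```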

    • kibiz0r@midwest.social · 4 months ago

      Just cache the common questions.

      There are only two hard things in Computer Science: cache invalidation and naming things.

      • boonhet@lemm.ee · 4 months ago

        You mean: two hard things - cache invalidation, naming things and off-by-one errors

        • kibiz0r@midwest.social · 4 months ago

          Reminds me of the two hard things in distributed systems:

          • 2: Exactly-once delivery
          • 1: Guaranteed order
          • 2: Exactly-once delivery
      • Zaktor@sopuli.xyz · 4 months ago

        It’s a good thing that Google already has a massive business built around caching and updating search responses, then. The naming-things side of their business could probably use some more work, though.