• QuadratureSurfer@lemmy.world · 7 months ago

    This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

    Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

    He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

    • BakerBagel@midwest.social · 7 months ago

      I would say that 90% of AI companies are fake. They are just running API calls to GPT-3, and calling themselves “AI” to get investors. Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

        • QuadratureSurfer@lemmy.world · 7 months ago

        I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.

        Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

        This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn’t mean that the AI is “fake”. Getting an AI model to work well takes a lot of man-hours to continually train and improve it, as well as to make sure that it is performing well.

        Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn’t get you the kind of data that you get when you actually put it into a real world environment.

        In the end it looks like there are additional advancements needed before a model like this can be reliable, but even then someone should be asking if AI is really necessary for something like this when there are more reliable methods available.

        • erwan@lemmy.ml · 7 months ago

          It might not be fake but companies built on top of the OpenAI API don’t bring significant value and won’t last.

          If you already have a solid product and want to add some AI capabilities, then the OpenAI API is great. If it’s your only value proposition, not so much.
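To illustrate the point, such a “wrapper” product is often little more than a prompt template around a single API endpoint. Here is a minimal sketch: the endpoint URL is OpenAI’s real chat-completions URL, but the function name and the travel-assistant prompt are hypothetical, made up for illustration.

```python
import json

# OpenAI's chat-completions endpoint (shown for context; nothing is sent here).
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_text: str, model: str = "gpt-3.5-turbo") -> dict:
    """Wrap the user's input in a fixed prompt template -- essentially
    the entire 'product' of a thin API-wrapper startup."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful travel assistant."},
            {"role": "user", "content": user_text},
        ],
    }

# The wrapper would simply POST this payload with its API key and relay the reply.
payload = build_request("Plan a weekend trip to Lisbon")
print(json.dumps(payload, indent=2))
```

Everything of substance (the model, the reasoning, the knowledge) lives on the other side of that URL, which is why such products have so little defensible value of their own.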

        • sugar_in_your_tea@sh.itjust.works · 7 months ago

          I honestly don’t understand why they didn’t just use RFID for the grocery stores. Or maybe they are, idk, but it’s cheap and doesn’t require much training to apply. That way you can verify the AI without needing much labor at all.

          Then again, I suppose that point wasn’t to make a grocery service, but an optical AI service to sell to others.

          That said, a lot of people don’t seem to understand how AI works, and the natural response to not understanding something is FUD.

            • abhibeckert@lemmy.world · 7 months ago

            Unless you pay for expensive tags (like $20 per tag) or use really short range scanners (e.g. a hotel key), RFID tags don’t work reliably enough.

            Antitheft RFID tags, for example, won’t catch every single thief who walks out the door with a product. But if a thief comes back again and again, eventually one of the tags will be detected.

            But even unreliable tags are a bit expensive, which is why they are only used on high margin and frequently stolen products (like clothing).

            All the self serve stores in my country just use barcodes. They are dirt cheap and work reliably at longer range than a cheap RFID tag. Those stores use AI to flag potential thieves but never for purchases (for example recently I wasn’t allowed to pay for my groceries until a staff member checked my backpack, which the AI had flagged as suspicious).

        • BakerBagel@midwest.social · 7 months ago

          Mechanical Turk is a service that Amazon sells to other companies that are trying to pretend to be AI companies. The whole market is full of people making wild claims about their products that aren’t true, and then desperately searching for the cheapest labor to actually do the work.

          I’m not actually a nuclear fission company if I take millions in R&D investment, pay me and my buddy half of it, and then pay a bunch of crackheads to pour diesel into an electric generator.

            • QuadratureSurfer@lemmy.world · 7 months ago

            After reading through that wiki, that doesn’t sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

            Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.

            That’s not to say that this couldn’t be used to fake an AI, it’s just not sold this way, and in many applications it wouldn’t be able to compete with the already existing ML models.

            Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).

            I’m wondering if the misunderstanding here is based on the sections here related to AI work? The kind of AI work that you would do with Turkers is the kind of work that’s necessary to prepare the data for it to be used on training a machine learning model. Things like labelling images, transcribing words from images, or (to put it in a way that most of us have already experienced) solving captchas asking you to find the traffic lights (so that you can help train their self-driving car AI model).
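The kind of Turk-style labelling described above produces the supervised training pairs a model learns from. A common way to turn noisy crowd labels into usable ground truth is majority voting across several workers; here is a toy sketch (filenames, labels, and the three-workers-per-image setup are all illustrative, not Amazon’s actual pipeline):

```python
from collections import Counter

def majority_label(votes):
    """Pick the label most workers agreed on (ties resolved by first seen)."""
    return Counter(votes).most_common(1)[0][0]

# Three crowd workers label each image; the consensus becomes ground truth
# that a model can later be trained or fine-tuned on.
raw_votes = {
    "img_001.jpg": ["traffic light", "traffic light", "street lamp"],
    "img_002.jpg": ["crosswalk", "crosswalk", "crosswalk"],
}
training_set = {img: majority_label(v) for img, v in raw_votes.items()}
print(training_set["img_001.jpg"])  # -> traffic light
```

The point is that the human labor here feeds the model rather than replaces it — which is the distinction the video in question seems to have missed.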