The first invasive brain chip that Neuralink embedded into a human brain has malfunctioned, with neuron-surveilling threads appearing to have become dislodged from the participant’s brain, the company revealed in a blog post Wednesday.

It’s unclear what caused the threads to become “retracted” from the brain, how many have retracted, or if the displaced threads pose a safety risk. Neuralink, the brain-computer interface startup run by controversial billionaire Elon Musk, did not immediately respond to a request for comment from Ars. The company said in its blog post that the problem began in late February, but it has since been able to compensate for the lost data to some extent by modifying its algorithm.
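
Neuralink hasn’t said exactly what the algorithmic fix looks like. As a purely illustrative sketch (the function names, array shapes, and the simple least-squares decoder below are assumptions, not Neuralink’s published method), one standard way a BCI decoder can compensate for retracted threads is to mask out the dead channels and re-fit on the electrodes that still pick up signal:

```python
import numpy as np

def refit_decoder(spike_counts, cursor_targets, retracted_channels):
    """Drop retracted electrode channels and re-fit a linear cursor decoder.

    spike_counts:       (n_samples, n_channels) binned spike counts
    cursor_targets:     (n_samples, 2) intended cursor velocities from a calibration task
    retracted_channels: indices of channels whose threads no longer read neurons
    """
    keep = np.setdiff1d(np.arange(spike_counts.shape[1]), retracted_channels)
    X = spike_counts[:, keep]
    # Least-squares fit: cursor velocity ~= spike counts @ weights
    weights, *_ = np.linalg.lstsq(X, cursor_targets, rcond=None)
    return keep, weights

def decode(sample, keep, weights):
    """Map one time bin of spike counts to a 2D cursor velocity."""
    return sample[keep] @ weights
```

Whatever Neuralink actually did is presumably more involved, but retraining around lost channels is a routine technique in brain-computer interface work.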

  • Grass@sh.itjust.works · 7 months ago

    I can’t believe anyone willingly got this after the monkey testing thing. They have to be taking advantage of people not fit to make decisions for themselves.

    • CaptDust@sh.itjust.works · 7 months ago

      The patient became quadriplegic in a car accident. I wouldn’t call him unfit to make decisions, but he’s definitely someone desperate to find a sense of normalcy.

      • over_clox@lemmy.world · 7 months ago

        I thought the goal was to reconnect the brain to the spinal cord though.

        But dude is still stuck in a wheelchair, and so far it’s basically been just a fancy experimental mouse cursor? Installed in his brain? And already failing?..

        • VirtualOdour@sh.itjust.works · 7 months ago

          Watching his videos, he’s a clever and self-aware guy, more than capable of thinking for himself. Hate Elon, but you don’t need to shit on the disabled by acting like being in a wheelchair means you can’t think for yourself.

        • CaptDust@sh.itjust.works · 7 months ago

          I don’t really keep up with Elon moon shits, but I think the idea is to substitute the brain’s neurological commands. Research is still at the “read” stage, like knowing what information the brain is requesting. Eventually Neuralink will also need to figure out how to relay those signals back to the nervous system in a way it understands, engaging muscles and such, effectively rebuilding the bridge that was damaged. Or robot legs or whatever, but the key is first getting the information into a format they can act on. But I’m not smart, this is just how I understood it.
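
          As a toy sketch of that “read now, write later” split (hypothetical names, not any real API; the stimulation half is exactly the part nobody has solved for restoring movement):

          ```python
          import numpy as np

          class ToyMotorBCI:
              """Toy model of the two halves of a motor brain-computer interface."""

              def __init__(self, weights):
                  # (n_channels, 2) decoder weights learned during calibration
                  self.weights = np.asarray(weights)

              def read_intent(self, spike_counts):
                  # "Read" stage: decode recorded neural activity into an intended
                  # action, here a 2D cursor velocity. Roughly where research is today.
                  return np.asarray(spike_counts) @ self.weights

              def write_to_body(self, command):
                  # "Write" stage: relay the decoded command back to the nervous
                  # system (or a prosthetic) in a form it understands. Still an open
                  # problem for restoring natural movement.
                  raise NotImplementedError("bridging back to the muscles isn't solved yet")
          ```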

          • over_clox@lemmy.world · 7 months ago

            And they already killed how many monkeys testing this stuff? Last I heard, they tested 15 monkeys or so, and 13 of them ended up dying or having to be euthanized after only a few months.

            They already admitted they had problems with their brain electrodes corroding after a few months or so…

            I like to keep my noodle intact thank you very much. Even if I was a vegetable, I wouldn’t want a chip in my head that’s known to have corroding wires.

            • vrek@programming.dev · 7 months ago

              See, the corroding part scares me. Actual electrodes implanted in the brain should never corrode. The company I work for actually makes brain implants (no, not Neuralink), so I know it’s possible.

              That stuff is EXPENSIVE though… So he must have cheaped out and used a cheaper metal, and that’s why it corroded.

      • JJROKCZ@lemmy.world · 7 months ago

        Guy has nothing left to lose, really. I don’t blame him for taking this risk, considering I would strongly consider it myself were I in his situation.

        • TheDarksteel94@sopuli.xyz · 7 months ago

          The only thing he could lose is a few important brain functions, if something truly does go wrong. Nothing major. /s

          • Buddahriffic@lemmy.world · 7 months ago

            I mean, in that position I’d probably be willing to gamble with my life. Not with Musk involved, but if there was a similar opportunity without his involvement. It would be an honorable death, too, as long as it didn’t result in a halt on the research.

            If I could fully trust the ones doing it, there is a certain % of death risk I’d be willing to take as a healthy person once the tech is more mature. The possibilities of such technology are endless, especially as the tech becomes more interactive rather than just observing and acting on those observations. I’m not sure if I’d want to live in the Matrix, but I’d love to at least visit it or play VR games based on that tech. Altered Carbon would be interesting, too.

            • beebarfbadger@lemmy.world · 7 months ago

              Sure, the possibilities are endless, so the first thing we’ll get that has any research money and effort put into it is how to turn it into an advertising platform and then maximally enshittifying it as soon as there’s a market share to speak of.

              • Buddahriffic@lemmy.world · 7 months ago

                Yeah, that “if I could trust it” is pulling a lot of weight there. Like, I decline fucking website cookies; tech like that has way more invasive potential. Maybe they wouldn’t even need to advertise and could directly make you buy things or give them free labour. You’d just need a module to make a person act like a normal, happy person, and with that they could potentially do anything “under the hood” without being detected. The possibilities are endless in the dystopian direction, too. Realistically, “if I could trust it” isn’t a requirement that can be met.

                • beebarfbadger@lemmy.world · 7 months ago

                  Oh, they’d never do anything as sinister as that. That may still be illegal (if the ultra-rich lobbying hasn’t taken care of that obstacle by that point).

                  Instead, they’ll just make sure that whatever essential core service they’ve built a monopoly in by just muscling the poorer competitors out of the race will cease to be offered to you if you refuse to hand all your money over to them.

                  See also: insulin, hospital treatment, etc. This is just a new playing field to find old prey in.

      • Psythik@lemmy.world · 7 months ago

        Wait, is that what these brain chips are for? Well now I can’t hate on them as much as I used to if they’re meant to help people learn how to walk again. I thought it was just supposed to help you process thoughts more quickly or something, like a math coprocessor in an old 90s PC.

        • CaptDust@sh.itjust.works · 7 months ago

          It’s one use. I’ve heard their designs described as opening up the brain’s I/O. Once the “data lanes” are available, it’s the “applications” that implement uses on top of them. That could mean adding a math coprocessor, or correcting vision issues, or getting tweets beamed into your mind. Roll the dice on a Musk project 🤷

          There’s a ton of potential uses for the tech, if they can get it functional, but it’s going to require too much trust for me to realistically consider it.
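
          That “data lanes plus applications” framing is essentially a publish/subscribe layer: the implant exposes a decoded neural stream and separate applications subscribe to it. A hypothetical sketch (none of these names are Neuralink’s):

          ```python
          class NeuralStream:
              """Hypothetical 'data lane' fanning decoded samples out to applications."""

              def __init__(self):
                  self.subscribers = []

              def subscribe(self, callback):
                  # An "application" registers interest in the decoded signal.
                  self.subscribers.append(callback)

              def publish(self, sample):
                  # The implant-side driver pushes each decoded sample to every app.
                  for callback in self.subscribers:
                      callback(sample)

          # Different "applications" layered over the same stream:
          stream = NeuralStream()
          stream.subscribe(lambda s: print("cursor app:", s))
          stream.subscribe(lambda s: print("math coprocessor app:", s))
          stream.publish({"intent": "move_right"})
          ```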

  • Apytele@sh.itjust.works · 7 months ago

    Oh god, I was worried they’d killed him horribly. This is actually probably fine, and almost an expected setback.

      • Apytele@sh.itjust.works · 7 months ago

        For a technology that could someday help a quadriplegic interact with the world fully and independently again, I’m willing to tolerate some hitches. There’s a reason they didn’t pick some full-on walkie-talkie for their first human trial, and there’s a reason that kid looked motherfucking hyped to have brand-new technology that he’s the first human to even try installed directly into his fucking brain. The problem is abled people thinking this is fundamentally for them. Bby no, they’re trying to help people walk again, even if the legs are robots. You’re looking at the wrong risk-benefit profile.

        • jennwiththesea@lemmy.world · 7 months ago

          The concept is wonderful. I do not trust Elon with that concept. I worry that many folks with high hopes of this helping them will just end up used and hurt.

  • ivanafterall@kbin.social · 7 months ago

    The company said in its blog post that the problem began in late February, but it has since been able to compensate for the lost data to some extent by modifying its algorithm.

    Because that’s what people are worried about: THE LOST DATA.

    • mhague@lemmy.world · 7 months ago

      When a computer reads some signal, the 0s and 1s in its memory are the data. The data must be processed so that the computer can understand it.

      This computer is using threads to read neuron activity. It must necessarily receive data, because if it didn’t, it wouldn’t be reading neuron activity. They’re the same thing.

      This data is processed so that the computer can make sense of the brain. Once it understands some activity it generates signals that can control external devices.

      Here’s an example. Imagine a device that monitors the heart and does something to fix a problem. The device would get data on the heart and process that data so it can perform its function.

      Wouldn’t monitoring health concerns and mitigating data loss be extremely important in these scenarios?
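
      That heart-device analogy is just a sense → process → act loop. A minimal hypothetical sketch of the same idea:

      ```python
      import random
      import time

      def read_heart_rate():
          # Stand-in for reading a real sensor; returns beats per minute.
          return random.randint(40, 120)

      def respond(bpm):
          # Placeholder for whatever corrective action the device takes.
          print(f"abnormal rhythm at {bpm} bpm, intervening")

      def monitor(low=50, high=110):
          """Sense, process, act: like the hypothetical cardiac device above."""
          while True:
              bpm = read_heart_rate()         # the raw data the device receives
              if bpm < low or bpm > high:     # processing: decide what the data means
                  respond(bpm)                # act on what it means
              time.sleep(1)
      ```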

      • 📛Maven@lemmy.sdf.org · 7 months ago

        The point is that this is the opening paragraph about something going wrong in human brain surgery, and the first thing they tell us is “don’t worry, the data’s fine”, rather than anything about the human. Indeed, you have to read to the last paragraph to find:

        Arbaugh’s safety does not appear to be negatively impacted.

        • Ech@lemm.ee · 7 months ago

          and the first thing they tell us is “don’t worry, the data’s fine”, rather than anything about the human.

          I do agree it would have been significantly more considerate to mention that the person is okay first, but I feel like you’re confusing data storage (i.e., something they’re collecting) with data processing (i.e., how the device operates). The data in question is the latter. In other words, they are explaining that the problems are being accounted for so that the device can still function in the human it’s attached to.

          • 📛Maven@lemmy.sdf.org · 7 months ago

            No, I understood that; I did read the article. I’m lambasting the fact that, in an article about “brain chip gone wrong”, burying the “but the human seems to be unharmed” part at the end is indicative of a set of priorities wildly different from my own.

    • blazeknave@lemmy.world · 7 months ago

      TY! My first thought was he was this poor sweet guy who just wanted to play Civ and fell for this grifter

  • gardylou@lemmy.world · 7 months ago

    How is this shit legal? Like, why is any company allowed to willy nilly fuck around with trying to implant computer chips into human brains?

    • Alimentar@lemm.ee · 7 months ago

      Through consent. The guy probably either has a terminal illness and is happy to contribute to research, or is completely paralysed, so an operation like this could benefit both parties.

      It’s an agreement and I’m sure the risks are expressed to the individual.

      • gardylou@lemmy.world · 7 months ago

        I’m saying the continued misuse of technology for unnecessary and dangerous purposes can be considered a threat to public health and safety in the long term. For far too long we have identified the risks of certain technologies, especially in the hands of amoral alt-whites like Musk (he spent $44 billion to amplify neo-Nazis, for fuck’s sake), but shrugged our shoulders at the idea of regulating or banning dangerous technology. I want to challenge people to envision a world where we don’t have to tolerate shit like this.

  • MuchPineapples@lemmy.world · 7 months ago

    I don’t know which company I would trust developing my brain implant, but it sure as hell isn’t Tesla. Their software and hardware history is less than stellar.