You forgot quantum! We’re developing super duper plants that suck the carbon out of the atmosphere harder than a crack whore and make everything great for everyone (with money)!!!
And what is the result? Either you have to check whether the sources really say what the agent claims they do, or you don’t check them, meaning the whole thing is useless since the agent might come up with garbage anyway.
I think you’re arguing on a different level than I am. I’m not interested in mitigations or workarounds. That’s fine for a specific use case, but I’m talking about the usage in principle. You inherently cannot trust an AI. It does hallucinate. And unless we get the “shroominess” down to an extremely low level, we can’t trust the system with anything important. It will always be just a small tool that needs professional supervision.
Even agents suffer from the same problem stated above: you can’t trust them.
Compare it to a traditional SQL database. If the DB says that it saved a row, or that there are 40 rows in the table, then that’s true. Databases do have bugs, obviously, but in general you can trust them.
AI agents don’t have that level of reliability. They’ll happily tell you that the empty database has all 509 entries you expect it to have. Sure, you can improve reliability, but you won’t get anywhere near the DB example.
And I think that’s what makes it so hard to extrapolate progress. AI fails miserably at absolutely basic tasks and doesn’t even see that it failed. Success seems more chance than science. That’s the opposite of how every technology before it worked: simple problems first, and once those are solved, you push towards the next challenge. AI, in contrast, is remarkably good at some highly complex tasks, but then fails at basic reasoning a minute later.
The problem I see is mainly the divergence between hype and reality now, and a lack of a clear path forward.
Currently, AI is almost completely unable to work unsupervised. It fucks up constantly and is like a junior employee who sometimes shows up on acid. That’s cool and all, but has relatively little practical use. However, I also don’t see how this will improve over time. With computers or smartphones, you could see relatively early on what the potential was, and the progression was steady and could be somewhat reliably extrapolated. With AI, that’s not possible. We have no idea whether the current architectures will hit a wall tomorrow and stop improving. It could become an asymptotic process where we need massive increases for marginal gains.
Those two things combined mean that we currently only have toys, and we don’t know if these will turn into tools anytime soon.
There’s a lot of heat to sink.
Absolutely. I barely touch code anymore, but I talk about how to touch code a lot.
Very little substance or conclusions. While the technology is improving, you’re not taking into account that AI investment is a bubble.
AI can certainly help, but not a single model was able to consistently deliver good results. A technology that needs constant supervision by an actual expert isn’t really all that useful. And this is not just a problem of scale; it’s a limitation of the current approach. Throwing billions at a problem to save a few million just isn’t worth it.
And that world would be much cooler.
You can’t tell, unfortunately.
God that’s bad. Have an upvote.
I wonder what will happen to mainstream platforms.
If you talk to “regular” people without our background, they often don’t even believe that astroturfing exists, especially not on “their” platform. So there will inevitably be a large number of people who are oblivious to the fact that they’re talking to machines and feel like this is the real world. That’s scary.
I was honestly a bit shocked when I first saw videos of it.
It’s so narrow that it looks like you can’t even open the car doors properly, and that’s in a tunnel filled with lithium gas bombs.
Actually I asked something rather similar yesterday: https://feddit.de/post/8791793
There were no real winners in my opinion.
I think you completely misunderstand how jobs work for most people.
Your direct boss most likely can’t fire you directly, but they assign you work. There’s tons of boring, mind-numbing work nobody wants to do. Guess who just volunteered for that? The same is true for shift planning. Just assigning you the shitty shifts nobody wants is perfectly legal. Even just completely ignoring you as a person is possible, and it will grind you down.
You won’t get a union to go on strike for that. And how is that (legally!) discrimination? Someone has to do the shift, after all!
You can rave about unions and laws all you want, but being an asshole is not illegal, so if your boss acts like one, there’s nothing you can do.
In theory.
Any boss with half a brain will find other ways to either fire that person, make their life miserable, or simply make it very clear that they’re not going to get any promotions, pay rises, etc.
A bad boss can make your life hell, completely within their rights.
The problem is that many employees are not in a position to ignore their bosses. Making it downright illegal to call after hours is the only way to enforce this.
It’s possible by analyzing the title and subtext (and the article snippet, if it exists). I tried having an AI model estimate the likeness of articles. It worked relatively well, but I lack the motivation to build it out into a usable app.
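Roughly, the idea was something like this (just a sketch, assuming a sentence-transformers embedding model; the model name, field names, and the liked-article list are placeholders):

```python
# Sketch: embed title + subtext + snippet and compare against articles liked before.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def article_text(article: dict) -> str:
    # Title and subtext, plus the snippet if the feed provides one.
    return " ".join(filter(None, [article.get("title"), article.get("subtext"), article.get("snippet")]))

def likeness_score(candidate: dict, liked_articles: list[dict]) -> float:
    # Highest cosine similarity between the candidate and anything previously liked.
    texts = [article_text(candidate)] + [article_text(a) for a in liked_articles]
    embeddings = model.encode(texts, normalize_embeddings=True)
    return float(np.max(embeddings[1:] @ embeddings[0]))
```

Anything scoring above some tuned threshold gets surfaced, the rest gets hidden.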
Even as a link aggregator that would be perfectly fine for me personally.
What really bugs me is that many news sites don’t keep their feeds clean, so you often have duplicates. And most importantly: if you have multiple sources, you’ll get multiple copies of the same information packaged slightly differently, and often I’m not even interested in a single copy.
For example, all news outlets had some Grammy/Taylor Swift crap in their feeds. Each outlet had like three different articles, all regurgitating the same information. I would love to have something like topic clusters, so that I could discard all articles I’m not interested in in bulk.
I even tried building it myself, but wasn’t very successful.
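What I had in mind was roughly this (only a sketch, assuming a sentence-transformers model; the similarity threshold is a number that would need tuning):

```python
# Sketch: greedily group articles whose titles are near-duplicates,
# so a whole topic cluster can be discarded in one go.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def cluster_titles(titles: list[str], threshold: float = 0.75) -> list[list[int]]:
    embeddings = model.encode(titles, normalize_embeddings=True)
    clusters: list[list[int]] = []   # lists of indices into `titles`
    reps: list[np.ndarray] = []      # embedding of each cluster's first article
    for i, emb in enumerate(embeddings):
        # Put the article into the first cluster whose representative is similar enough.
        for cluster, rep in zip(clusters, reps):
            if float(emb @ rep) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
            reps.append(emb)
    return clusters
```

Each cluster could then be shown as one entry with a “discard all” option.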
Is life in prison really better than a proper killing?
That’s what’s really confusing me: why add an expensive feature that obviously doesn’t work and even in the best case adds only minor improvements?
I mean, it’s not another option like with Bing. It’s the default. Every stupid little search will take up AI resources. For what? Market cap?