they almost learned nothing. The grid almost shut down this time, instead of shutting down…
it’s texas, their power grid doesn’t work.
It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.
yeah and so are humans, so i mean, shit happens. Even then it’d likely be more accurate than a human just based off of the very fact that it knows more subjects than any given human. More than all humans alive, even, because its knowledge is based off of the written works of the entirety of humanity, theoretically.
A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it needed to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates at most at the speed of light.
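(quick back-of-the-envelope check on that, assuming light in glass fibre moves at roughly 200,000 km/s, about two thirds of c, and ignoring routing detours and switching delay, which only add to it:)

```python
# rough lower bound on global fibre latency; real routes are longer than this
EARTH_CIRCUMFERENCE_KM = 40_075
FIBRE_SPEED_KM_PER_S = 200_000  # ~2/3 of c, because of glass's refractive index

one_way_s = (EARTH_CIRCUMFERENCE_KM / 2) / FIBRE_SPEED_KM_PER_S
round_trip_s = EARTH_CIRCUMFERENCE_KM / FIBRE_SPEED_KM_PER_S

print(f"one way to the far side of the planet: ~{one_way_s * 1000:.0f} ms")   # ~100 ms
print(f"there and back: ~{round_trip_s * 1000:.0f} ms")                        # ~200 ms
```

so even at the physical limit you’re looking at a good fraction of a second before a change on the other side of the planet can even reach you, and real networks are slower than that.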
well yeah, if we’re defining the ultimate truth as something that propagates through the universe at the highest known speed possible. That would be how that works. Since it’s likely a device of its own accord, and/or responsive to humans, it likely wouldn’t matter, as it would just wait a few seconds anyway.
The dataset that encodes all wrong things would be infinite in size, and constantly change. It can theoretically exist, but realistically it will never happen. And if it would be incomplete it has to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.
at that scale yes, but at this scale, with our current LLM technology, which was what i was talking about specifically, it wouldn’t matter. But even at that scale i don’t think it would classify as a hallucination, because a hallucination is a very specific type of being wrong. It’s literally pulling something out of thin air, and a theoretical general intelligence AI wouldn’t be pulling shit out of thin air; at best it would elaborate on what it knows already, which might be everything, or nothing, depending on the topic. But it shouldn’t just make something up out of thin air. It could very well be wrong about something, but that’s not likely to be a hallucination.
and we haven’t even gotten into the problem of what happens when you have no more data to feed it. Do you make more? That’s an impossible task.
ok so to give you an um ackshually here.
Technically, if we were to develop a real artificial general intelligence, it would be limited to the amount of knowledge that it has, but so is any given human. And its advantage would still be scale of operations compared to a human, since it can realistically operate on all known theoretical and practical information, whereas for a human that’s simply not possible.
Though presumably, it would also be influenced by AI posting that we already have now, to some degree, the question is how it responds to that, and how well it can determine the difference between that and real human posting.
the reason why hallucinations are such a big problem currently is simply due to the fact that it’s literally a predictive text model; it doesn’t know anything. That simply wouldn’t be true for a general artificial intelligence. Not that it couldn’t hallucinate, but it wouldn’t hallucinate to the same degree, and possibly with greater motives in mind.
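(to make that concrete, here’s a toy sketch: not how any real LLM is built, just the same principle at tiny scale. It picks whatever continuation is statistically most common, with no concept of whether the result is true:)

```python
from collections import Counter, defaultdict

# hypothetical toy "predictive text model": count which word tends to follow which
corpus = "the grid went down the grid came back the grid went down".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # most frequent follower wins; its "confidence" is just frequency, not truth
    return following[word].most_common(1)[0][0]

print(predict_next("grid"))  # -> "went", purely because that pairing showed up more often
```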
A lot of the reason human biology tends to obfuscate certain things is simply due to the way it’s evolved, as well as its potential advantages in our life. The reason we can’t see our blind spots is that it would be much more difficult to process things otherwise. It’s the same reason our eyesight is flipped as well. It’s the same reason pain is interpreted the way that it is.
a big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do, and as long as it has the capability to understand this, it shouldn’t be a problem.
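(a toy sketch of what i mean, hypothetical and nothing like a real training pipeline: the training data is full of wrong behaviour, but every wrong example is labelled as wrong, so the model learns what not to do instead of absorbing the wrong thing as truth:)

```python
from sklearn.linear_model import LogisticRegression

# features: [touched_live_wire, wore_insulated_gloves]
examples = [
    ([1, 0], "wrong"),  # grabbed a live wire bare-handed
    ([1, 1], "ok"),     # handled it with insulated gloves
    ([0, 0], "ok"),     # left it alone
    ([1, 0], "wrong"),
]

X = [features for features, label in examples]
y = [label for features, label in examples]

model = LogisticRegression().fit(X, y)
print(model.predict([[1, 0]]))  # expected: ['wrong'], learned from the labelled bad examples
```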
For predictive models? This is probably the case, but even with those you can poison the well, so to speak.
it’s only going to get worse, especially as datasets deteriorate.
With things like reddit being overrun by AI, and also selling AI training data, i can only imagine what mess that’s going to cause.
ah, a classic.
what else are you supposed to do, not make frivolous lawsuits to make a point? You just sit there and go “damn, you got me this time bro”
not once has the fed ever said to a state that the state couldn’t do anything, ever. It’s never happened.
Not once. Don’t ask texas about secession.
i’m not here to read articles most of the time, because people talk about what’s in the article here. And in this case, leaf blowers, specifically electric ones, are a bit quieter in near-field operation.
Which i definitely expected based off of the headline. But like i said, compared to a traditional ICE leaf blower, especially commercial backpack setups, does it make a difference? Uhm. Not sure.
It’s funny to me that people yell at me about not reading articles, even though i understand the general premise of it without reading it. People literally corrected me by stating numbers, because that was the only thing i didn’t mention, since i didn’t read the article. And i didn’t even come here to speak about it, i mostly came here to complain about the fact that small ICE engines exist on lawn equipment.
im not sure that’s what clockworkorange would say, but i’m sure the L makes all of the difference, especially since you took it.
sue for what? Snow falling on the ground? There’s no way that’s getting through courts lmao.
thank you, a cock work orange. What insightful details you have provided us.
huh…
Wonder why.
man it’s a good thing the feds can’t mandate what the states can do…
do you run a local militia? How many armored vehicles do you have there?
also, BOXCAR? ARE YOU COMPARING THIS TO THE FUCKING NAZIS? BRO THIS IS THE TEN COMMANDMENTS!
that’s explicitly not true but ok.
i’m pretty sure this is illegal?
Can we get someone on this?
IMO if it’s that little snow, i’m just fucking leaving it.
It’s not gonna kill me, unless it’s sitting on a solar array or something.
good thing you’re only possibly linux. If you were fully linux i’d be retiring from life.