  • Yes, a theoretical future AI that could self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms in parallel. But it would still be making ever-so-small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

    A round trip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of the change. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light - rough numbers in the sketch at the end of this comment.

    A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do, and as long as it has the capability to understand this, it shouldn’t be a problem.

    The dataset that encodes all wrong things would be infinite in size and constantly changing - for any given question there is roughly one right answer but an unbounded number of wrong ones. Such a dataset can theoretically exist, but realistically it never will. And since it would be incomplete, the model would have to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong - which we would call a hallucination.
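
    To put rough numbers on the light-speed point above, here’s a back-of-the-envelope sketch in Python (assuming an idealised great-circle route and light travelling at roughly two thirds of c in glass fibre):

    ```python
    # One loop around the globe over optical fibre, idealised as a great circle.
    EARTH_CIRCUMFERENCE_KM = 40_075        # equatorial circumference
    FIBRE_SPEED_KM_S = 300_000 * 2 / 3     # light slows to ~2/3 c in glass

    round_trip_ms = EARTH_CIRCUMFERENCE_KM / FIBRE_SPEED_KM_S * 1_000
    print(f"~{round_trip_ms:.0f} ms")      # ~200 ms, before any routing overhead
    ```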


  • I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations - that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit (see the toy sketch at the end of this comment).

    My point was just that to truly fix it you would basically have to create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.
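
    A toy sketch of that gap-filling, with entirely made-up numbers and answers - the model only ever has a probability distribution over continuations, and nothing in the sampling step distinguishes recalled fact from plausible invention:

    ```python
    import random

    # Hypothetical distribution for a question the model has no real data on.
    candidates = {
        "It was built in 1987.": 0.41,   # plausible-sounding guess
        "It was built in 1992.": 0.33,   # plausible-sounding guess
        "I don't know.": 0.26,           # honesty is just another continuation
    }

    # Sample one continuation, weighted by probability.
    answer = random.choices(list(candidates), weights=candidates.values())[0]
    print(answer)  # usually a confident, possibly wrong, answer
    ```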


  • Hallucinations in AI are fairly well understood, as far as I’m aware - they’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and on biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don’t have an answer there either), but a true fix should be impossible - a minimal sketch of one common mitigation is at the end of this comment.

    I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and about what we can learn about ourselves from the quirks we see in these technologies.
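
    As promised, a minimal sketch of one common mitigation - abstaining when the model’s own confidence is low. The numbers, names, and threshold are made up for illustration; this reduces hallucinations but cannot eliminate them, because a model can be confidently wrong:

    ```python
    def answer_or_abstain(scored_answers, threshold=0.8):
        """scored_answers: list of (answer, model_confidence) pairs."""
        best, confidence = max(scored_answers, key=lambda pair: pair[1])
        if confidence < threshold:
            return "I don't know"
        return best  # no guarantee: high confidence is not the same as true

    print(answer_or_abstain([("Paris", 0.95), ("Lyon", 0.05)]))  # -> Paris
    print(answer_or_abstain([("1987", 0.55), ("1992", 0.45)]))   # -> I don't know
    ```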



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations. But realistically, the signals coming from our feet take longer to arrive than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot each of our eyes has.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams - fuel for fiction, but not for reality.