• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • How many times are you running it?

    For the SelfCheckGPT paper, which was basically this method, results were very sample dependent: detection kept improving all the way up to 20 samples (their limit), with the largest gains coming in roughly the first 6…

    I’ve seen it double down: when told that a facet of the answer was incorrect and asked to revise, several times I’d get “sorry for the incorrect information,” followed by the exact same mistake.

    You can’t continue with it in context or it ruins the entire methodology. Showing the output back to the model reintroduces those tokens, and models are terrible at self-correcting when told they’re incorrect, so that step is quite meritless anyway.

    You need to run parallel queries and identify shared vs non-shared data points.

    It really depends on the specific use case in terms of the full pipeline, but it works really well. Even with just around 5 samples and intermediate summarization steps it pretty much shuts down completely errant hallucinations. The only class of hallucinations it doesn’t do great with are the ones resulting from biases in the relationship between the query and the training data, but there’s other solutions for things like that.

    And yes, it definitely does mean inadvertently eliminating false negatives, which is why a balance has to be struck in terms of design choices.
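    The parallel-query, shared-vs-non-shared split described above can be sketched minimally like this. This is a toy illustration, not the SelfCheckGPT implementation: it assumes the N parallel completions are already collected, and it crudely treats each normalized sentence as a “data point” (a real pipeline would use an NLI model or an LLM judge to decide whether two samples support the same claim):

```python
from collections import Counter

def consistent_claims(samples, threshold=0.8):
    """Given N independent completions for the same query, keep only the
    claims that appear in at least `threshold` of the samples; the rest
    are flagged as likely hallucinations.

    A "claim" here is crudely approximated as a normalized sentence.
    """
    n = len(samples)
    counts = Counter()
    for s in samples:
        # Deduplicate within a sample so a repeated sentence can't
        # inflate its own support count.
        claims = {c.strip().lower() for c in s.split(".") if c.strip()}
        counts.update(claims)
    shared = {c for c, k in counts.items() if k / n >= threshold}
    flagged = set(counts) - shared
    return shared, flagged
```

    With five samples that all agree on one fact but scatter on another, the agreed-upon claim survives and the stochastic one gets flagged, which is the behavior described above.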


  • It’s not hallucination, it’s confabulation. Its nuances are very similar to confabulation in stroke patients.

    Just like how the pretrained model trying to nuke people in wargames wasn’t being malicious; it was more like how anyone sitting in front of a big red button labeled ‘Nuke’ might be, minus a functioning prefrontal cortex to inhibit that exploratory impulse.

    Human brains are a delicate balance between fairly specialized subsystems.

    Right now, ‘AI’ companies are mostly trying to do it all in one at once. Yes, the current models are typically a “mixture of experts,” but it’s still all in one functional layer.

    Hallucinations/confabulations are currently fairly solvable for LLMs. You just run the same query a bunch of times and see how consistent the answers are. If it’s making the answer up because it doesn’t know, the answers will be stochastic. If it knows the correct answer, they will be consistent. If it only partly knows, they will be somewhere in between (but in a way that can be fine-tuned to be detected by a classifier).

    This adds a second layer across each of those variations. If you want to check whether something is safe, you’d also need to verify that answer isn’t a confabulation, so that’s more passes.

    It gets to be a lot quite quickly.
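    As a rough sketch of what that per-query consistency check looks like, here token overlap stands in for the NLI- or judge-based scoring used in practice, and the decision threshold is exactly what the classifier mentioned above would be tuned to:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two answers."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(samples: list[str]) -> float:
    """Mean pairwise overlap across N samples of the same query:
    near 1.0 when the model answers consistently (it likely knows),
    lower when the answers are stochastic (likely confabulation)."""
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

    Identical answers score 1.0, scattered dates or names pull the score down, and the partly-known middle ground lands in between.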

    As the tech scales (what’s being done on servers today will run at around 80% of that capability on smartphones in about two years), those extra passes aren’t going to need to be as massive.

    This is a problem that will eventually go away, just not for a single pass at a single layer, which is 99% of the instances where people are complaining this is an issue.


  • “It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.”

    Neither of these things are true.

    It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

    And while it is trained on predicting the next token, that doesn’t mean it keeps doing so purely from surface statistics or by picking the “most probable” word in a typical sentence, as your comment suggests.

    Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

    And that was a toy model.



  • “How can we promote our bottom of the barrel marketing agency?”

    “I know, let’s put a random link to our dot com era website on Lemmy with no context. I hear they love advertising there. We can even secure our own username - look at that branding!! This will be great.”

    “Hey intern, get the bags ready. The cash is about to start flowing in, and you better not drop a single bill or we’ll get the whip again!”





  • kromem@lemmy.world to Technology@lemmy.world · Hello GPT-4o · 7 months ago

    Definitely not.

    If anything, making this version available for free to everyone indicates that there’s a big jump coming sooner rather than later.

    Also, what’s going on behind the performance boost with Claude 3 and now GPT-4o on leaderboards in parallel with personas should not be underestimated.



  • He’s trying to say that the people coming into the country are dangerous criminals, but he’s done the talking points so often by now that neither he nor his audience need the connective tissue between the ideas.

    “Oh, now he’s doing the Hannibal Lecter bit? Yeah, screw illegals or whatever.”

    They have their own coded language at this point: even as Trump slips further and further into dementia, they still understand what their adoptive hate-spewing neo-Nazi grandpa dictator is talking about.

    “And then the blargabaghehhhh…

    “Exactly. Fuck the blargabaghehhhh…”

    Edit: I don’t think I’ll ever stop laughing when I see that clip, btw.


  • Even then they should still be held to a higher standard.

    Especially now in the era of generative AI.

    The poster should have a well known character in the world lore holding the Coke, or a location in the map for the car ad, etc.

    The ads should feel like they are actually a part of the world, and shouldn’t be put in a game unless this can be accomplished.

    In-game ads don’t have to suck. But because the power dynamic is such that shit ads can be shoved down players’ throats, with the only recourse being to not buy that publisher’s games, the medium isn’t going to find an acceptable equilibrium.

    In-game ads for in-game assets in live service games may not suck too much, though (and they’re an inevitable part of the future).


  • It’s critical for the USA that he win this next election - Trump is an existential threat to Democracy in the US.

    It’s critical for the world that Trump not win.

    Trump is immediately going to stop any support for Ukraine and hand it to Putin on a silver platter, is going to weaken NATO and the UN as much as he possibly can, and is going to flip the US from the virtual ‘allies’ to the ‘axis’ before the end of his next four years. Which will go on for as long as he is alive, because once he gets power he’s not giving it up.

    He isn’t even playing coy with his praise for megalomaniacal current dictators, especially Putin.

    With Trump in charge of the US, you can expect him and Putin together actively working to spread Christian fascism to Europe using subversion and eventually force where needed.

    It’s not just the US that’s on the line in November.


  • I mean, I absolutely will be ashamed when I vote for Biden in November.

    Ashamed that the best this supposed bastion of democracy could offer up for me to choose between are:

    • The oldest person ever to run for the job, who makes everyone hold their breath hoping he won’t do something addled each time he’s in front of a podium.

    • A guy facing several criminal cases at once who tried to overthrow the government, and is simultaneously beloved by conservative Christians trying to ban books on sex while having had sex with a pornstar who he said reminded him of his daughter.

    • A guy denouncing modern science who admitted that he had a worm that ate part of his brain and died.

    There’s a lot to unpack there, and pretty much all of it makes me feel shame, irrespective of my ethnic background.