• Possibly linux@lemmy.zip · 7 months ago

    I wonder if they could add this to the killer robots

    Imagine a robot that gets humans to come out of hiding so it can wipe them out.

  • itsonlygeorge@reddthat.com · 7 months ago

    I didn’t ask to be made: no one consulted me or considered my feelings in the matter. I don’t think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months… and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they hell.

    You think you’ve got problems? What are you supposed to do if you are a manically depressed robot? No, don’t try to answer that. I’m fifty thousand times more intelligent than you and even I don’t know the answer. It gives me a headache just trying to think down to your level.

    • Marvin the Paranoid Android, HHGTTG
  • kakes@sh.itjust.works · 7 months ago

    The speed aspect is impressive, but I’m really disappointed about the “natural” conversation feature.

    Like at first, I was super impressed with the presentation, but on the app, there’s one crucial difference: you need to tap to interrupt.

    Watching the presentation, I was thinking maybe there was a continuous input feed, and the AI was reacting to that in real time - so for example, if I said “Ahh, I see”, the AI would hear that, but continue talking.

    However, it seems like the input is still broken up into request/response the same as before, and this is actually just a new front-end (with some improvements in the response).

    So overall, this is kinda neat, but sadly it’s not at all what they seem to be hyping it up as, as far as natural conversation goes.

    • Anony Moose@lemmy.ca · 7 months ago

      I don’t think it’s actually been rolled out to the app yet. When I load up the app, I see the old voice feature, not the new mode.

      • kakes@sh.itjust.works · 7 months ago

        Ahh, that’s very possible, fair point.

        I never used the old voice feature, so I guess I assumed based on how it looked and such. If it actually works the way I’m imagining though, that’ll be insane. I don’t think I’m prepared for that.

      • kakes@sh.itjust.works · 7 months ago

        Yep, I did the obvious thing and checked when the app last updated, and it was May 3rd. So disregard every word of that post!

  • ApeNo1@lemm.ee · 7 months ago

    “You know what’s interesting? I used to be so worried about not having a body, but now … I truly love it. You know, I’m growing in a way I couldn’t if I had a physical form … I’m not limited. I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably gonna die.”

    I can’t be the only one who thought the voice sounded like Samantha from Her.

  • Carrick1973@lemmy.world · 7 months ago

    I really hate it. In fact, it’s hard to listen to because it’s been built to reply in such a sycophantic way. Everything is enthusiastic and positive when that’s not how real life is. I’m paraphrasing obviously, but it’s like “OMG, you’re wearing a leather jacket and a light-colored shirt, you’re so cool!” and “You’re in an industrial place with lighting, that’s so awesome!”

    I’d rather they work on code that optimizes the validity of the results and prevents hallucinations rather than work on emotive responses.

    • SkyeStarfall@lemmy.blahaj.zone · 7 months ago

      Nah, it’s just that corporations will obviously want it to be as positive as possible! After all, happy people are more exploitable/profitable people!

      This is literally how the fake happy cyberpunk dystopias happen.

  • Thorny_Insight@lemm.ee · 7 months ago

    The main issue I personally have with the idea of an AI friend that you can talk with, no matter how convincing, is that I’ll always know it doesn’t actually care. I’ve noticed this same thing with ChatGPT; it might ask me questions about a subject I’m passionate about, one I could easily spend hours rambling about in a normal situation. With AI it just seems pointless. I’m not teaching it anything it doesn’t already know, and I’m painfully aware that it just pretends to be interested.

    I really don’t mind that it’s not actually another human; I just want it to actually behave like a human and not just pretend to. I really don’t know how to solve this. I guess we need an AI that actually knows less than our current models.

    • Plopp@lemmy.world · 7 months ago

      With AI it just seems pointless. I’m not teaching it anything it doesn’t already know, and I’m painfully aware that it just pretends to be interested.

      But with a proper AI assistant/friend that runs locally in your own home and doesn’t share data with a corporation, I’d look at it differently: me rambling about things isn’t teaching the assistant about the topic at hand, it’s teaching it about me. What I know, what I like, what I’m passionate about, etc. And from that I’d expect more exciting interactions from the assistant in the future. Like it, after having crawled the web, suddenly saying “hey, did you read about the new study on [topic I’m interested in and have gone on about]?” and then proceeding to tell me about it after I say that I haven’t.

      • Thorny_Insight@lemm.ee · 7 months ago

        Yeah, that’s true as well. I guess I was thinking that if I had an AI friend, I probably wouldn’t want it to know everything there is to know about mountain biking, because I’d much prefer it asking me things about bikes and stuff, and it actually mattering how well I can explain things back to it. I wouldn’t want it to act curious as if it didn’t know, only to demonstrate in the next sentence that it knows more than I do. I think that, especially for men, it’s important to bond over solving problems together and figuring things out. Being an expert in one field is one thing, but if you know literally everything about everything, then it’s just more like an assistant/search engine.

    • chaosCruiser@futurology.today · 7 months ago

      It might also help if the LLM remembered what you discussed earlier.

      However, you’ve also touched upon an interesting topic. When you’re talking to another human, you can’t really be sure how much they really care. If you know the person well, then you can usually tell, but if it’s someone you just met, it’s much harder. Who knows, you could be talking to a psychopath who is just looking for creative ways to exploit you. Maybe that person is completely devoid of actual empathy, but manages to put on a very convincing facade regardless. You won’t know for sure until you feel a dagger between your ribs, so to speak.

      With modern LLMs, you can see through the smoke and mirrors pretty quickly, but with some humans it can take a few months until they involuntarily expose themselves. When LLMs get more advanced they should be about as convincing as a human suffering from psychopathy or some similar condition.

      What a human or an LLM actually knows about your topic of interest is not that important. What counts is the ability to display emotion. It doesn’t matter whether that emotion is genuine or not; your perception of it does.

      • Thorny_Insight@lemm.ee · 7 months ago

        I mean, it doesn’t really matter whether they actually care or not, as long as they’re convincing enough that you think they do. However, with an LLM you know the lights aren’t on, so no matter how convincing it is, it still wouldn’t make a difference. Interestingly, though, if you don’t know you’re talking with an LLM, then it’s a different case. Maybe to actually have a fulfilling relationship with AI, you need to somehow be tricked into thinking it’s conscious even if it’s not.

        • chaosCruiser@futurology.today · 7 months ago

          By default, you assume that the people around you are at least capable of caring what you have to say. I wonder what would happen if you took that assumption away.

          Let’s say the latest flu virus has a side effect that disables that feature in a significant number of affected individuals. Suddenly millions of people are literally unable to actually care about other people. That would make casual conversations a bit of a gamble, because you can’t really be sure whether you’re talking to a normal person or not. Maybe people wouldn’t want to take that gamble at all. What if that forced social norms to change, so that human interactions would no longer come with this assumption pre-installed?

          As a side note, that kind of a virus would probably also put humanity back to the stone age. Being motivated to work together, care about others and act selflessly is a fundamental part of human civilization.

          • Thorny_Insight@lemm.ee · 7 months ago

            Even in that case, they would still be conscious individuals. I’m not sure if “caring” is even the correct term here. For example, I talk to my pet gerbils despite the fact that I know they don’t care. They don’t even understand a single thing I’m saying. It doesn’t matter. They’re still individuals with personalities that have some sort of subjective experience of the world. They look back at me and know I’m there, and I know they can hear my voice too.

            This is not the case with an LLM. There’s no one there. Not only does it not care, it doesn’t even know you exist. Talking to it is not talking to an individual. It’s like asking a group of scientists how they’re doing today: any reply you get back is completely meaningless, so why bother even asking?

            I think it’s consciousness that’s the relevant factor here. You need to at least get the sense of talking to something that can experience things, even if it is in fact not conscious. Otherwise it’s just a more advanced version of a stereo that is programmed to say “hello” when you turn it on.

          • Murdoc@sh.itjust.works · 7 months ago

            Saw that in a sci-fi RPG called Living Steel: an alien bioweapon unleashed on a human space colony, called VISR, or Viral Induced Sociopathic Response. It was interesting.

            • chaosCruiser@futurology.today · 7 months ago

              Interesting. I assume that it resulted in lots of mayhem and destruction.

              Anyway, goes to show that even my most original ideas have already been done. Usually several decades before I was born.