• 0 Posts
  • 9 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • We have three thousand years of tradition in philosophy of mind; we have a clear idea. It’s just somewhat complex and difficult to grapple with, and there is still room for development and understanding. But this is like saying that we don’t have a clear philosophy of physics just because quantum physics is hard and there are things we don’t fully understand yet. As for non-human agents, what even is that? Are dogs non-human agents? Fish? Viruses? Computers are just the newest addition to the list of non-human agents we have philosophized about, and we probably understand the minds of other, relatively simple life forms better than our own. Definitions and semantics are always being stressed and are always breaking; that’s what symbols are for, that’s one of their main defining use cases. Go talk to a north-east African about rizz and tell me how that goes.


  • They are open sourcing it, just keeping a proprietary license on it. Yes, it’s weird, but it is not unheard of. The Unreal game engine’s entire source code is open: anyone can read it or submit changes to it, and even make changes and distribute said changes. But it’s still a proprietary product owned by Epic Games, and commercial use is strictly controlled under the licensing terms. Open doesn’t mean Free (as in beer) or Freedom (as in licensing); those are three different things. It’s just that people have come to associate the term open source with the entire Free and Open Source Software philosophy. But they aren’t the same thing.

    ZDNET is wrong; Winamp is open sourcing its code. The article is obtuse and refuses to elaborate on, or provide reasons for, its claim that Winamp isn’t open sourcing.

    it cannot be open source with that level of corporate control

    Why?

    Not only can it, we have several examples of corporate products that are open source with precisely this level of control.

    Whether open source requires a specific license is a decades-old debate that continues to this day. We have like a million different licenses, and people argue and bicker all the time about which ones are Truly Open Source™ and which ones aren’t. It’s all legalese that gives most people a headache. But there’s one crux to this whole thing: open source does not preclude commercialization of software. This is why people are proposing the term source-available software. Winamp might go for that model and the debate would still go on.


  • Not really. Reality is mostly a social construction. If there is no other to check against and bring about meaning, there is no reality, and therefore no hallucinations; more precisely, everything is a hallucination. Since we cannot cross-reference reality with an LLM, and it cannot correct itself to conform to our reality, it will always hallucinate and will only coincide with our reality by chance.

    I’m not conflating tokens with anything; I explicitly said they aren’t an internal representation. They’re state and nothing else. LLMs don’t have an internal representation of reality, and they probably can’t have one, given their current way of working.



  • This right here is also the reason why AI fanboys get angry when they are told that LLMs are not intelligent or even thinking at all. They don’t understand that, in order for rational intelligence to exist, an LLM would need an internal, referential inner world of symbols to contrast external input (training data) against, one that is also capable of changing and molding itself to reality and to truth criteria. No, tokens are not what I’m talking about. I’m talking about an internally consistent and persistent representation of the world, an identity, which is currently antithetical to the information model used to train LLMs. Let me try to illustrate.

    I don’t remember the details or the technical terms, but essentially, animal intelligence needs to experience a lot of things first hand in order to build an individualized model of the world, which is then used to direct behavior (language is just one form of behavior, after all). This is very slow and labor intensive, but it means that animals, when they do get good, are extremely good at adapting those skills to a messy reality. LLMs are transactional: they rely entirely on correlating patterns within their training input (see the little sketch at the end of this comment). As a result they don’t need years of experience, like humans do, to develop skilled, intelligent-looking responses; they can do it in hours of ingesting training input instead. But at the same time, they can never be certain of their results, and when faced with reality they crumble, because it is much harder for them to adapt intelligently and effectively to the mess of the real world.

    LLMs are a solipsism experiment. A child is locked in a dark cave with nothing but a dim light and millions of pages of text; assume immortality and no need for food or water. As there is nothing else to do but look at the text, they eventually develop the ability to understand how the symbols marked on the pages relate to each other, how they are usually and typically assembled one next to the other. One day, a slit in the wall opens and the person receives a piece of paper with a prompt, a pencil, and a blank page. Out of boredom, the person looks at the prompt, recognizes the symbols and the pattern, and starts assembling symbols on the blank page with the pencil. They are just trying to continue from the prompt whatever they think would typically or should follow. The slit in the wall opens again, and the person intuitively pushes the page they just wrote through it.

    For the people outside the cave, leaving prompts and receiving the novel pieces of paper, it looks like intelligent linguistic construction: it is grammatically correct, the sentences are correctly punctuated and structured, the words make sense, and it says intelligent things in accordance with the training text left inside and the prompt given. But once in a while it seems to hallucinate weird passages. They miss the point that it is not hallucinating; it simply has no sense of reality. Its reality is just the text. If the cave were opened and the person trapped inside were let out into the light of the world, they would still be profoundly ignorant about it. Given the word sun written on a piece of paper, they would have no idea that the word refers to the bright, burning ball of gas above them. They would know the word, they would know how it is usually assembled into text next to other words, but they would not know what it is.

    LLMs are just like that, except they aren’t actually intelligent the way the person in this thought experiment is, because there is currently no way for these models to sense the real world, or to correlate several sources of sensors into an internal, mentalese model. As I understand it, this is the crux of it and the biggest open problem in the field of AI.
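
    To make the pattern-correlation point concrete, here is a minimal toy sketch of the idea (my own illustration, nothing like a real LLM’s internals): a bigram “model” that only records which word tends to follow which in its training text, then continues a prompt from those counts. The training text and names here are made up for the example.

    ```python
    import random
    from collections import defaultdict

    # Toy "training corpus": the model only ever sees these symbols.
    training_text = (
        "the sun is a bright ball of gas . "
        "the sun rises in the east . "
        "the cave is dark and the text is all there is ."
    )

    # Record, for every word, which words have followed it and how often.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def continue_prompt(prompt, length=10):
        """Extend the prompt by repeatedly picking a word that typically follows."""
        out = prompt.split()
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break  # this word was never followed by anything in training
            out.append(random.choice(candidates))
        return " ".join(out)

    print(continue_prompt("the sun"))
    # e.g. "the sun rises in the east . the sun is a bright ..."
    ```

    The output can look fluent, but the program has never seen the sun; it has only ever seen the word. A real transformer is incomparably more sophisticated, yet the grounding gap described above is the same kind of thing: the model only learns which symbols co-occur, never what the symbols stand for.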



  • I agree, but that has no bearing on my first comment. Common law and civil law are two fundamentally different systems, and Apple always seems to operate as if it were dealing with a US court: being petty, throwing tantrums, trying to cheat and weasel its way out. Sure, the European courts have their share of corruption, but Apple doesn’t seem to have found those levers yet.

    Mega corporations aren’t magical, all-powerful, infallible entities either. They’re just a bunch of entitled twats with too much money, and they’re no better at making decisions than the average Joe.