• Rhaedas@kbin.social · 1 year ago

    Hallucinations come from the training weighting the model toward producing an answer that looks satisfactory, whether or not it's correct. A future AGI, or an LLM guided by one, could look at the human responses and work out why the answers weren't good enough, but current LLMs can't do that. I'll admit I don't know how the longer-memory versions work, but there's still no actual thinking; they're probably just wrapping up previously generated text along with the new request to steer toward a closer answer.
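    That "wrapping up previous text with the new request" can be sketched very simply. This is a hypothetical illustration, not any real LLM API: `generate()` is a stand-in for a model call, and the point is that the only "memory" is the transcript re-sent on every turn.

    ```python
    # Hypothetical sketch: chat "memory" as simple prompt concatenation.
    # generate() stands in for an LLM completion call; it is not a real API.
    def generate(prompt: str) -> str:
        # Placeholder: a real system would call a language model here.
        return f"[response to {len(prompt)} chars of context]"

    def chat(history: list[str], user_message: str) -> str:
        # The model keeps no persistent state: each turn re-sends the
        # accumulated transcript along with the new request, so earlier
        # outputs influence the next answer only through the prompt.
        history.append(f"User: {user_message}")
        prompt = "\n".join(history)
        reply = generate(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    history: list[str] = []
    chat(history, "Summarize the report.")
    chat(history, "Now shorten that summary.")  # prior output is in the prompt
    ```

    Nothing here reasons about why an earlier answer was bad; the transcript just biases the next completion.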

    • jadero@programming.dev

      I wonder how creative these things are. Somewhere between “hallucination” and fully verifiable correct answers based on current knowledge, there might be a “zone of creativity.”

      I would argue that there is no such thing as something completely from nothing. Every advance builds on work that came before, often combining bodies of knowledge in disparate fields to discover new insights.

      • Rhaedas@kbin.social

        At the moment their creativity seems to lie in a controlled mixing of previous things, which fits some definitions of creativity, such as artistic images or some literature. It works less well for things that require precision, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what to add and subtract. Right now human feedback on the results is still important. I can't think of any example where we've successfully unleashed LLMs into the world confident enough in their output not to filter it. It's still only a tool of generation, albeit a very complex one.

        What's troubling throughout the whole explosion of LLMs is how the safety of their potential is still an afterthought, or a "we'll figure it out" mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I'm not even sure I can say they haven't sparked something, as an AGI that gains awareness fast enough sure isn't going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…

        • jadero@programming.dev

          I like your comment regarding the (usually) thoughtful effort that goes into creative endeavours. I know that there are those who claim that deliberate effort is antithetical to the creative process, but even serendipitous results have to be deliberately examined and refined. Until a system can say “oh, that’s interesting enough to investigate further” I’m not convinced that it can be called creative. In the context of LLMs, I think that means giving them access to their own outputs in some way.

          As for the dangers, I'm pretty sure that most of us, even those of us looking for danger, will not recognize it until we see it. That doesn't mean we should just barrel ahead, though. Just the opposite: we need to move slowly, because our reflexes and analytical capabilities are slow compared to the potential rate of development.

          • Rhaedas@kbin.social

            "In the context of LLMs, I think that means giving them access to their own outputs in some way."

            That's what the AutoGPTs do (as well as many others; there are so many now): they break the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-shot prompt. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviation.
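            The loop described above can be sketched in a few lines. This is a hypothetical simplification of the AutoGPT pattern, not its real code: `llm()` is a stand-in for a model call, and the steps are passed in rather than proposed by the model, just to keep the sketch deterministic.

            ```python
            # Hypothetical sketch of an AutoGPT-style loop: decompose a task,
            # run each step, and feed accumulated results back into the next prompt.
            # llm() is a stand-in for a real model call.
            def llm(prompt: str) -> str:
                # Placeholder "model": echoes the last line of its context.
                return f"result({prompt.splitlines()[-1]})"

            def run_agent(task: str, steps: list[str]) -> str:
                # In a real agent the model itself proposes the next step;
                # here the steps are supplied so the sketch is self-contained.
                scratchpad = [f"Goal: {task}"]
                for step in steps:
                    prompt = "\n".join(scratchpad + [f"Next step: {step}"])
                    scratchpad.append(llm(prompt))  # output becomes future context
                return scratchpad[-1]

            final = run_agent("write a report", ["outline", "draft", "edit"])
            ```

            The scratchpad is what keeps the model "on course": every step's output is re-fed as context for the next one.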