• Wheaties [she/her]@hexbear.net · 11 months ago

    You are ~30 trillion cells all operating concurrently with one another. Are you suggesting that is in any way similar to a Turing machine?

    • DahGangalang@infosec.pub · 11 months ago

      Yes? I think that depends on your specific definition and requirements of a Turing machine, but I think it’s fair to compare the amalgamation of cells that is me to the “AI” LLM programs of today.

      While I do think that the complexity of input, output, and “memory” of LLM AIs is limited in current iterations (which makes the comparison to “human” intelligence feel far-fetched), I do think the underlying process is fundamentally comparable.

      The things that make me “intelligent” are just a robust set of memories, lessons, and habits that allow me to assimilate new information and experiences in a way that makes sense to (most of) the people around me. (This abstracts away the fact that the process is largely governed by chemical reactions, but considering that consciousness appears to be just a particularly complicated chemistry problem, I think that reinforces the point I’m trying to make.)

      • Wheaties [she/her]@hexbear.net · 11 months ago

        My definition of a Turing machine? I’m not sure you know what Turing machines are. It’s a general-purpose computer, described in principle. And, in principle, a computer can only carry out one task at a time. Modern computers are fast, and they may have several CPUs stitched together and operating in tandem, but they are still fundamentally limited by this. Bodies don’t work like that. Every part of them is constantly reacting to its environment and its neighboring cells - concurrently.
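
        To make the “one task at a time” point concrete, here is a minimal sketch of a Turing-machine-style step loop in Python (the names and the rule table are hypothetical, purely for illustration). However long the tape gets, exactly one read, one write, one head move, and one state change happen per step:

        ```python
        # Hypothetical sketch: a Turing-machine-style interpreter.
        # Exactly one transition fires per pass through the loop --
        # the sequential constraint described above.
        def run(tape, rules, state="start", head=0):
            cells = dict(enumerate(tape))  # sparse tape; missing cells read as "_"
            while state != "halt":
                symbol = cells.get(head, "_")
                # one lookup, one write, one move, one state change per step
                write, move, state = rules[(state, symbol)]
                cells[head] = write
                head += 1 if move == "R" else -1
            return [cells[i] for i in sorted(cells)]

        # Example rule table: flip every bit, halt on the first blank.
        rules = {
            ("start", "0"): ("1", "R", "start"),
            ("start", "1"): ("0", "R", "start"),
            ("start", "_"): ("_", "R", "halt"),
        }
        print(run("1011", rules))  # ['0', '1', '0', '0', '_']
        ```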

        You are essentially saying, “Well, the hardware of the human body is very complex, and this software is(n’t quite as) complex; so the same sort of phenomenon must be taking place.” That’s absurd. You’re making a lopsided comparison between two very different physical systems. Why should the machine we built for doing sums just so happen to reproduce a phenomenon we still don’t fully understand?

        • DahGangalang@infosec.pub · edited · 11 months ago

          That’s not what I intended to communicate.

          I feel the Turing machine portion is not particularly relevant to the larger point. Not to belabor the point, but to be as clear as I can: I don’t think, nor intend to communicate, that humans operate in the same way as a computer; I don’t mean to say that we have a CPU that handles instructions in a (more or less) one-at-a-time fashion, with specific arguments that determine the flow of data, as a computer would do with assembly instructions. I agree that anyone arguing human brains work like that is missing a lot in both neuroscience and computer science.

          The part I mean to focus on is the models of how AIs learn, specifically in neural networks. There might be some merit in likening a cell to a transistor/switch/logic gate for some analogies, but for the purposes of talking about AI, I think comparing a brain cell to a node in a neural network is most useful.

          The individual nodes in a neural network each have minimal impact on converting input to output, yet each one does influence the processing from one to the other. And with the way we train AI, how each node tweaks the result will depend solely on the past input that has been given to it - roughly as in the sketch below.
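
          As a toy sketch of what I mean (hypothetical Python, not any particular AI framework): a single node is just a weighted sum pushed through an activation, and a training step nudges each of its weights slightly, in proportion to the input it was shown.

          ```python
          import math

          # Hypothetical toy node: weighted sum squashed through a sigmoid.
          def node(inputs, weights, bias):
              z = sum(i * w for i, w in zip(inputs, weights)) + bias
              return 1 / (1 + math.exp(-z))

          # One training step: nudge each weight a little toward the target,
          # in proportion to the input that fed it -- the "tweak" described above.
          def train_step(inputs, weights, bias, target, lr=0.1):
              out = node(inputs, weights, bias)
              grad = (out - target) * out * (1 - out)  # sigmoid derivative chain
              weights = [w - lr * grad * i for w, i in zip(weights, inputs)]
              return weights, bias - lr * grad

          weights, bias = [0.5, -0.3], 0.0
          for _ in range(1000):  # repeated exposure to the same example
              weights, bias = train_step([1.0, 2.0], weights, bias, target=1.0)
          print(node([1.0, 2.0], weights, bias))  # output is now close to 1.0
          ```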

          Our brains, when met with a situation, process information in a comparable way: any given input is processed by a practically uncountable number of neurons, each influencing our reactions (emotional, physical, chemical, etc.) in minuscule ways based on how our past experiences have “treated” those individual neurons.

          In that way, I would argue that the processes by which AIs are trained and operated are comparable to those of the human mind, though they do seem to lack its complexity.

          Ninjaedit: I should proofread my post before submitting it.

          • Wheaties [she/her]@hexbear.net · 11 months ago

            I agree that there are similarities in how groups of nerve cells process information and how neural networks are trained, but I’m hesitant to say that’s the whole picture of the human mind. Modern anesthesiology suggests microtubules, structures within cells, also play a role in cognition.

            • DahGangalang@infosec.pub · 11 months ago

              Right.

              I don’t mean to say that the mechanism by which human brains learn and the mechanism by which AI is trained are 1:1 directly comparable.

              I do mean to say that the process looks pretty similar.

              My knee-jerk reaction is to analogize it as comparing a fish swimming to a bird flying. Sure, there are some important distinctions (e.g. birds need to generate lift while fish can rely on buoyancy), but in general the two do look pretty similar (i.e. they both take a fluid medium and push against it to generate thrust).

              And so with that, it feels fair to say that learning - the storage and retrieval of memories/experiences, and the way that stored information shapes our subconscious (and probably conscious, too) reactions to the world around us - seems largely comparable to the processes that underlie the training of “AI” and LLMs.