• maniclucky@lemmy.world

    Ooooooh. Ok that makes sense.

    With that said, you might look at researchers using AI to come up with new, useful ways to fold proteins, and to advance biology in general. The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

    For qualitative examples we always have hallucinations, and that’s a poorly understood mechanism that may well be able to create actual creativity. But it’s the nature of AI to remain within (or close to within) the corpus of knowledge it was trained on. Though that leads to “nothing new under the sun,” so I’ll stop rambling now.

    • rottingleaf@lemmy.world

      The roadblock, to my understanding (data science guy, not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

      Yes.

      But it’s the nature of AI to remain within (or close to within) the corpus of knowledge it was trained on.

      That’s fundamentally solvable.

      I’m not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.

      What all these companies like DeepSeek and OpenAI have been doing lately with their “chain-of-thought” models is, in my opinion, what they should have been focused on all along: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms from those syllogisms? There seems to be a chicken-and-egg problem between logic and algebra: each seems necessary for the other in such a system, yet they depend on each other (for a machine, that is; humans keep a few things constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.
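      To make the “generate and check syllogisms” idea concrete, here is a minimal toy sketch of that checking step. Everything in it (the fact/rule representation, the function name) is hypothetical, not any real system’s API; it just forward-chains one “all X are Y” rule over known facts:

```python
# Toy syllogism checker: forward-chain a single "all X are Y" rule
# over a set of (entity, category) facts, then test a query.
# All names and data structures here are illustrative, not a real API.

def check_syllogism(facts, rule, query):
    """facts: set of (entity, category) pairs, e.g. ("socrates", "man").
    rule: (antecedent, consequent), e.g. ("man", "mortal") for
    "all men are mortal". Returns True if query is derivable."""
    derived = set(facts)
    for entity, category in facts:
        if category == rule[0]:
            # Apply the rule: entity is in the antecedent category,
            # so it is also in the consequent category.
            derived.add((entity, rule[1]))
    return query in derived

facts = {("socrates", "man")}
rule = ("man", "mortal")
print(check_syllogism(facts, rule, ("socrates", "mortal")))  # True
print(check_syllogism(facts, rule, ("plato", "mortal")))     # False
```

      A real symbolic logic layer would of course need quantifiers, negation, and iterated chaining; the point is only that the “check” step is cheap and exact once the knowledge is structured.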

      • maniclucky@lemmy.world

        I’m not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.

        Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoy me to no end.

        While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke and Broca areas of the brain. Instead, it seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose the thing that sticks in my craw is the giveaway that they’ve promised this one technique (more or less; I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know a neural network is hypothetically a universal function approximator, but the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).
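        A toy sketch of that “bridge” architecture, with the LLM as a language front-end (the Wernicke/Broca analogue) and a separate specialized module doing the reasoning. Every function here is a hypothetical stand-in, not a real model:

```python
# Hypothetical pipeline: language module in -> reasoning module -> language
# module out. The parsing/rendering stand-ins are trivial; in the sketched
# architecture they would be an LLM, while the reasoner stays exact.

def language_in(text):
    # LLM stand-in: parse "Is X a Y?" into a structured query.
    words = text.rstrip("?").lower().split()
    return {"subject": words[1], "category": words[3]}

def reasoner(query, knowledge):
    # Specialized module: answer exactly from an explicit knowledge base.
    return query["category"] in knowledge.get(query["subject"], set())

def language_out(answer):
    # LLM stand-in again: render the structured result as text.
    return "Yes." if answer else "I don't know."

knowledge = {"socrates": {"man", "mortal"}}
query = language_in("Is socrates a mortal?")
print(language_out(reasoner(query, knowledge)))  # Yes.
```

        The design point is the division of labor: the fuzzy components only translate between text and structure, while the part that has to be right is a small, checkable module.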

        Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.