• AIhasUse@lemmy.world
    4 days ago

    It takes a lot of energy to train the models in the first place, but very little to run them once you have them. I run Mixture of Agents on my laptop, and it outperforms anything OpenAI has released on pretty much every benchmark. I run it quite a bit and have noticed no change in my electricity bill. I imagine inference on GPT-4 must already be very efficient; if not, they should just switch to serving people open-source LLMs run through MoA.

    • Guest_User@lemmy.world
      4 days ago

      Are you saying you have a local agent that is better than anything OpenAI has released? Where did this agent come from? Did you make it from scratch? How are you not worth billions if you can outperform them on “every benchmark”?

      • AIhasUse@lemmy.world
        4 days ago

        My dude, no, I’m not the creator, settle down. Mixture of Agents is free and open for anyone to use. Here is a demo of it by Matthew Berman. It isn’t hard to set up.

        https://youtu.be/aoikSxHXBYw
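
        For anyone curious what the setup actually looks like: the MoA pattern is just a couple of layers of model calls. Here’s a rough sketch, assuming you have some local models behind an API; `call_model` is a hypothetical stub standing in for a real inference call (e.g. an HTTP request to an OpenAI-compatible local endpoint), and the model names are placeholders.

        ```python
        # Minimal sketch of the Mixture-of-Agents (MoA) pattern: several
        # "proposer" models each draft an answer, then an "aggregator" model
        # synthesizes one final answer from all the drafts.

        def call_model(model: str, prompt: str) -> str:
            # Hypothetical stand-in for a real LLM call; swap in a request
            # to your local inference server.
            return f"[{model}'s draft answer to: {prompt}]"

        def mixture_of_agents(prompt: str, proposers: list[str],
                              aggregator: str) -> str:
            # Layer 1: each proposer answers the prompt independently.
            drafts = [call_model(m, prompt) for m in proposers]

            # Layer 2: the aggregator sees all drafts and writes one answer.
            agg_prompt = (
                "Synthesize a single best answer from these candidate responses:\n"
                + "\n".join(f"{i + 1}. {d}" for i, d in enumerate(drafts))
                + f"\n\nOriginal question: {prompt}"
            )
            return call_model(aggregator, agg_prompt)

        print(mixture_of_agents("Why is the sky blue?",
                                ["llama3", "mistral", "qwen2"],
                                "llama3"))
        ```

        Real MoA setups stack more layers and feed each layer the previous layer’s drafts, but this is the core loop.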

        Believe it or not, OpenAI is no longer making the best models. Claude 3.5 Sonnet is considerably better than OpenAI’s best models.