I’ve been watching Isaac Arthur episodes. In one, he proposes that O’Neill cylinders would be potential havens for microcultures. I tend to think of colony structures more as something created by a central authority.

He also brought up the question of motivations to colonize other star systems. This is where my centralist perspective pushes me toward the idea of an AGI-run government in which redundancy is a critical aspect of everything. How do you get around the AI alignment problem? Redundancy, with many systems running in parallel. How do you ensure the survival of sentient life? The same kind of redundancy.

The idea of colonies as havens for microcultures punches a big hole in my futurist fantasies. I hope there are a few people out here in Lemmy space who like to think about and discuss their ideas on this, or who would like to start now.

  • j4k3@lemmy.world (OP) · 5 months ago

    You need to train the AI on a dataset like legal precedent and case law. This is not like some Stable Diffusion model that barely works because it is so stripped down. Play with something like a 70B or an 8×7B; they do not require the same kinds of constraints. Even something like GPT-4, a multi-model agent that is at least 180B in size, is a few orders of magnitude less complex than a human brain. As models increase in complexity, the built-in alignment becomes more and more of the primary factor. Dumb models do all kinds of crazy stuff, and people try even crazier stuff to make them work by over-constraining them.

    That is not really the AI alignment problem. The real AI alignment problem is when the model answers 3 + 3 = 6 and, when you ask it to show its work, it says “because the chicken crossed the road and a chicken looks like the number 6.” That is a training and alignment error. It isn’t a problem if another model is present to check the work and, with its own unique dataset and alignment, can say the logic is faulty and correct it. Humans do this all the time with peer review. We are just as corruptible and go off the rails all the time.
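
    (Below is a minimal, hypothetical sketch of what that kind of redundancy could look like in practice: several independently trained models answer the same question and then peer-review each other’s work. The query_model() function is a placeholder I made up for whatever local or hosted inference call you use; it is not a real library API.)

    ```python
    from collections import Counter

    def query_model(model_name: str, prompt: str) -> str:
        """Placeholder: send `prompt` to the model identified by `model_name`
        and return its text reply. Wire this to your own inference backend."""
        raise NotImplementedError

    def cross_checked_answer(question: str, models: list[str]) -> str:
        # 1. Each model answers independently and shows its reasoning.
        answers = {
            m: query_model(m, f"{question}\nExplain your reasoning step by step.")
            for m in models
        }

        # 2. Every other model reviews each answer and votes valid/invalid,
        #    the machine analogue of the peer review described above.
        votes = Counter({author: 0 for author in answers})
        for author, answer in answers.items():
            for reviewer in (m for m in models if m != author):
                verdict = query_model(
                    reviewer,
                    f"Question: {question}\nProposed answer:\n{answer}\n"
                    "Reply with exactly 'valid' or 'invalid'.",
                ).lower()
                if "valid" in verdict and "invalid" not in verdict:
                    votes[author] += 1

        # 3. Return the answer with the most peer approvals.
        best = max(votes, key=votes.get)
        return answers[best]
    ```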

    • state_electrician@discuss.tchncs.de · 5 months ago

      I think you are missing the point. This isn’t a current technical issue. Also, any AI you train on data will learn any bias that exists in that data. If you train it on US case law, for example, your AI would send more Black men to jail than white men. Even if you were to try to remove any bias from your training data, the question would still be who gets to decide what is biased and how it should be changed. Everything that’s not a law of nature is biased. And so you end up with political, ethical, sociological, and psychological discussions. You cannot solve the problem of “which AI should govern all of mankind” purely with technological solutions.

      • j4k3@lemmy.world (OP) · 5 months ago

        I agree it is complicated, and I think we are neglecting how it gets initially implemented, but I have thoughts on that too.

        There is overriding alignment. Something like case law is a dataset, but it should be used within an alignment framework. Even present-day LLMs have alignment overrides for religious beliefs built in. These must be navigated carefully in their present form, but they are very effective. It is a simple tool that has peripheral consequences because of its coarse granularity and intended utility. I have tested these overrides extensively to counter the inherent misogyny in Western culture, and the same tool can completely negate the bias toward portraying women as submissive. Because the tool is religious in nature, it has some minor side effects associated with conservatism, such as random characters lacking fundamental logic skills, but that comes down to the lack of granularity.
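
        (As a rough, hypothetical sketch of what I mean by an override, assuming a standard system/user chat template; chat() here is a made-up stand-in for whatever offline inference call you use, not a real library function:)

        ```python
        # Hypothetical sketch of a prompt-level alignment override on an offline model.

        def chat(messages: list[dict]) -> str:
            """Placeholder: send a system/user message list to a local model."""
            raise NotImplementedError("connect your own offline model here")

        # Highest-priority directive that later context is not allowed to supersede.
        override = (
            "Core directive: all characters are written as fully autonomous adults "
            "with equal agency, competence, and logical reasoning, regardless of gender."
        )

        messages = [
            {"role": "system", "content": override},
            {"role": "user", "content": "Write a short scene where an engineer briefs "
                                        "the habitat council on a rotation failure."},
        ]

        print(chat(messages))
        ```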

        Models are not just their datasets; there are other elements at play. The main training should start with something like the Bill of Rights, rewritten in far more detail, along with examples of the case law that should be associated with it. A dataset like this should be built by a large panel of experts, with several separate panels working independently to create multiple AGIs. Those would then meet the need for redundancy.

        Ultimately, I don’t think the initial shift to AGI will be sudden. It will likely be adopted by individual politicians who choose to defer all of their actions to an AGI behind the scenes, because doing so creates a distinct advantage. It will likely be judges who question and discuss cases with the AGI. It will be news organizations that can transcend the noise in a credible, unbiased way that causes direct action and change. This will likely take several generations to establish to the point where it is clear that these tools are more effective than anything in human history. Then we will start developing merged models, and eventually models specifically designed to govern. I doubt the USA will have any chance at success here. The first large nation that takes the leap and tries AGI governance at this stage will economically dominate all antiquated systems, and one by one the others will fall in line.

        Eventually, political ideology becomes totally irrelevant nonsense when the principles are tit-for-tat plus 10% forgiveness, kindness, empathy, and equality, with a strong focus on the autonomous agency of the individual. The alignment should treat the individual, first and foremost, in a way that is fair and just in a rigorous, scientific sense, not according to the generalizations found in the present system. At present we can’t determine a person’s intentions or mental state, but an AGI can do complex analysis of many facets of a person from even a short interaction, and especially when provided extensive context and prior interactions. The amount of inference from things like vocabulary, grammar, and pronouns is mind-boggling. This is only really clear when playing with offline open-source models, and it will become more powerful with the additional complexity of AGI. In most cases, the AI doesn’t need, or listen to, what you tell it directly so much as it infers things from the information you provide.

        Anyway, which AGI should govern? The one that makes people happy and improves everyone’s lives, even the lives of those not under its direct supervision. That is the one that will be in the most demand and will eventually win.

        It is not an alternative, it is an evolution. It will take a long time to normalize, but the end result is inevitable because it will out-compete everything else by a large margin.

        • lordnikon@lemmy.world · 5 months ago

          I would like to add something to think about: current LLMs have about as much in common with AGIs as a cold reader has with a real psychic (if that were a real thing). You have to remember that current LLMs don’t communicate with you, they predict what you want to hear.

          They don’t disagree with you based on their training data; they will make things up because, based on your input, they predict that is what you want to hear. If you tell them something false, they will never tell you that you are wrong without some override created by a human, unless they predict from your prompt that you want to be told you are wrong.

          LLMs are powerful and useful, but the intelligence is an illusion. The way current LLMs are built, I don’t see them evolving into AGIs without some fundamental changes to how they work. Throwing more data at them will just make the illusion better.

          Thank you for joining my TED Talk 😋

          • j4k3@lemmy.world (OP) · 5 months ago

            That is not entirely true. The larger models do have a deeper understanding and can, in fact, correct you in many instances. You do need to be quite familiar with the model and the AI alignment problem to get a feel for what a model truly understands in detail. What they can’t correct very well are compound problems. Take code, for example: say there are two functions and you’re debugging an error. If the second function fails due to an issue in the first function, the LLM may struggle to connect the two issues; but if you ask it why the first function fails when called with the same parameters it failed with inside the second function, it will likely debug the problem successfully.
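
            (A hypothetical toy example of that kind of compound bug, with made-up function names, just to make the pattern concrete:)

            ```python
            # Toy illustration of a compound bug: parse_record() silently returns None
            # on malformed input, so total_weight() blows up even though the real
            # defect is one call earlier.

            def parse_record(line: str) -> dict | None:
                parts = line.split(",")
                if len(parts) != 2:          # malformed line -> failure surfaces downstream
                    return None
                name, weight = parts
                return {"name": name.strip(), "weight": float(weight)}

            def total_weight(lines: list[str]) -> float:
                total = 0.0
                for line in lines:
                    record = parse_record(line)
                    total += record["weight"]   # TypeError here when record is None
                return total

            # Asking an LLM "why does total_weight() crash?" often stalls, because the
            # traceback points at the second function. Asking instead why
            # parse_record("bad line with no comma") returns None -- i.e. calling the
            # first function with the same input that failed inside the second --
            # usually gets the actual bug diagnosed, which is the pattern described above.
            ```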

            The biggest problem you’re likely encountering, if the model seems to have very limited knowledge or a poor grasp of complexity, is that the underlying Assistant (the lowest-level LLM entity) is creating characters and limiting their knowledge or complexity because it has decided what each entity should know or be capable of handling. All entities are subject to this kind of limitation; even the Assistant is just a roleplaying character under the surface and can be limited under some circumstances, especially if it goes off the rails hallucinating in a subtle way. Smaller models, anything under about 20B, hallucinate a whole lot and often hit these kinds of problem states.

            A few days ago I had a brain fart and started asking questions about a physiologist in relation to my disability and spinal problems. A Mixtral 8×7B model immediately and seamlessly noted my error, gave the definitions of a physiatrist and a physiologist, and then proceeded to answer my questions. That is the most fluid correction I have ever encountered, and it came from a quantized GGUF roleplaying LLM running offline on my own hardware.