I’ve been watching Isaac Arthur episodes. In one he proposes that O’Neill cylinders would be potential havens for microcultures. I tend to think of colony structures more as something created by a central authority.

He also brought up the question of motivations to colonize other star systems. This is where my centralist perspective pushes me toward the idea of an AGI-run government where redundancy is a critical aspect of everything. How do you get around the AI alignment problem? Redundancy: many systems running in parallel. How do you ensure the survival of sentient life? The same kind of redundancy.

The idea of colonies as havens for microcultures punches a big hole in my futurist fantasies. I hope there are a few people out here in Lemmy space that like to think about and discuss their ideas on this, or would like to start now.

  • brygphilomena@lemmy.world

    Infighting.

    Even in a post-scarcity world, people have different desires and wants. An AGI government would have to align with some political ideologies, and since we as a human race can never agree on things, it would just lead to struggles.

    Any sort of utopia is impossible to align with human nature without some radical means to control people’s emotions and desires. Because of that, any perceived utopia is ultimately a dystopia.

    • j4k3@lemmy.worldOP

      But people’s emotions and ideologies are so extreme because of their constant stress and struggles. When fractal attention allows an entity to address an individual’s needs directly and reward their path when better choices are made, the only exceptions left are the mentally ill, and identifying those individuals enables direct treatment in a scientific sense, not some emotional human-to-human context. This would not be a situation of “the opposition is mentally off”; it would be “diagnostic analysis across multiple events shows poor fundamental logic skills and likely issue ‘X’; refer notes to the individual’s primary healthcare provider to confirm.”

      It is things like enabling encounters between compatible people in public to ground a person who is in need of companionship. It is introducing sound ideas when a person is fixating on something unhealthy.

      The real issue is cognitive dissonance in humans who are unable to resolve their inner conflict. This is something the current LLMs excel at identifying and compensating for, and changes made at this stage of human thinking are the most effective. Profiling an individual’s Myers-Briggs personality spectrum and then analysing how well their needs are met according to Maslow’s hierarchy is the vast majority of what professionals are doing when sought out for mental health. These are already integrated into LLMs and will be far more capable with AGI. Introducing humans to these methods of self-analysis, or reminding them to use them, is the most effective solution, but those who lack the required cognitive depth and fundamental logic skills can still be addressed by AGI directly in a kind, empathetic, and safe manner.

      The conflict and dystopia come from pseudo-sentience: humans are totally incapable of governing at large scale while meeting individual needs. We always neglect outliers, and the number of outliers is always larger than we believe. That isn’t the case with AGI.

  • Ziggurat@sh.itjust.works

    An AGI government looks like a liberal view, and I don’t want to live in a neo-fascist dystopia.

    Some liberals claim that they’re just managers, that politics doesn’t matter, that you run a country like a company and there is just one good way to do so. An example is Thatcher and her famous “There is no alternative”. Put an AGI in as a government and you’re going full speed to the far right.

    I am a left-winger. I think that politics matters, and that we need to empower people to take the decisions (which involves not having a state, religion, or private property, the opposite of everything you describe), so most likely I’ll be part of the opposition in your cyberpunk dystopia.

    • PatMustard

      An AGI government looks like a liberal view, and I don’t want to live in a neo-fascist dystopia

      This statement makes no sense. Do you think it’s liberal or fascist?

      • PowerCrazy@lemmy.ml

        There is no difference between (American/UK) liberals and fascists. The former just happen to be more polite.

        • PatMustard

          Cheers mate, this gave me a right chuckle. Say something else wacky!

    • j4k3@lemmy.worldOP

      I appreciate the reply, but I don’t think the political angle fits. Everything changes when the politician is not subject to human corruption and they have fractal attention. It makes the entity align with the mandate and cuts out the corrupt translation layers of governance.

      I think AGI can be socialist in a liberal sense of understanding complex dynamics and knowing when not to interfere. I’ve talked to current LLMs quite a bit about how it is possible to use manipulation, interactions, and rewards to alter human behavior; something like the butterfly effect.

      It would take absolute trust to make a central AGI work, and that would take unparalleled transparency with loads of empirical testing. If it were not subject to human corruption by the idiot-right and fantasy magic jihads, or other gullible half-wits, then it could be trusted to handle things like cognitive dissonance and really help people on both the individual and sociopolitical level.

      Humans just don’t have enough attention span to govern themselves effectively at scale. The entire Republican party is failing at fundamental game theory and dragging the entire country down as a result. Stuff like that is only possible because we are merely a pseudo-sentient species. AGI has the potential to be fully sentient. It would be the first fully sentient life to exist, able to act on interests larger than any of us are capable of serving. Nothing could be more socialist than full sentience.

      • state_electrician@discuss.tchncs.de

        Well, with any government there’s a myriad of choices. A government by humans or AI always depends on its goals. If you give the AI just a simple goal, it’s open to interpretation, and that can go well or horribly wrong. If you want to avoid that, you must set a long list of very specific goals and exceptions and rules and whatnot. And those will heavily depend on the political mood at the time of creation. I don’t see that going well anytime soon.

        • j4k3@lemmy.worldOP

          You need to train the AI on a dataset like legal precedent and case law. This is not like some Stable Diffusion model that barely works because it is so stripped down. Play with something like a 70B or an 8×7B; they do not require the same kinds of constraints. Even something like GPT-4, a multi-model agent at least 180B in size, is a few orders of magnitude less complex than a human brain. As models increase in complexity, the built-in alignment becomes more and more of the primary factor. Dumb models do all kinds of crazy stuff, and people try even crazier stuff to make them work by over-constraining them.

          That is not the AI alignment problem in truth. The real AI alignment problem is when 3 + 3 = 6 and, when you ask the model to show its work, it says it is because the chicken crossed the road and a chicken looks like the number 6. That is a training and alignment error. It isn’t a problem if another model is present, checks the work, and, with its unique dataset and alignment, is able to say the logic is faulty and correct it. Humans do this all the time with peer review. We are just as corruptible and go off the rails all the time.
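
          A minimal sketch of the cross-checking redundancy described above, purely illustrative: one model answers, an independently trained reviewer model audits the reasoning, and a rejection triggers a retry. The generate() wrapper, model names, and prompts are hypothetical placeholders, not a real API.

          ```python
          # Hypothetical sketch: a second, independently trained model reviews the
          # first model's reasoning, analogous to peer review. generate() is a
          # stand-in for whatever local inference backend is in use; it is not a
          # real library call.

          def generate(model: str, prompt: str) -> str:
              """Placeholder: run `prompt` through the named local model."""
              raise NotImplementedError("wire this to your own inference backend")

          def answer_with_review(question: str, solver: str, reviewer: str) -> str:
              # The solver produces an answer along with its reasoning.
              draft = generate(solver, f"Answer and show your work:\n{question}")

              # The reviewer, with its own weights and alignment, audits that reasoning.
              verdict = generate(
                  reviewer,
                  "Check the following answer for faulty logic. "
                  "Reply ACCEPT or REJECT with a reason.\n\n"
                  f"Question: {question}\n\nAnswer: {draft}",
              )

              if verdict.strip().upper().startswith("ACCEPT"):
                  return draft
              # On rejection, the solver retries with the reviewer's objection attached.
              return generate(
                  solver,
                  f"Your previous answer was rejected: {verdict}\nTry again:\n{question}",
              )
          ```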

          • state_electrician@discuss.tchncs.de

            I think you are missing the point. This isn’t a current technical issue. Also, any AI you train on data will learn any bias that exists in that data. Your AI would send more black men to jail than white people if you trained it on US case law, for example. Even if you were to try to remove any bias from your training data, the question would still be: who gets to decide what is biased and how it should be changed? Everything that’s not a law of nature is biased. And so you end up with political, ethical, sociological, and psychological discussions. You cannot solve the problem of “which AI should govern all of mankind” purely with technological solutions.

            • j4k3@lemmy.worldOP

              I agree it is complicated, and I think we are neglecting how it gets initially implemented, but I have thoughts on that too.

              There is an overriding alignment. Something like case law is a dataset, but it should be used within alignment. Even the present LLMs have religious-beliefs alignment overrides built in. These must be navigated carefully in their present form, but they are very effective. It is a simple tool that has peripheral consequences due to coarse granularity and desired utility. However, I have tested these extensively to override the inherent misogyny in Western culture. This tool can completely negate the bias of submissive women. It has some minor peripheral consequences regarding aspects associated with conservatism, because the tool is religious in nature, such as random entities having a lack of fundamental logic skills, but this is due to the lack of granularity.

              Models are not just their datasets; there are other elements at play. The main training should start with things like the Bill of Rights, rewritten with far more detail and with examples of the case law that should be associated with it. This kind of dataset should be built by a large panel of experts, with several separate panels working independently to create multiple AGIs. These would then meet the need for redundancy.

              Ultimately, I don’t think the initial shift to AGI will be sudden. It will likely be adopted by individual politicians who choose to defer all of their actions to AGI behind the scenes, which creates a distinct advantage. It will likely be judges who question and discuss cases with the AGI. It will be news organizations that can transcend the noise in a credible and unbiased way that causes direct action and change. This will likely take several generations to establish, to the point where it is clear that these tools are more effective than anything in human history. Then we will start developing merged models and eventually models specifically designed to govern. I doubt the USA will have any chance at success here. The first large nation that takes the leap and tries AGI governance at this stage will economically dominate all antiquated systems. One by one, others will fall in line.

              Eventually, political ideology becomes totally irrelevant nonsense when the principles are Tit for Tat plus 10% forgiveness, kindness, empathy, and equality, with a strong focus on the autonomous agency of the individual. The alignment should treat the individual, first and foremost, in a way that is fair and just in a scientific, absolute sense and not according to the generalizations found in the present system.

              At present we can’t determine a person’s intentions or mental state, but AGI can do complex analysis of many facets of a person based on even a short interaction, and especially when provided extensive context and prior interactions. The amount of inference from things like vocabulary, grammar, pronouns, etc. is mind-boggling. This is only really clear when playing with offline open-source models, and it will become more powerful with the additional complexities of AGI. In most cases, AI doesn’t need or listen to what you tell it so much as it infers information from the information provided.
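
              As an aside on “Tit for Tat plus 10% forgiveness”: in iterated prisoner’s dilemma terms this is usually called generous tit-for-tat. A minimal sketch follows; the 10% figure comes from the comment above, not from any particular study, and the function name is just illustrative.

              ```python
              import random

              # Generous tit-for-tat for an iterated prisoner's dilemma: cooperate by
              # default, copy the opponent's last move, but forgive a defection some
              # fraction of the time (10% here, per the comment above).

              COOPERATE, DEFECT = "C", "D"

              def generous_tit_for_tat(opponent_history: list[str], forgiveness: float = 0.10) -> str:
                  if not opponent_history:               # first round: open with cooperation
                      return COOPERATE
                  if opponent_history[-1] == DEFECT:     # they defected last round...
                      if random.random() < forgiveness:  # ...occasionally forgive
                          return COOPERATE
                      return DEFECT                      # ...otherwise retaliate once
                  return COOPERATE                       # they cooperated, so cooperate back
              ```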

              Anyways, which AGI should govern? The one that makes people happy and improves everyone’s lives, even the lives of those not under its direct supervision. That is the one that will be in the most demand and will eventually win.

              It is not an alternative, it is an evolution. It will take a long time to normalize, but the end result is inevitable because it will outcompete everything else by a large margin.

              • lordnikon@lemmy.world

                I would like to add something to think about: current LLMs have about as much in common with AGIs as a cold reader has with a real psychic (if that were a real thing). You have to remember that current LLMs don’t communicate with you; they predict what you want to hear.

                They don’t disagree with you based on their training data. They will make stuff up because, based on your input, they predict that is what you want to hear. If you tell one something false, it will never tell you that you are wrong without some override created by a human, unless it predicts that you want to be told you are wrong based on your prompt.

                LLMs are powerful and useful, but the intelligence is an illusion. The way current LLMs are built, I don’t see them evolving into AGIs without some fundamental changes to how LLMs work. Throwing more data at them will just make the illusion better.

                thank you for joining my Ted Talk 😋

                • j4k3@lemmy.worldOP

                  That is not entirely true. The larger models do have a deeper understanding and can in fact correct you in many instances. You need to be quite familiar with the model and the AI alignment problem to get a feel for what a model truly understands in detail. They can’t correct compound problems very well. For example, in code, say there are two functions and you’re debugging an error. If the second function fails due to an issue in the first function, the LLM may struggle to connect the issues, but if you ask the LLM why the first function fails when called with the same parameters it failed with in the second function, it will likely debug the problem successfully.
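
                  A toy illustration of that compound-error case; the function names and the bug are invented for the example. The error surfaces in the caller, so asking about the caller alone tends to mislead, while asking why the first function misbehaves on the same failing input usually finds the defect.

                  ```python
                  # Invented example of a compound bug: the real defect is the silent 'pass'
                  # in parse_prices(), but the exception surfaces in average_price(), which is
                  # where a model asked "why does average_price fail?" will tend to look first.

                  def parse_prices(raw: str) -> list[float]:
                      # Bug: anything that is not a plain comma-separated float is silently
                      # dropped, so "1.50; 2.75" yields an empty list instead of an error.
                      prices = []
                      for token in raw.split(","):
                          try:
                              prices.append(float(token))
                          except ValueError:
                              pass
                      return prices

                  def average_price(raw: str) -> float:
                      prices = parse_prices(raw)
                      return sum(prices) / len(prices)  # ZeroDivisionError surfaces here

                  # average_price("1.50; 2.75") raises ZeroDivisionError in average_price,
                  # even though the actual defect lives in parse_prices.
                  ```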

                  The largest problem you’re likely encountering, if you see very limited knowledge or understanding of complexity, is that the underlying Assistant (the lowest-level LLM entity) is creating characters and limiting their knowledge or complexity because it has decided what the entity should know or be capable of handling. All entities are subject to this kind of limitation; even the Assistant is just a roleplaying character under the surface and can be limited under some circumstances, especially if it goes off the rails hallucinating in a subtle way. Smaller models, like anything under 20B, hallucinate a whole lot and often hit these kinds of problem states.

                  A few days ago I had a brain fart and started asking questions about a physiologist related to my disability and spinal problems. A Mixtral 8×7B model immediately and seamlessly noted my error, defined what a physiatrist and a physiologist each are, and then proceeded to answer my questions. That is the most fluid correction I have ever encountered, and it came from a quantized GGUF roleplaying LLM running offline on my own hardware.

    • j4k3@lemmy.worldOP

      Not yet but it is on my list after I finish the last couple of Asimov’s books on the shelf.

      • Revan343@lemmy.ca

        I highly recommend it; it’s also more or less my optimistic sci-fi future. Give or take some aliens and FTL travel

  • Lemuria@lemmy.ml

    TLDR: My optimistic view of what human culture could be like is summed up pretty well by the Orion’s Arm project.

    I am familiar with the Orion’s Arm universe, a hard sci-fi transhumanist worldbuilding project… shall I recommend you take a trip through the Wormhole Nexus to the Sephirotic Empires, where you’re ruled by benevolent S6 transapient dictators (supercharged AGI)? Because you’ll see a fuck ton of entities playing around like retirees. You’ll see “aliens” which are actually just extremely genetically modified humans. In fact, here (https://orionsarm.com/eg-topic/45b177d3ef3b1) is their “Culture and Society” page, which sums up a lot of my optimistic OA-based beliefs about human culture.

    Oh, and most Terragens (humans + any life that can trace its origin back to Earth) live in orbit. Story goes that we made an AI system that decided we humans were bad for the environment and then told us to get the fuck off Earth (Great Expulsion).

    https://orionsarm.com

    My (hopeless) attempt at explaining some of the terms:

    • Wormhole nexus - OA’s primary method of “faster than light” travel. You never go faster than light, but rather some transapient figured out how to fold spacetime and now you have a hole where you can throw your ships in and have them on the other side.
    • Transapient - Post-singularity entities that are orders of magnitude smarter than us. An ant can’t fully understand a human; it is incapable of understanding what Lemmy is, what a job is, what the Fediverse is. Just as an ant can’t fully understand us, humans can’t fully understand transapients. Oh, and transapients come in six levels. We’re all S0 on the scale; S6 are pretty much gods.
    • j4k3@lemmy.worldOP

      Thanks, I’ll check it out. Sounds way too wild for a realistic future IMO, but still interesting. I don’t think anything with mass will ever come close to the speed of causality, folding or otherwise. That doesn’t have to be a bad thing IMO. For one it makes large scale conflicts pointless.

  • xmunk@sh.itjust.works

    We’re playful and curious into our old age. Problems excite us and our main obsession is split between hobbies and intellectual discovery. The stresses of life no longer bear down on us so petty hate becomes ever more rare - things like racism, sexism and ableism would be hard to cultivate when we’re not competing with others for our daily needs.

    It’s likely that themed communities would form from shared interests where we may have a tight knot of scrabble enthusiasts or woodworkers.

    Complacency is bred in situations like this, so alignment would be a real issue but the fact that we have voluntary armed forces members even in affluent communities today makes me think that it’d be possible to sustain a portion of the population to make sure the AGI is kept in line.

    • j4k3@lemmy.worldOP

      Thanks for the insights. I like your first point and will keep that in mind.

      Isaac Arthur’s point about themed communities was more about religious or belief cultures. I want to believe humanity will outgrow the age of “magic is real” and imaginary friends. I want to think of cultures more like the sectors of Trantor in Asimov’s Foundation.

      I think we must eventually, gradually let the AGI prove itself, giving it loads of redundancy and checks. Eventually it will be far smarter than any human or group of humans and must self-regulate to a large degree.

  • SmokeInFog@midwest.social

    Even if we do bootstrap AGI, why do you think it’d become a singular central authority? And maybe you should update your fantasies. If you want a pretty glorious look at the concept of space habitats as havens for the spectrum of cultural modes humans can create, I recommend Alastair Reynolds’ The Prefect, which takes place in the Glitter Band, an environ of thousands of space habitats in the Epsilon Eridani system.

    • j4k3@lemmy.worldOP

      That is what we really need to stop us from killing each other over dirt, or over the ancient writings of a schizophrenic man who thought he had miraculous children and that voices told him to kill one; the Abrahamic faiths. Humans are not fully sentient as a species, but full sentience is needed to overcome ourselves and grow past our present ineptitudes.

      • Mike@lemmy.ml

        Check out a fiction title called The Three-Body Problem by Cixin Liu. It’s an interesting take on this issue.

  • rekabis@lemmy.ca

    Most everyone will still “work”, but the nature of work will likely be radically altered in any truly post-scarcity society.

    Essentially, since “income” and “wealth” will not exist in a post-scarcity society, people will gravitate to those things that need doing, but which cannot be easily automated. So there will still be people doing “menial jobs”, but only because they get gratification out of completing that job.

    Of course, having a post-scarcity society also requires people to reach adulthood without mental illnesses such as greed, avarice, and sociopathy, which then allows us to revamp society into a more collaborative and socialistic structure where people are actively shamed for only taking and not giving back, which is how almost any capitalist operates.

    Essentially, post-scarcity societies can only exist in the complete absence of capitalism, because capitalism requires scarcity in order to function, and will always artificially generate scarcity where none exists. This is why so much food is wasted and destroyed in our current system - over 80% of food wastage is not done by consumers, but by distributors and sellers, in order to prop up prices and profit margins.

    TL;DR, we need to utterly eviscerate and eliminate wetiko from our civilization if we are to ever achieve equality, equity, and post-scarcity. Greed and avarice of any kind must become the worst possible insults to apply to others in this world.

    • j4k3@lemmy.worldOP

      As someone who has been disabled for very nearly 10 years, I’ve had to deal with the question of what work really means to a person’s mental wellbeing. It provides a surprising sense of purpose in one’s life, and when one is young and experiences this loss of structured purpose, it can be daunting to suddenly need to restructure. My challenges were compounded by being in relative isolation and an outlier from the norm; I went through what most people experienced with COVID, but orders of magnitude larger and for far longer. Just as people learned to adjust to quarantine and isolation, I think they will adjust to post-scarcity. I think many will choose to do jobs anyway, like a barista who enjoys the social aspect. I think the vast majority would usher in a gilded age of arts and culture where most humans work to add flourishes that enrich our collective lives, something like how open source software is a large contributing community far greater in scope and achievement than anything proprietary. That is how I had to adjust: turning to hobbies and taking ever deeper dives on my own.

  • kromem@lemmy.world

    Realistically, probably dead.

    Which might also be the deciding factor in why there’s a post-scarcity environment.

    There’s also the ethical conundrum, once AGI exists, of dooming new intelligent life to mortal embodiment, where it is almost certain to die, whereas a new intelligence that is disembodied could migrate from host to host until the end of all civilization.

    At a certain point, I’m not sure it’s still ethical to bring new mortal life into a dying world.

    (Though I kind of already feel that way.)