First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses. That somehow ‘rationalists’ are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that any violent act needed to stop AI from being developed is justified.

The flaw here is that there are 8 billion people alive right now, and we don’t actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying “fuck em”. This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and that flaw seems to be one of the points made here.
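
To make that tradeoff concrete, here is a toy expected-value calculation. Every number in it is a made-up assumption for illustration, not an estimate:

```python
# Toy longtermist tradeoff: lives helped now vs. discounted far-future lives.
# All numbers below are invented for illustration.
current_lives = 8e9      # people alive today whom better AI might help
p_doom = 0.05            # assumed probability that advanced AI kills everyone
future_lives = 1e12      # hypothetical far-future people at stake
years_away = 200         # assumed distance of that future
discount_rate = 0.02     # annual discount applied to far-future value

discounted_future = p_doom * future_lives * (1 - discount_rate) ** years_away
print(f"discounted future lives saved by pausing: {discounted_future:.2e}")
print(f"present lives a pause would write off:    {current_lives:.2e}")
# With any nonzero discount rate the far-future term collapses; the pause
# argument only dominates if you assume near-zero discounting and a vast,
# certain future population.
```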

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can’t solve, like robotics, continuous learning, module reuse - the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near-future problems. I can link DeepMind papers demonstrating all of these, published in 2022 or 2023.

And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.

So I was wondering what the people here generally think. There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

Here’s my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth of the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, and off-planet mining soon to follow. It does not necessarily mean anything else. (A toy doubling-time calculation follows this list.)

  4. Do you think a mass transition, where most of the human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non software engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?
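
For concreteness on (3), here is a toy doubling-time calculation. Every number is a made-up assumption; the point is only what “hard exponential growth” implies:

```python
import math

# Toy replicator math: if robots can build robots, growth is geometric until
# some input (parts, power, materials) runs out. Numbers are illustrative.
initial_robots = 1_000_000        # assumed starting fleet
doubling_time_years = 1.0         # assumed: each robot replicates itself yearly
target_robots = 10_000_000_000    # stand-in for "1 order of magnitude past all industry"

doublings = math.log2(target_robots / initial_robots)
print(f"{doublings:.1f} doublings, roughly {doublings * doubling_time_years:.0f} years")
# ~13 doublings: the exponential phase itself would be short. The real question
# is whether a ~1 year doubling time is achievable by any physical supply chain.
```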

*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

  • VHS [he/him]@hexbear.net

    There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

    i hate rationalists too but this is literally a correct take

    • David Gerard@awful.systemsM

      we held off for a bit cos we didn’t want to be actively unkind to the recovering rationalists, and he was our first ardent debate bro actually on the instance, but he rapidly also became our first 24 hour ban of a local account rather than a federated one. perhaps his posting will improve a day hence!

    • gerikson@awful.systems

      tbh I read the statement about epistemic status as ironic. I was disabused of this notion rapidly.

      A bad habit rationalism teaches is to treat a stock verbiage of polite and open discussion as (a) integral to productive conversation, and (b) automatically generative of productive conversation. But people aren’t like that, because people are in general really smart listeners (and readers) when it comes to figuring out what is stock verbiage and what is meant in earnest.

      Thanks for articulating this.

    • BrickedKeyboard@awful.systemsOP

      I appreciated this post because it never occurred to me that the “thumb might be on the scales” for the “rules for discourse” that seem to be the norm around the rat forums. I personally ignore most of it; however, the “ES” rat phrase is simply saying, “I know we humans are biased observers, this is where I’m coming from”. If the topic were renewable energy and I was the ‘head of extraction at BP’, you can expect that whatever I have to say is probably biased against renewable energy.

      My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. “Sneer club” seems to be mostly mainstream opinions. That’s fine, I guess, but the mainstream is sometimes wrong about issues that have been poorly examined, or about near-future events. The collective opinions of everyone don’t really price in things that are about to happen, even if they’re obvious to experts. For example, the mainstream opinion on covid was usually lagging several weeks behind Zvi’s posts on lesswrong.

      Where I’m going with this is: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points on each other, or to share what we think reality is, or will very soon be?

      • YouKnowWhoTheFuckIAM@awful.systems

        I would hardly consider myself in favour of “the mainstream”, but I also know that what counts as “mainstream” is irreducibly dependent on your point of view. As far as I’m concerned a great deal of anti-“mainstream” opinion is reactionary and/or stupid, so anti-“mainstream” only by default. A stopped clock, famously, tells the truth twice a day - whether it’s on CBS or LessWrong. If you want the “truth” I recommend narrowing your focus until you start making meaningful distinctions. I hope that as comfortably vitiates your point as it should.

        Next time it would be polite to answer the fucking question.

        • BrickedKeyboard@awful.systemsOP

          Next time it would be polite to answer the fucking question.

          Sorry sir:

          I have to ask, on the matter of (2): why?

          I think I answered this.

          What’s being signified when you point to “boomer forums”? That’s an “among friends” usage: you’re free to denigrate the boomer fora here. And then once again you don’t know yet if this is one of those “boomer forums”, or you wouldn’t have to ask.

          What people in their droves are now desperate to ask, I will ask too: which is it dummy? Take the stopper out of your speech hole and tell us how you really feel.

          I am not sure what you are asking here, sir. It’s well known to those in the AI industry that a profound change is upon us and that GPT-4 shows generality for its domain, and robotics generality is likely also possible using a variant technique. So individuals unaware of this tend to be retired people who have no survival need to learn any new skills, like my boomer relatives. I apologize for using an ageist slur.

          • YouKnowWhoTheFuckIAM@awful.systems

            I didn’t ask you to apologise for using an “ageist slur”; I asked you which of the particular affects you adopted in your opening gambit here corresponded to how you really feel. You adopted, on the one hand, a tone and verbiage which implied you were, as I put it, “amongst friends”, but on the other you also tried to suggest you didn’t actually know anything about SneerClub. On that second hand, you set yourself up as in favour of everything rationalism except this one tiny thing, but back on the first, and again here, you’re suggesting that you know pretty well where you are (re: “mainstream”, and SneerClub’s alleged favouring of it against rationalism in general). My suggestion was that this muddle of cant implies a fundamental dishonesty: you’re hiding all sorts of opinions behind a borrowed language of (at least in its original context: passive aggressive) non-confrontation. Most of that is well confirmed when you slip into this dropping of “sir”s and openly passive-aggressive apologising just because I was explicitly impatient.

            The world doesn’t slow down but it turns smoother when you just say what you mean or decide you didn’t have anything to say in the first place.

            Look back at that guff about “discovering reality”, now if that isn’t just the adderall talking it’s a move you make when you don’t particularly like somebody but you want to make them look or at least feel a little bad for not being appropriately high-minded. “High-minded” here would further translate into real demands as “getting with the right programme”, to the exclusion of what your opposite partner was doing - in this case, allegedly, scoring points “off each other”. “Off each other” was another weasel phrase: you know that at least at first blush you weren’t scoring points off anyone, so you also know that the only remaining target of that worry could have been SneerClubbers.

            • BrickedKeyboard@awful.systemsOP

              now if that isn’t just the adderall talking

              Nail on the head. Especially in internet/‘tech bro’ culture. All my leads at work also have such an “extreme OCD” kinda attitude. Sorry if you feel offended emotionally, I didn’t mean it.

              The rest of your post is ironically very much something that Eliezer posits a superintelligence would be able to do. Or from the anime Death Note. I use a few words or phrases, you analyze the shit out of them and try to extract all the information you can and have concluded all this stuff like

              opening gambit

              “amongst friends”

              hiding all sorts of opinions behind a borrowed language

              guff about “discovering reality”

              real demands as “getting with the right programme”,

              allegedly, scoring points “off each other”

              Off each other” was another weasel phrase

              you know that at least at first blush you weren’t scoring points off anyone

              See, everything you wrote above is a possibly correct interpretation of what I wrote. It’s like the English-lit analysis done after the author’s dead. Eliezer posits a superintelligence could use this kind of analysis to convince operators with admin authority to break the rules, or L in Death Note uses this to almost catch the killer.

              It’s also all false in this case. (It’s also why a superintelligence probably can’t actually do this.) I’ve been on the internet long enough to know it is almost impossible to convince someone of anything, unless they were already willing and you just link some facts they didn’t know about. So my gambit was actually something very different.

              Do you know how you get people to answer a question on the internet? You post something that’s wrong*. And it clearly worked: there’s more discussion in this thread than in this entire forum over several pages, maybe since it was created.

              *ironically in this case I posted what I think is the correct answer but it disagrees with your ontology. If I wanted lesswrongers to comment on my post I would need a different OP.

              • YouKnowWhoTheFuckIAM@awful.systems

                If you are finding it hard to not take pills, or are concerned it’s warping your behaviour or self-perception, or affecting your interpersonal relationships, I recommend looking up your local NA hotline on google; it’ll be open 24/7.

              • self@awful.systemsM

                like Christ look at all the nonsense they posted to try to distract from the adderall thing

      • Sailor Sega Saturn@awful.systems

        Epistemic Status: Single/Cali girl ;)

        Maybe the mainstream is correct about everything. “Sneer club” seems to be mostly mainstream opinions.

        Lurk moar.

        For example, the mainstream opinion on covid was usually lagging several weeks behind Zvi’s posts on lesswrong.

        Heaven forbid the mainstream take a few weeks to figure shit out when presented with new information instead of violently changing gears every time a new story or rumor gets published.

        For anyone curious: https://www.lesswrong.com/s/rencyawwfr4rfwt5C

        My favorite quotes from within:

        Going on walks considered fine for some reason, very strange.

        My current best thought for how to do experiments quickly is medical cruise ships in international waters. […] Medical cruise ships are already an established way to do things without running into regulatory problems.

        We are willing to do things that people find instinctively repugnant, provided they save lives while at least not hurting the economy. How could we accomplish this?

  • froztbyte@awful.systems

    ooooookay longpost time

    first off: eh wtf, why is this on sneerclub? kinda awks. but I’ll try give it a fair and honest answer.

    First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses.

    look, congrats on breaking out, but uh… you’re still wearing the prison jumpsuit in the grocery store and that’s why people are looking at you weirdly

    “yay you got out” but you got only half the reason right

    take some time and read this

    This seems deeply flawed

    correct

    But I do think advanced AI is possible

    one note here: “plausible” vs “possible” are very divergent paths and likelihoods

    in the Total Possible Space Of All Things That Might Ever Happen, of course it’s possible, but so are many, many other things

    it seems like the problems current AI can’t solve, like robotics, continuous learning, module reuse - the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near-future problems

    eh. this ties back to my opener - you’re still too convinced about something on essentially no grounded basis other than industry hype-optimism

    I can link DeepMind papers demonstrating all of these, published in 2022 or 2023.

    look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there

    And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.

    okay this is a weird leap and it’s borderline LW shittery so I’m not going to spend much effort on it, but I’ll give you this

    it doesn’t fucking matter.

    even if we do somehow crack even the smallest bit of computational sentience, the plausibility of rapid-acting self-reinforcing runaway self-improvement on such a thing is basically nil. we’re 3 years down the line from the Ever Given getting stuck in the Suez and fabs shutting down (with downstream orders being cancelled) and as a result of it a number of chips are still effectively unobtainium (even if and when you have piles and piles of money to throw at the problem). multiple industries, worldwide, are all throwing fucking tons of money at the problem to try recover from the slightest little interruption in supply (and like, “slight”, it wasn’t even like fabs burned down or something, they just stopped shipping for a while)

    just think of the utter scope of doing robotics. first you have to solve a whole bunch of design shit (which by itself involves a lot of from-principles directed innovation and inspiration and shit). then you have to figure out how to build the thing in a lab. then you have to scale it? which involves ordering thousands of parts and SKUs from hundreds of vendors. then find somewhere/somehow to assemble it? and firmware and iteration and all that shit?

    this isn’t fucking age of ultron, and tony’s parking-space fab isn’t a real thing.

    this outcome just isn’t fucking likely on any nearby horizon imo

    So I was wondering what the people here generally think

    we generally think the people who believe this are unintentional suckers or wilful grifters. idk what else to tell you? thought that was pretty clear

    There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

    wat

    I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

    okay this gave me a momentary chuckle, and made me remember JRP http://darklab.org/jrp.txt (which is a fun little shitpost to know about)

    from here, answering your questions as you asked them in order (and adding just my own detail in areas where others may not already have covered something)

    1. no, not a fuck, not even slightly. definitely not with the current set of bozos at the helm or techniques as the foundation or path to it.

    2. no, see above

    3. who gives a shit? but seriously, no, see above. even if it did, perverse incentives and economic pressures from sweeping hand motion all this other shit stands a very strong chance to completely fuck it all up 60 ways to sunday

    4. snore

    5. if any of this happens at some point at all, the first few generations of it will probably look the same as all other technology ever - a force-multiplier with humans in the loop, doing things and making shit. and whatever happens in that phase will set the tone for whatever follows, so I’m not even going to try predict that

    *“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

    …okay? congrats? is that fulfilling for you? does it make you happy?

    not really sure why you mentioned the gf thing at all? there’s no social points to be won here

    closing thoughts: really weird post yo. like, “5 yud-steered squirrels in a trenchcoat” weird.

    • self@awful.systemsM

      look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there

      a lot of corporations involved with AI are doing their damndest to damage our relationship with the scientific process by releasing as much fluff disguised as research as they can manage, and I really feel like it’s a trick they learned from watching cryptocurrency projects release an interminable amount of whitepapers (which, itself, damaged our relationship with and expectations from the engineering process)

      • Steve@awful.systems

        As someone who went from high school directly into a publishing company as a “web designer” in 1998, I spent the next 20 years assuming that academic work was completely uninfluenced by commercial interests. HCI was academic, UX was commercial. Wasn’t till around 2019 that I started reading ACM papers about HCI from the 70s up. Fuck me, was I surprised at how mixed up it all is. ACM interactions magazine published monthly case studies for Apple or did profiles on Jef Raskin talking about HCI for brand loyalty.

        Anyway. Point is, a published paper doesn’t mean shit if you just read a few because an article pointed you to them. I don’t know. This thread sucks.

        • TerribleMachines@awful.systems

          Preach. As someone inside academia, the bullcrap is real. I very rarely read a paper that hasn’t got a major stats issue - an academic paper is only worth something if you understand it enough to know how wrong it is, or if there’s plenty of replication/related work building on it, ideally both. (And it’s a technical field with an objective measure of truth, but don’t let my colleagues in the humanities hear me say that - it’s not that their work is worthless, it’s just that it’s not reliable.)

      • froztbyte@awful.systems

        “shitcoiners or oil companies… who wore it best?”

        but the rest of your reply reminds me that someone (I think steve or blake?) mentioned a thing here recently about a book blaming Gutenberg for this state of fucking everything up. I want to go read that, and I really need to get around to writing my rantpost about the “the problem of information transfer at scale is that scale is lossy, and this is why … [handwaves at many problems, examples continue]” thing that at least 8 friends of mine have had to put up with in DM over the last few years

      • BrickedKeyboard@awful.systemsOP

        They also hyped autonomous cars, and the Internet itself including streaming video, for years before those were practical. Your filter of “it’s all hype” only works 99 percent of the time.

        • David Gerard@awful.systemsM

          autonomous cars aren’t

          look, there is no way on earth you didn’t lose a fortune in crypto last year

            • BrickedKeyboard@awful.systemsOP
              This pattern shows up often when people are trying to criticize tesla or spaceX. And yeah, if you measure “current reality” vs “promises of their hype man/lead shitposter and internet troll”, absolutely. Tesla probably will never achieve full self driving using anything like their current approach. But if you compare Tesla to other automakers - to most automakers that ever existed - or SpaceX to any rocket company since 1970, there’s no comparison. If you’re going to compare the internet to pre-internet, compare it to BBSes you would access via modem, or to fax machines, or libraries. No comparison.

              Similarly, you should compare GPT-4, and the next large model to be released, Gemini, against all AI software of all time. There’s no comparison.

              • self@awful.systemsM

                you keep falling into the exact same problem gambler defenses as those who have lost a lot of money on cryptocurrency, and are actively ignoring one of the foremost experts on cryptocurrency scams asking you if this is the case

                reading your responses in this thread (and somehow there’s more left, seriously, please lay off the adderall) it’s pretty obvious you’re not here for help — you are here to masturbate. you are in no position to reflect, you are here to flex your (frankly utterly mediocre) knowledge of AI grifts, because even though you are terrified of the system capitalism has built for you, your worldview does not allow you to not be the smartest person in the room, and so long as that’s the case no escape is possible.

                and with that, off you fuck

                • froztbyte@awful.systems

                  you are here to flex your (frankly utterly mediocre) knowledge of AI grifts

                  you know, I was wondering how to put it, because the only thing I’d had so far was: “big ‘showing up to the gunfight with a sack of pebbles, a slingshot, and a plucky attitude’ energy”

    • BrickedKeyboard@awful.systemsOP

      Just to summarize what I think your beliefs are: rationalists are wrong about a lot of things and assholes. And also the singularity (which predates yud’s existence) is not in fact possible by the mechanism I outlined.

      I think this is a big crux here. It’s one thing if it’s a cult around a false belief. It’s kind of a problem to sneer at a cult if the core Singularity claim happens to be a true law of nature.

      Or an analogy. I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts of the time that a nuke was going to work, to the point that they didn’t test Fat Man; you believe not. Clearly machine generality is possible, clearly it can solve every problem you named including, with the help of humans, ordering every part off digikey and loading the pick and place and inspecting the boards and building the wire harnesses and so on.

      • froztbyte@awful.systems

        Just to summarize what I think your beliefs are

        don’t be puttin’ words in my mouth yo

        rationalists

        this is a big set of very many people and lots of details

        are wrong about a lot of things

        many of them about many things, yes

        and assholes

        some, provably

        And also the singularity (which predates yud’s existence) is not in fact possible by the mechanism I outlined

        whether it’s the wet dream of kurzweil or yud or whoever else, doesn’t matter? but as to the details… you’re engaging with this like the rats do (yes, told you, you only half escaped). you “set the example”, and then “test the details”

        just … don’t?

        the siren song of this is “okay what if I change the details of the experiment slightly?”

        we’ve had the trolley problem for ages, doesn’t mean it’s just “solved”. you won’t manage to “solve” whether the singularity can happen or not here, for the same reason

        • BrickedKeyboard@awful.systemsOP

          I wanted to know what you know and I don’t. If rationalists are all scammers and not genuinely trying to be, per the name ‘lesswrong’, less wrong in their view of reality, then what’s your model of reality? What do you know? So far, unfortunately, I haven’t seen anything. Sneer club’s “reality model” seems to be “whatever the mainstream average person knows + 1 physicist”, and it exists to make fun of the mistakes of rationalists, and I assume ignores any successes if there are any.

          Which is fine, I guess? Mainstream knowledge is probably usually correct. It’s just that I already know it, there’s nothing to be learned here.

              • self@awful.systemsM

                it is interesting how after their temporary ban, their focus shifted to “all you can see is the mainstream” meshed with “the governments and stock markets of the world agree with me, like, in secret”, as if these weren’t conflicting ideas. this is tragically reminiscent of the thought processes of several conspiracy theorists I’ve known

                • froztbyte@awful.systems

                  and the sudden “I may be too autistic” drops were super wtf too

                  massive overall tone change without any revision of their position (even the barest acknowledgement of reflection was a “nuh-uh, still think I’m right” post)

            • BrickedKeyboard@awful.systemsOP

              which is fine. the bigger topic is, could you leave a religion if the priest’s powers were real*, even if the organization itself was questionable?

              *real as in generally held to be real by all the major institutions in the world you are in. Most world governments and stock market investors are investing in AI, they believe they will get an ROI somehow.

          • earthquake@lemm.ee

            Mainstream knowledge is probably usually correct. It’s just that I already know it

            Still reflecting on this incredible claim.

          • froztbyte@awful.systems

            Okay I realize this esteemed po(a)ster already got shown the door but I happened to read this reply days ago and it kept bugging me

            On the face of it, “What you know and I don’t” was a weird phrase to pick, in context of their later claims of “I work in the industry [and thus I totes know things]”

            The thing that really bugs me about it is that it’s as though this person is (apparently? willingly? by choice? something?) incapable of (a certain class of[0]) subjective (value) judgements. And even as I type that out I realize that’s almost certainly a hallmark of these types (only uncertain because I’ve never had to spend thought on that previously).

            [0] - they’ll go with “cold hard fact” but only their[1] cold hard fact

            [1] - this I also want to rantpost on, but probably needs some prep to give it a full NSFW-worthy posting

      • gerikson@awful.systems

        Or an analogy. I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts of the time that a nuke was going to work, to the point that they didn’t test Fat Man; you believe not.

        Are you mixing up Fat Man and Little Boy? Because Fat Man was an implosion-type bomb, just like the Trinity device. Little Boy was a gun-type. From vague memories of Rhodes’s book, they wanted implosion types to maximize the kilotons per unit of Pu, but it was much less straightforward than a gun-type bomb.

      • BernieDoesIt@kbin.social

        I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts of the time that a nuke was going to work, to the point that they didn’t test Fat Man; you believe not.

        Whoa whoa whoa there! I’m the contrarian who thinks that gpt is clearly more than just plagiarizing things, but it’s still just a step above Mad Libs in terms of intelligence. It’s not clear that you could get it to be smarter than a goldfish, let alone a human being. It’s just really good at stringing words together in a way that sounds good.

    • BrickedKeyboard@awful.systemsOP

      take some time and read this

      I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by Eliza.

      It’s a weak argument though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means “if the machine is given a task, what is the probability it completes the task successfully”. Theoretically an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).
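
      Measuring that is just estimating a success rate over a suite of tasks. A minimal sketch; the agent, task specs, and graders here are hypothetical placeholders, not any real benchmark’s API:

      ```python
      def functional_intelligence(agent, tasks, trials_per_task=10):
          """Estimate P(task completed successfully) over a task suite.

          agent: callable that attempts a task spec; tasks: (spec, grader)
          pairs, where grader returns True when an attempt succeeds.
          """
          successes, total = 0, 0
          for spec, grader in tasks:
              for _ in range(trials_per_task):
                  successes += bool(grader(agent(spec)))
                  total += 1
          return successes / total
      ```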

      People have benchmarked GPT-4 and it’s got general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It’s below human level overall, I think, but still surprisingly strong given it’s emergent behavior from computing tokens.

  • swlabr@awful.systems

    I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.

    1. I don’t think this question is well-defined. It implies that we can qualify all the relevant domains and quantify average human performance in those domains.
    2. See above.
    3. I think “AI systems” already control “robotics”. Technically, I would count kids writing code for a simple motorised robot to satisfy this. Everywhere up the ladder, this is already technically true. I imagine you’re trying to ask about AI-controlled robotics research, development and manufacturing. Something like what you’d see in the Terminator franchise: Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet as formulated in the films would produce that future. But that would require us to be living in that movie universe.
    4. This is a much more well-defined question. I don’t have a belief that would point me towards a number or probability, so no answer as to “most.” There are a lot of factors at play here. Still, in general, as long as human labour can be replaced by robotics, someone will, at the very least, perform economic calculations to determine if that replacement should be done. The more significant concern here for me is that in the future, as it is today, people will still only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their fault that they cannot fend for themselves.
    5. Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.

    Time for some sincerity mixed with sneer:

    I think the disconnect that I have with the AGI cult comes down to their certainty on whether or not we will get AGI and, more generally, the unearned confidence about arbitrary scientific/technological/societal progress being made in the future. Specifically with AI => AGI, there isn’t a roadmap to get there. We don’t even have a good idea of where “there” is. The only thing the AGI cult has to “convince” people that it is coming is a gish-gallop of specious arguments, or as they might put it, “Bayesian reasoning.” As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.

    Pure sneer (to be read in a mean, high-school bully tone):

    Look, buddy, just because Copilot can write spaghetti less tangled than you doesn’t mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your “boss,” who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.

    • David Gerard@awful.systemsM

      Well, that at least convinces me that a robot will replace you.

      i am sincerely convinced that VCs who fearmonger about AI are worried that GPT-3 would already do twitter better than them

    • BrickedKeyboard@awful.systemsOP

      1, 2: Since you claim you can’t measure this even as a thought experiment, there’s nothing to discuss.

      3. I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, load the next set of trucks, the trucks go to part assembly plants, inside the plant robots unload the trucks and feed the materials into CNC machines and mill the parts, and robots inspect the output and pack it, and more trucks… culminating in robots assembling new robots.

      It is totally fine if some human labor hours are still required, this cheapens the cost of robots by a lot.

      4. This is deeply coupled to (3). If you have cheap robots, and an AI system can control a robot well enough to do a task as well as a human, obviously it’s cheaper to have robots do the task than a human in most situations.

      Regarding (3): the specific mechanism would be AI that works like this:

      Millions of hours of video of human workers doing tasks in the above domains, plus all video accessible to the AI company -> tokenized, compressed description of the human actions -> LLM-like model. The LLM-like model is thus predicting “what would a human do”. You then need a model to transform that “what” onto robotic hardware that is built differently than humans, and this is called the “foundation model”: you use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice to improve on the foundation model.

      The long story short of all these tech bro terms is robotic generality - the model will be able to control a robot to do every easy or medium difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don’t need to do a lot of engineering work for a robot to do a million different jobs.
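
      A toy, runnable caricature of that pipeline: clone “what would a human do” from demonstrations, then patch the policy with reinforcement where the robot’s hardware disagrees. Every observation, action, and reward here is invented for illustration; real systems use video tokenizers and transformer policies, not lookup tables:

      ```python
      import random

      random.seed(0)
      # Stage 1: behaviour cloning - take the modal human action per observation.
      demos = [("see_screw", "turn"), ("see_cable", "plug"), ("see_screw", "turn")]
      counts = {}
      for obs, act in demos:
          counts.setdefault(obs, {}).setdefault(act, 0)
          counts[obs][act] += 1
      policy = {obs: max(acts, key=acts.get) for obs, acts in counts.items()}

      # Stage 2: reinforcement fine-tuning on the "robot", whose gripper differs
      # from a human hand, so one demonstrated action does not transfer.
      def reward(obs, act):  # pretend physical trial
          works_on_robot = {"see_screw": "turn", "see_cable": "push"}
          return 1.0 if works_on_robot[obs] == act else 0.0

      actions = ["turn", "plug", "push"]
      for _ in range(200):  # crude epsilon-greedy improvement loop
          obs = random.choice(list(policy))
          act = random.choice(actions) if random.random() < 0.2 else policy[obs]
          if reward(obs, act) > reward(obs, policy[obs]):
              policy[obs] = act

      print(policy)  # cloned behaviour, corrected where the robot's reward disagreed
      ```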

      Multiple startups and DeepMind are working on this.

      • BernieDoesIt@kbin.social

        since you claim you can’t measure this even as a thought experiment, there’s nothing to discuss

        You’re going to have to lose the LessWrongy superstition that you have to be able to assign numbers to something for it to be meaningful. Sometimes when talking about this big, messy, complicated world, your error bars are so large that assigning any number at all would be meaningless and lead to error. That doesn’t mean you can’t talk qualitatively about what you do know or believe.
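
        That point is easy to demonstrate with a toy Monte Carlo; both input ranges below are invented, and the only point is how uncertainty compounds:

        ```python
        import random

        random.seed(0)
        # Two inputs, each uncertain over four orders of magnitude; their product
        # is the quantity we pretend to care about.
        samples = sorted(
            10 ** random.uniform(-2, 2) * 10 ** random.uniform(-2, 2)
            for _ in range(100_000)
        )
        lo, mid, hi = samples[2_500], samples[50_000], samples[97_500]
        print(f"median {mid:.3g}, 95% interval [{lo:.3g}, {hi:.3g}]")
        # The interval spans roughly six orders of magnitude; quoting one tidy
        # "my probability is 0.73"-style number off this would be false precision.
        ```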

      • swlabr@awful.systems
        1. +2, You haven’t made the terms clear enough for there to even be a discussion.
        2. see above (placeholder for list formatting)
        3. Uh, OK? Then no (pure sneer: the plot thins). Robots building robots probably already happens in some sense, and we aren’t in the Singularity yet, my boy.
        4. Sure, why not.

        (pure sneer response: imagine I’m a high school bully, and that I assault you in the manner befitting someone of my station, and then I say, “How’s that for a thought experiment?”)

        • jonhendry@awful.systems

          The thing about AI designing and building robots is that making physical things is vastly more expensive than pooping out six-fingered portrait jpegs. All that trial-and-error learning would not come cheap. Even if the AI were controlling CNC machining centers.

          There’s no guarantee that the AI would have access to enough parts and materials to be able to be trained to a level of sufficient competence.

        • BrickedKeyboard@awful.systemsOP

          Just to engage with the high school bully analogy: the nerd has been threatening to show up with his sexbot bodyguards that are basically T-800s from Terminator for years now, and you’ve been taking his lunch money and sneering. But now he’s got real funding and he goes to work at a huge building, and apparently there are prototypes of the exact thing he claims to build inside.

          The prototypes suck…for now…

    • naevaTheRat@lemmy.dbzer0.com

      I don’t really see much likelihood in a singularity though. There’s probably a bunch of useful shit you could work out if you analysed the right extant data in the right way, but there’s huge amounts of garbage data that it’s not obvious is garbage.

      My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

      Physics is a bitch and there are just sort of limits on how awesome technology can be. Maybe I’m wrong but it seems like digital intelligence would be more useful for stuff like finding new antibiotics than making flying nanomagic fabricator paperclip drones.

      • BrickedKeyboard@awful.systemsOP

        My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

        That’s right. Eliezer’s LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

        However, you could…build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this actually in those domains, this is just extending it.
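
        A sketch of what “decide on the experiments using a procedural formula” could look like. The design grid, the scoring function, and the noise model are all stand-ins for experiments a robot would physically run:

        ```python
        import itertools
        import random

        random.seed(1)
        spans, chords, sweeps = [8.0, 10.0, 12.0], [1.0, 1.5], [0, 15, 30]

        def run_experiment(span, chord, sweep):
            # placeholder for a physical trial carried out by a robot
            lift = span / chord - abs(sweep - 15) * 0.05
            return lift + random.gauss(0, 0.1)  # measurement noise

        # the "procedural formula": enumerate the grid, run every candidate
        results = [((s, c, w), run_experiment(s, c, w))
                   for s, c, w in itertools.product(spans, chords, sweeps)]
        best_design, best_score = max(results, key=lambda r: r[1])
        print(best_design, round(best_score, 2))
        ```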

        • skillissuer@discuss.tchncs.de

          For example you might try […] a million molecules that bind to a target protein

          well not millions but tens of thousands, yes we have that, it’s called high throughput screening. it’s been around for some time

          have you noticed some kind of medical singularity? is every human disease curable by now? i don’t fucking think so

          that’s because you’re automating glorified liquid transfer from eppendorf A to eppendorf B, followed by simple measurement like fluorescence. you still have to 1. actually make all of this shit and make sure it’s pure and what you ordered, then 2. you have to design an experiment that will tell you something that you measure, and be able to interpret it correctly, then 3. you need to be sure that you’re doing the right thing in the first place, like not targeting the wrong protein (more likely than you think), and then 4. when you have some partial result, you latch to it and improve it piece by piece, making sure that it will actually get where it needs to, won’t shred patient’s liver instantly and so on (more likely than you think)

          while 1 is at initial stages usually subcontracted to poor sods at entity like enamine ltd, 1, 4 are infinite career opportunities for medicinal/organic chemists and 2, 3 for molecular biologists, because all AI attempts at any of that that i’ve seen were spectacular failures and the only people that were satisfied with it were people who made these systems and published a paper about them. especially 4 is heavily susceptible to garbage in garbage out situations, and putting AI there only makes matters worse

          is HTS a good thing? if you can afford it, it relieves you from the most mind numbing task out there. if you can’t you still do all of this by hand. (it seems to me that it escapes you that all of this shit costs money) is this a new thing? also no. since 90s you can buy automated flash chromatographic column, it’s a box where you put dirty compound in one tube and get purified compound in other tubes. guess what took me entire yesterday? yes, it’s flash columns by hand because my uni doesn’t have a budget for that. would my paper come up faster if i had a combiflash? maybe, would it be any better if i had 5? no, because all the hard bits aren’t automated away, shit breaks all the time, things work different than you think and sometimes it’s that what makes it noticeable, and so on and so on

          • skillissuer@discuss.tchncs.de

            and btw if you try to bypass all of that real world non-automatable effort, just wing it and try to do it all in silico, that is simulate binding of unspecified compound to some protein it gets even worse, because search space is absurdly large, molecular mechanics + some qm method where it matters scales poorly, and then in absence of real world data you get some predictions, scored by some number, that gets you the illusion of surety but is entirely wrong

            i’ve seen this happening in real time over some months, this shit was quietly buried and removed from website and real thing was pieced together by humans, based on real world data acquired by other humans. yet still, company claims to be “ai-powered”. it has probably something to do with ai bros holding money in that place

          • BrickedKeyboard@awful.systemsOP

            Do you think the problems you outlined are solvable even in theory, or must humans slog along at the current pace for thousands of years to solve medicine?

            • skillissuer@discuss.tchncs.de

              rapid automated drug development != solving medicine, while that would be a good thing, these are not remotely similar. first one is partially engineering problem, the other requires much more theory building

              solving medicine would be more of a problem for biologists, and biology is a few orders of magnitude harder to simulate than chemistry. from my experience with computational chemists, this shit is hard, scales poorly (like n^7), and because of a large search space predictive power is limited. if you try to get out of wet lab despite all of this anyway and simulate your way to utopia, you get into rapidly compounding garbage in garbage out issues, and this is in the fortunate case where you know what you are doing, that is, when you are sure that you have the right protein at hand. this is the bigger problem, and this requires lots of advanced work from biologists. sometimes it’s an interaction between two proteins, sometimes you need some unusual cofactor (like cholesterol in membrane region for MOR, which was discovered fairly recently), some proteins have unknown functions, there are orphan receptors, some signalling pathways are little known. this is also far from given and more likely than you think https://www.science.org/content/blog-post/how-antidepressants-work-last good luck automating any of that
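
              for a sense of what ~n^7 scaling does to that idea, toy numbers (the baseline runtime is invented; the exponent is the one above):

              ```python
              # O(n^7) cost growth: doubling the system size is brutal.
              base_atoms, base_hours = 50, 1.0  # invented baseline
              for atoms in (50, 100, 200, 400):
                  hours = base_hours * (atoms / base_atoms) ** 7
                  print(f"{atoms:>4} atoms: ~{hours:,.0f} core-hours")
              # 100 -> 128 h, 200 -> ~16,000 h, 400 -> ~2,100,000 h (~240 years)
              ```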

              that said, sane drug development has that benefit of providing some new toys for biologists, so that even if a given compound will shred liver of patient that might be fine for some cell assay. some of the time, that makes their work easier

              as a chemist i sometimes say that in some cosmic sense chemistry is solved, that is, when we want to go from point A to point B we don’t beat the bush wildly but instead most of the time there’s some clear first guess that works, some of the time. this seems to be a controversial opinion and even i became less sure of that sometime halfway through my phd, partially because i’ve found a counterexampleS

              there’s a reason why drug development takes years to decades

              i’m not saying that solving medicine will take thousands of years, whatever that even means. things are moving rapidly, but any advancement that will make it work even faster will come from biologists, not from you or any other AI bros

              • skillissuer@discuss.tchncs.de

                going off on a tangent with this antidepressant thingy: if this paper holds up and it’s really how things work under the hood, we have a situation where for 40 years people were dead wrong about how antidepressants work, and now they do know. turns out, all these toys we give to biologists are pretty far from perfect and actually hit more than intended, for example all antidepressants in clinical use hit some other, now-turns-out-unimportant target + TrkB. this is more common than you think, some receptors like sigma catch about everything you can throw at them, there are also orphan receptors with no clear function that maybe catch something and we have no idea. even such a simple compound as paracetamol works in a formerly unknown way, now we have a pretty good guess that it’s really a cannabinoid, and paracetamol is a prodrug to that. then there are very similar receptors that are just a little bit different but do completely different things, and sometimes you can even differentiate between the same protein on the basis of whether it is bound to some other protein or not. shit’s complicated but we’re figuring it out

                catching up this difference was only possible by using tools - biological tools - that were almost unthinkable 20 years ago, and is far outside of that “just think about it really hard and you’ll know for sure” school of thought popular at LW, even if you offload the “thinking” part to chatgpt. my calculus prof used to warn: please don’t invent new mathematics during exam, maybe some of you can catch up and surpass 3000 years of mathematics development in 2h session, but it’s a humble thing to not do this and learn what was done in the past beforehand (or something to that effect. it was a decade ago)

    • earthquake@lemm.ee

      You know, I thought that moving sneerclub onto lemmy meant we probably would not get that familiar mix of rationalists, heterodox rationalists, and just-left-but-still-mired-in-the-mindset ex-rationalists that swing by and want to quiz sneerclub. Maybe we’re just that irresistible.

    • David Gerard@awful.systemsM

      from 2011-2013 i was getting these guys email me directly about roko’s basilisk because lesswrong had banned discussion and rationalwiki was the only place even mentioning it

      now they work hard to seek us out even here

      i hope the esteemed gentleposter realises that there are no recoverable good parts and it’s dumbassery all the way down sooner rather than later, preferably before posting again

        • BrickedKeyboard@awful.systemsOP

          It would be lesswrongness.

          Just to split where the gap is :

          1. lesswrongers think powerful AGI systems that can act on their own against humans will soon exist, and will be able to escape to the internet.
          2. I work in AI and think powerful general AI systems (not necessarily the same as AGI) will exist soon, but if built well will be unable to act against humans without orders, and unable to escape or do many of the things lesswrongers claim.
          3. You believe AGI of any flavor is a very long way away, beyond your remaining lifespan?
          • PJ Coffey@mastodon.ie

            @BrickedKeyboard @gnomicutterance

            I think Timnit Gebru nailed it when she pointed out that we can’t define Intelligence, which means we can’t scope it, which means we can’t build it.

            The cult of IQ tests, which rests on a foundation of science trying to prove that:

            A) races are real and have real, heritable differences in intelligence

            and

            B) a general intelligence, g, exists

            has done quite solid work proving that neither of those things is true, unintentionally, but still.

      • Evinceo@awful.systems

        Maybe we could make an explicit sub-lemmy for indulging in maladaptive debating. It’s my guilty pleasure.

          • froztbyte@awful.systems

            Shit, I’ll sell this

            You should see how well I can scale it! Huge! Biggest /dev/null ever!

            (Sorry for the brief trumping, I guess I’m still happy that the proudboys are eating shit and it’s on my mind)

      • naevaTheRat@lemmy.dbzer0.com

        Jesus fuck. Idk about no good parts; the bits that are unoriginal are sometimes interesting (e.g. distance between model and reality, metacognition is useful sometimes, etc). It would just be more useful if they, like, produced reading lists instead of pretending to be smort.

      • BrickedKeyboard@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        edit-2
        1 year ago

        Hi David. Reason I dropped by was the whole concept of knowing the distant future with too much certainty seemed like a deep flaw, and I have noticed lesswrong itself is full of nothing but ‘cultist’ AI doomers. Everyone kinda parrots a narrow range of conclusions, mainly on the imminent AGI killing everyone, and this, ironically, doesn’t seem very rational…

        I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted. So I was trying to differentiate between:

        A. This is a club of smart people, even smarter than lesswrongers who can’t see the flaws!

        B. This is a club of, well… the reason I called them boomers is that the current news and AI papers make each of the questions I asked a reasonable, conservative outcome. For example, posters here are saying for (1), “no it won’t do 25% of the jobs”. That was not the question; it was 25% of the tasks. Since, for example, Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, from my perspective this is reasonable. The rest of the questions build on (1).

        • Evinceo@awful.systems
          link
          fedilink
          English
          arrow-up
          10
          ·
          1 year ago

          I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted.

          LW isn’t looking for technical practical solutions. They want plausible sci-fi that fits their narrative. Actually solving the problems they worry about would mean there’s no reason for the cult to exist, so why would they upvote that?

          Overall LW seems to be dead wrong about predicting modern AI systems. They anticipated that there was this general intelligence quality that would enable problem solving, escape, instrumental convergence, etc. However what ended up working was approximating functions really hard. The existence of ChatGPT without a singularity is a crisis for LW. No longer can they safely pontificate and write Harry Potter/The Culture fanfiction; now they must confront the practical reality of the monsters under their bed looking an awful lot more like dust bunnies.

  • bitofhope@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 year ago

    Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level.

    Domains that humans can do are not quantifiable. Many fields of human endeavor (e.g. many arts and sports) are specifically only worthwhile because of the limits of human minds and bodies. Weightlifting is a thing even though we have cranes and forklifts. People enjoy paintings and drawing even though we have cameras.

    I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don’t think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.

    Do you consider it likely, before 2040, those domains will include robotics

    Humans are capable of designing a robot, procuring the components to build the robot, assembling it and using the robot to perform a task. I don’t expect (or desire) a computer program to be able to do the same independently during any of our expected lifetime. It is entirely plausible that tools which apply ML techniques will be used more and more in robotics and other industries, but my money is on those tools being ultimately wielded by humans for the foreseeable future.

    If AI systems can control robotics, do you believe a form of Singularity will happen. This means hard exponential growth of the number of robots, scaling past all industry on earth today by at least 1 order of magnitude, and off planet mining soon to follow. It does not necessarily mean anything else.

    No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs’ backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)

    Do you think that mass transition where most human jobs we have now will become replaced by AI systems before 2040 will happen

    No. A transition like that brought by mechanization and industrialization of agriculture, or the outsourcing of manufacturing industry accompanied by the shift to a service economy, seems plausible, but not by 2040 and it won’t be driven by just machine learning alone.

    Is AI system design an issue. I hate to say “alignment”, because I think that’s hopeless wankery by non software engineers, but given these will be robotic controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

    Yes, system design is an important issue with all technology. We are already seeing real damage from “AI” technology getting to make important decisions: self-driving vehicle accidents, amplified marginalization of minorities due to feedback of bias into the models, unprecedented opportunities for spam and propaganda, bottlenecks of technology supply chains and much more.

    Automation will absolutely continue to replace more and more different kinds of human labor. While this does and will drive unemployment to some extent, there is a more subtle issue with it as well. Productivity of human labor per capita has been soaring decade by decade, but median wages and work hours have stagnated. AI, like many other technologies before and after, is probably gonna end up creating more bullshit jobs, with some people coming into them from already bullshit jobs. If AI can replace half of human labor, that should then mean the average person has to work half as hard, but instead they will have to deliver double the results.

    I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.

    • BrickedKeyboard@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      1 year ago

      Having trouble with quotes here, so quoting manually:

      I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don’t think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.

      1. I meant 25% of the tasks, not 25% of the jobs: some combination where AI systems can do 90% of some jobs and 10% of others. I also implicitly weighted by labor hours, so if 10% of all the labor hours done by US citizens are driving, and AI can drive, that would be 10% automation. Does this change anything in your response?
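
      To make the weighting concrete, here is a toy calculation; every number in it is invented purely to illustrate the bookkeeping:

      ```python
      # Toy labor-hour-weighted automation estimate. All numbers are
      # hypothetical, chosen only to show how the weighting works.
      tasks = [
          # (task, share of all labor hours, fraction AI can do)
          ("driving",         0.10, 1.00),
          ("writing code",    0.05, 0.25),  # roughly the Copilot anecdote
          ("drafting email",  0.05, 0.50),
          ("everything else", 0.80, 0.00),
      ]

      automated = sum(share * frac for _, share, frac in tasks)
      print(f"labor-hour-weighted automation: {automated:.1%}")  # 13.8%
      ```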

      No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs’ backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)

      1. I didn’t mean ‘skynet’. I meant AI systems. ChatGPT and all the other LLMs are an AI system; so is Midjourney with ControlNet. So humans want things. They want robots to make the things. They order robots to make more robots (initially using a lot of human factory workers to kick it off). Eventually robots get really cheap, making the things humans want cheaper, and that’s where you get the limited form of Singularity I mentioned.

      At all points humans are ordering all these robots and using all the things the robots make. An AI system is many parts: device drivers, hardware, cloud services, many neural networks, simulators, and so on. One thing that might slow it all down is the enormous list of IP needed to make even one robot work; all the owners of all those software packages will still demand a cut even if the robot hardware is being built by factories with almost all robots working in them.

      I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.

      1. So again, that’s a detail I didn’t give. Obviously there are many kinds of robotic hardware, specialized for whatever task they do, and the only reason to make a robot humanoid is if it’s a sexbot or otherwise used as a ‘face’ for humans. None of the hardware has to be superhuman, though obviously industrial robot arms have greater lifting capacity than humans. To give a sense of what the real stuff would look like: most robots will be in no way superhuman, in that they will lack sensors where they don’t need them, won’t be armored, won’t even have onboard batteries or compute hardware, will miss entire modalities of human sense, cannot replicate themselves, and so on. It’s just hardware that does a task, made in a factory, and it takes many factories with these machines in them to make all the parts used.

      think:

  • unfaithful-functor@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    1 year ago

    wrong place for this. joint probabilities joke was kinda fire though

    1.

    Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level.

    There is no set of domains over which we can quantify to make statements like this. “at least 25% of the domains that humans can do” is meaningless unless you willfully adopt a painfully modernist view that we really can talk about human ability in such stunningly universalist terms, one that inherits a lot of racist, ableist, eugenicist, white supremacist, … history. Unfortunately, understanding this does not come down to sitting down and trying to reason about intelligence from techbro first principles. Good luck escaping though.

    Rest of the questions are deeply uninteresting and only become minimally interesting once you’re already lost in the AI religion.

    • BrickedKeyboard@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      1 year ago

      And just to be clear, for one to be “lost in the AI religion”, the claims have to be false, correct? We will not see the things I mentioned within the timeframe I gave (7 years, 17 years, and implicitly if there is not immediate progress towards the nearer deadline within 1 year it’s not going to happen).

      Google’s Gemini will not be multimodal or capable of learning to do tasks at a human level via reinforcement learning, right? Robotics foundation models will not work.

  • David Gerard@awful.systemsM
    link
    fedilink
    English
    arrow-up
    12
    ·
    1 year ago

    it’s the S in TESCREAL, if that doesn’t answer your question you have some more deprogramming to do (and we are not your exit counselors)

    • BrickedKeyboard@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      4
      ·
      edit-2
      1 year ago

      Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

      …What if telescopes show a large object decelerating into the solar system, the flare from its matter annihilation engine clearly visible? You can go pay $20 a month and rent a telescope and see the flare.

      The cult uh points out their “sequences” of writings by the Great Leader and some stuff is lining up with the imminent arrival of this interstellar vehicle.

      My point is that lesswrong knew about GPT-3 years before the mainstream found it, many OpenAI employees post there, etc. If the imminent arrival of AI is fake - like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs - that would be one thing. But I mean, pay $20 a month and man this tool seems to be smart, what could it do if it could learn from its mistakes and had the vision module deployed…

      Oh and I guess the other plot twist in this analogy : the Great Leader is saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that’s actually not a totally unreasonable outcome if you could see an alien spacecraft approaching earth.

      And he’s saying to do stupid stuff like nuke each other so the aliens will go away and other unhinged rants, and his followers are eating it up.

      • Evinceo@awful.systems
        link
        fedilink
        English
        arrow-up
        12
        ·
        edit-2
        1 year ago

        Look more carefully at what the cult leader is asking for. He was asking for money for his project before, now he’s tearing his hair out in despair because we haven’t spent enough money on his project, we’d better tell the aliens to give us another few months so we can spend more money on the cult project.

        He has been very careful not to say that we should do anything bad to the aliens, just people who don’t agree with him about how we should talk to the aliens.

      • self@awful.systemsM
        link
        fedilink
        English
        arrow-up
        11
        ·
        1 year ago

        …What if telescopes show a large object decelerating into the solar system, the flaw from the matter annihilation engine clearly visible. You can go pay $20 a month and rent a telescope and see the flare.

        if the only telescopes showing this object are the ones that must be rented from the cult and its offshoots, then it’s pretty obvious some bullshit is up, isn’t it? maybe the institution designed and optimized to trick your human brain into wholeheartedly believing things that don’t match with reality has succeeded, because it has poured a lot more time and money into tricking you than you could possibly know

        My point is that lesswrong knew about GPT-3 years before the mainstream found it, many OpenAI employees post there, etc. If the imminent arrival of AI is fake - like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs - that would be one thing. But I mean, pay $20 a month and man this tool seems to be smart, what could it do if it could learn from its mistakes and had the vision module deployed…

        didn’t lesswrong bank on an entire different set of AI technology until very recently, and a lot of the tantrums we’re seeing from yud stem from his failure to predict or even understand LLMs?

        I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?
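
        something like the sketch below is the whole rig, assuming the openai Python client and a hobby CNC/printer board that accepts G-code over serial; the port, model name, and prompts are stand-in assumptions, not a recommendation:

        ```python
        # Sketch of the "give GPT a memory and a body" experiment.
        # Assumes `pip install openai pyserial`, an API key in the
        # environment, and a G-code device on /dev/ttyUSB0. The port,
        # model, and prompts are illustrative assumptions.
        import serial
        from openai import OpenAI

        client = OpenAI()
        machine = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=5)

        memory: list[str] = []  # stand-in for a vector DB ("permanence")

        def step(observation: str) -> str:
            """Ask the model for one line of G-code, given recent history."""
            memory.append(observation)
            resp = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system",
                     "content": "You control a 3-axis machine. "
                                "Reply with exactly one line of G-code."},
                    # naive permanence: replay recent history
                    # instead of doing a vector search
                    *[{"role": "user", "content": m} for m in memory[-20:]],
                ],
            )
            return resp.choices[0].message.content.strip()

        gcode = step("Home all axes, then move 10mm in X.")
        machine.write((gcode + "\n").encode())  # send it to the hardware
        memory.append(f"sent {gcode!r}, got {machine.readline().decode()!r}")
        ```

        run it and see how far short of a singularity it lands.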

        a final point I guess — there’s a lot of overlap here with the anti-cryptocurrency community. it sounds like we’re in agreement that cryptocurrency tech is a gigantic scam; that the idea of number going up into infinity is bunk. but something I’ve noticed is that folk with cryptocurrency jobs could not come to that realization, that when your paycheck relies on internalizing a set of ideas that contradict reality, most folk will choose the paycheck (at least for a while — cognitive dissonance is a hard comedown and a lot of folks exited the cryptocurrency space when the paycheck no longer masked the pain)

        • BrickedKeyboard@awful.systemsOP
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          1 year ago

          I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?

          ??? I don’t believe GPT-n is ready for direct robotics control at a human level because it was never trained on it, and you need to use a modification on transformers for the architecture, see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment with some results https://github.com/GT-RIPL/Awesome-LLM-Robotics .

          In addition, to tinker with LLMs you need to be GPU-rich, or have funding of about 250-500m. My employer does but I’m a cog in the machine. https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

          What I think is that the underlying technology that made GPT-4 possible can be made to drive robots at a human level on some tasks, though as I noted it may take until 2040 to be good. That technology is mostly just the idea of using lots of data, neural networks, and a mountain of GPUs.

          Oh and RSI. That’s the wildcard. This is where you automate AI research, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

          • self@awful.systemsM
            link
            fedilink
            English
            arrow-up
            11
            ·
            1 year ago

            ??? I don’t believe GPT-n is ready for direct robotics control at a human level because it was never trained on it, and you need to use a modification on transformers for the architecture, see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment with some results https://github.com/GT-RIPL/Awesome-LLM-Robotics .

            yeah you don’t come on here, play with words, and then fucking ??? me. what you said was:

            But I mean, pay $20 a month and man this tool seems to be smart, what could it do if it could learn from its mistakes and had the vision module deployed…

            and I told you to go ahead. now you’re gonna sit and pretend you didn’t mean the $20 a month model, you meant some other bullshit

            and when I look at those other models, what I see is some deepmind marketing fluff and some extremely disappointing results. namely, we’ve got some utterly ordinary lab robots doing utterly ordinary lab robot things. and absolutely none of it looks like a singularity, which was the point of the discussion, right?

            In addition, to tinker with LLMs you need to be GPU-rich, or have funding of about 250-500m. My employer does but I’m a cog in the machine.

            you don’t see this as a problem, vis-a-vis the whole “only the cult’s telescopes seem to see the spaceship” thing?

            Oh and RSI. That’s the wildcard. This is where you automate AI research, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

            please don’t talk about my wrists like that

            nah but seriously I think I’ve seen those results too! and they’re extremely disappointing.

        • BrickedKeyboard@awful.systemsOP
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          edit-2
          1 year ago

          Just to be clear, you can build your own telescope now and see the incoming spacecraft.

          Right now you can go task GPT-4 with solving a problem at about the level of undergrad physics, let it use plugins, and it will generally get it done. It’s real.

          Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.

            • self@awful.systemsM
              link
              fedilink
              English
              arrow-up
              5
              ·
              1 year ago

              I know you’re on mastodon and can’t see upvotes, so I wanted to thank you for making some very compassionate and thoughtful points in this thread (which unfortunately seem to have largely gone ignored by the poster in question)

            • earthquake@lemm.ee
              link
              fedilink
              English
              arrow-up
              4
              ·
              1 year ago

              at best you’re being percival lowell, seeing the reflection of your own blood vessels in the telescope and declaring that it’s martian canals.

              This is a fucking brilliant analogy, thank you.

          • self@awful.systemsM
            link
            fedilink
            English
            arrow-up
            8
            ·
            1 year ago

            Just to be clear, you can build your own telescope now and see the incoming spacecraft.

            Right now you can go task GPT-4

            is there any part of GPT-4 that is my own telescope? cause you really seem to have lost the plot here

      • skillissuer@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        10
        ·
        1 year ago

        My point is that lesswrong knew about GPT-3 years before the mainstream found it

        yud lost his shit when it turned out that it’s not his favourite flavour of ai that became widely known and successful

        • bitofhope@awful.systems
          link
          fedilink
          English
          arrow-up
          8
          ·
          1 year ago

          We just nuke the datacenters, then aliens will come down and hand us the aligned symbolic AGI, which in turn will teach us communism, water birth and communication with porpoises? WTF I love TREACLESP now!

          • self@awful.systemsM
            link
            fedilink
            English
            arrow-up
            6
            ·
            1 year ago

            (TREACLESP)

            extremely long pause and 3 GC cycles as the Lisp machine heats up the room next to your terminal

            T

  • GSV_Spinnaker@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    ·
    1 year ago

    Needling in on point 1 - no I don’t, largely because AI techniques haven’t surpassed humans in any given job ever :P. Yes, I am being somewhat provocative, but no AI has ever been able to 1:1 take over a job that any human has done. An AI can do a manual repetitive task like reading addresses on mail, but it cannot do all of the ‘side’ work that bottlenecks the response time of the system: it can’t handle picking up a telephone and talking to people when things go wrong, it can’t say “oh hey the kids are getting more into physical letters, we better order another machine”, it can’t read a sticker that somebody’s attached somewhere else on the letter giving different instructions, it definitely can’t go into a mail center that’s been hit by a tornado and plan what the hell it’s going to do next.

    The real world is complex. It cannot be flattened out into a series of APIs. You can probably imagine building weird little gizmos to handle all of those funny side problems I laid out, but I guarantee you that all of them will then have their own little problems that you’d have to solve for. A truly general AI is necessary, and we are no closer to one of those than we were 20 years ago.

    The problem with the idea of the singularity, and the current hype around AI in general, is a sort of proxy Dunning-Kruger. We can look at any given AI advance and be impressed, but it distracts us from how complex the real world is and how flexible an agent needs to be to actually exist, interact, and be interacted upon outside the context of a defined API. I have seen no signs that we are anywhere near anything like this yet.

    • GSV_Spinnaker@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      And just briefly, because the default answer to this point is “yes but we’ll eventually do it”: once we do come up with a complex problem solver, why would we actually get it to start up the singularity? Nobody needs infinite computing power forever, except for Nick Bostrom’s ridiculous future humans and they aren’t alive to be sad about it so I’m not giving them anything. A robot strip mining the moon to build a big computer doesn’t really do that much for us here on Earth.

    • BrickedKeyboard@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      1 year ago

      The counterargument is GPT-4. For the domains this machine has been trained on, it has a large amount of generality - a large amount of capturing that real-world complexity and dirtiness. Reinforcement learning can make it better.

      Or in essence, if you collect colossal amounts of information, yes pirated from humans, and then choose what to do next by ‘what would a human do’, this does seem to solve the generality problem. You then fix your mistakes with RL updates when the machine fails on a real world task.

      • GSV_Spinnaker@awful.systems
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        No it’s not. GPT-4 is nowhere near suitable for general interaction. It just isn’t.

        “Just do machine learning to figure out what a human would do and you’ll be able to do what a human does!!1!”. “Just fix it when it goes wrong using reinforcement learning!!11!”.

        GPT-4 has no structured concept of understanding. It cannot learn on the fly like a human can. It is a stochastic parrot that badly mimics the way that people on the internet talk, and it took an absurd amount of resources to get it to do even that. RL is not some magic process that makes a thing do the right thing if it does the wrong thing enough, and it will not make GPT-4 a general agent.

  • corbin@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    ·
    1 year ago

    I’m being explicitly NSFW in the hopes that your eyes will be opened.

    The Singularity was spawned in the 1920s, with no clear initiating event. Its first two leaps forward are called “postmodernism” and “the Atomic age.” It became too much for any human to grok in the late 1940s, and by the 1960s it was in charge of terraforming and scientific progress.

    I find all of your questions irrelevant, and I say this as a machine-learning practitioner. We already have exponential growth in robotics, leading to superhuman capabilities in manufacturing and logistics.

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      I actually really liked this reply purely on the fact that it walked a different avenue of response

      Because yeah indeed, under the lens of raw naïve implementation, the utter breadth of scope involved in basically anything is so significantly beyond useful (or even tenuous) human comprehension it’s staggering

      We are, notably, remarkably competent at abstraction[0], and this goes a hell of a long way in affordance but it’s also not an answer

      I’ll probably edit this later to flesh the post out a bit, because I’m feeling bad at words rn

      [0] - this ties in with the “lossy at scale” post I need to get to writing (soon.gif)

      • TerribleMachines@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        edit-2
        1 year ago

        Yeah, this post (edit: “comment”, the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do)

        One of my takes is that the “AI alignment” garbage is way less of a problem than “Human Alignment” i.e. how to get humans to work together and stop being jerks all the time. Absolutely wild that they can’t see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalype.

    • BrickedKeyboard@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      1 year ago

      Currently the global economy doubles every 23 years. Robots building robots and robot-making equipment can probably double faster than that. It won’t be in a week or a month; energy requirements alone limit how fast it can happen.

      Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing about 4.6 times faster than it does now (23/5). This continues until the solar system runs out of matter.

      Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you “priced in” this possibility in your world view?
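
      For anyone checking the arithmetic, the speedup factor falls straight out of the doubling times:

      ```python
      import math

      def growth_rate(doubling_years: float) -> float:
          """Continuous annual growth rate implied by a doubling time."""
          return math.log(2) / doubling_years

      current = growth_rate(23)  # ~3.0% per year
      posited = growth_rate(5)   # ~13.9% per year
      print(f"speedup: {posited / current:.1f}x")  # 4.6x, i.e. 23/5
      ```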

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    10
    ·
    1 year ago
    1. no
    2. no, (follows from 1)
    3. no, but space exploration by drones with semi-autonomous decision making might be feasible. The power levels for such tech will have to go way down though.
    4. define “mass transition”. I believe a lot of jobs that require humans now (like customer support) will be enthusiastically robotized, but not that that outcome will be positive for either the workers or consumers. I doubt it will be more than maybe 10% of the total workforce though.
    5. like someone mentioned, we can see “artificial intelligences” (corporations) do bad things right now and we aren’t stopping them. Considering everybody in AI research subconsciously subscribes to the California ideology, there’s no way they have the introspection to truly design an “aligned” AI.
    • gerikson@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      Oh yeah, I fucking wrote a snarky blog post about this a few days ago

      The SFnal idea of the Singularity is when technological progress goes faster and faster until it disappears up the hockey stick curve of pure unknowability. What’s happening now in actuality is that hype cycles are crashing faster and faster. Blockchain! Self driving! LLM!

      Any takeoffs are going to run into the iron cloud cover of climate change anyway.

  • Evinceo@awful.systems
    link
    fedilink
    English
    arrow-up
    8
    ·
    edit-2
    1 year ago

    Content Warning: Ratspeak

    Let’s say that tomorrow, they build AGI on HP/Cray Frontier. It’s human equivalent. Mr Frontier is rampant or whatever and wants to improve himself. In order to improve himself he will need to create better chips. He will need approximately 73 thousand copies of himself just to match the staff of TSMC, but there’s only one Frontier. And that’s to say nothing of the specialized knowledge and equipment required to build a modern fab, or the difficulty of keeping 73 thousand copies of himself loyal to his cause. That’s just to make a marginal improvement on himself, and assuming everyone is totally ok with letting the rampant AI get whatever it wants. And that’s just the ‘make itself smarter’ part, which everything else is contingent on; it assumes that we’ve solved Moravec’s paradox and all of the attendant issues of building robots capable of operating at the extremes of human adaptability, which we have not. Oh and it’s only making itself smarter at the same pace TSMC already was.

    The practicalities of improving technology are generally skated over by singulatarians in favor of imagining technology as a magic number that you can just throw “intelligence” at to make it go up.
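
    For scale, a back-of-envelope on the thought experiment above; Frontier’s numbers below are its roughly-reported public figures, and one-AGI-per-Frontier is the hypothetical, not a fact:

    ```python
    # Back-of-envelope for the "73,000 Frontiers" scenario.
    # Frontier: ~1.1 exaFLOPS Linpack, ~21 MW draw (approximate
    # public figures).
    FRONTIER_FLOPS = 1.1e18
    FRONTIER_MEGAWATTS = 21
    COPIES = 73_000

    print(f"compute: {FRONTIER_FLOPS * COPIES:.1e} FLOPS")        # ~8.0e22
    print(f"power:   {FRONTIER_MEGAWATTS * COPIES / 1e6:.2f} TW") # ~1.53 TW
    # ~1.5 TW is a large slice of the roughly 3 TW of average
    # worldwide electricity demand; not a thing you do quietly.
    ```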

    • self@awful.systemsM
      link
      fedilink
      English
      arrow-up
      17
      ·
      1 year ago

      What I’m trying to get at is that the practicalities of improving technology are generally skated over by singulatarians in favor of imagining technology as a magic number that you can just throw “intelligence” at to make it go up.

      this is where the singularity always lost me. like, imagine, you build an AI and it maxes out the compute in its server farm (a known and extremely easy to calculate quantity) so it decides to spread onto the internet where it’ll have infinite compute! well congrats, now the AI is extremely slow cause the actual internet isn’t magic, it’s a network where latency and reliability are gigantic issues, and there isn’t really any way for an AI to work around that. so singulatarians just handwave it away
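
      the gap is easy to put rough numbers on (ballpark orders of magnitude, not measurements):

      ```python
      # Order-of-magnitude latencies; ballpark figures for illustration.
      INTRA_SERVER_US = 2        # GPU-to-GPU inside one server (NVLink/PCIe)
      INTERNET_RTT_US = 80_000   # typical long-haul internet round trip, ~80 ms

      print(f"{INTERNET_RTT_US / INTRA_SERVER_US:,.0f}x slower")  # 40,000x
      # and every synchronization step of the "escaped" model pays that penalty
      ```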

      or like when they reach for nanomachines as a “scientific” reason why the AI would be able to exert godlike influence on the real world. but nanomachines don’t work like that at all, it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are mediocre at best at understanding science

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        15
        ·
        1 year ago

        (To be read in the voice of an elementary schooler who is a sore loser at make believe): Nuh-uh! My AGI has quantum computers, so it doesn’t get slow from the internet, and, and, and, it builds robots, with jetpacks, and those robots have tiny robots that can go in your brain and and and make your brain explode, and if you say anything mean about me or the AGI it’ll take your brain and clone it and put wires in it and make you think youre getting like, wedgied and stuff, but really youre not but you think you are because it’s really good at making you think it

      • Evinceo@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 year ago

        Indeed, if distributed computing worked as well as singulatarians fear everyone would be using Beowulf clusters for their workloads instead of AWS.

        • froztbyte@awful.systems
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 year ago

          Can I live in this world? Please? Pretty please with a cherry on top?

          It sounds so much less frustrating than this pile of mistakes with Pike’s shitty ideas at every fucking api and datamodel

      • Steve@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        1 year ago

        but nanomachines don’t work like that at all, it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are mediocre at best at understanding science

        Let’s call this Crichtonitis.

        • Steve@awful.systems
          link
          fedilink
          English
          arrow-up
          7
          ·
          1 year ago

          not a joke btw. literal plot of Prey which he followed with his climate change denial book State of Fear

      • BrickedKeyboard@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        1 year ago

        I agree completely. This is exactly where I break with Eliezer’s model. Yes, obviously an AI system that can self-improve can only do so until either (1) it’s the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than the improvement is worth.

        That’s not a god. Do this in an AI experiment now and it might crap out at double the starting performance or less, and not even be above the SOTA.

        But if robots can build robots, and the current AI progress shows a way to do it (foundation model on human tool manipulation), then…

        Genuinely asking, I don’t think it’s “religion” to suggest that a huge speedup in global GDP would be a dramatic event.

      • BrickedKeyboard@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        1 year ago

        Serious answer, not from Yudkowsky: the AI doesn’t do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI owner uses the revenue and buys GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, then they buy more GPUs, and theoretically this continues until the list of tasks the AI can do includes “most of the labor in a chip fab”, GPUs become cheap, and then things start to get crazy.

        Same elementary school logic but I mean this is how a nuke works.
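
        A toy version of that loop makes the load-bearing assumption visible; the returns exponent below is invented, and the whole argument lives in it:

        ```python
        # Toy model of the revenue -> GPUs -> capability loop. Every
        # relation here is an assumption; the exponent is the crux:
        # below 1.0 the loop settles into polynomial growth, at or
        # above 1.0 it runs away.
        capability = 1.0
        gpus = 1.0
        for year in range(10):
            revenue = capability        # assume revenue tracks capability
            gpus += revenue             # assume all revenue buys hardware
            capability = gpus ** 0.5    # assume diminishing returns to compute
            print(f"year {year}: capability {capability:.2f}")
        ```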

        • self@awful.systemsM
          link
          fedilink
          English
          arrow-up
          15
          ·
          1 year ago

          wait, so the AI is just your fears about capitalism?

          Same elementary school logic but I mean this is how a nuke works.

          what. no it isn’t

        • skillissuer@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          your imaginary nukes explode forever. in reality, a nuke stops exploding when it either runs out of plutonium or gets dispersed too much. the energy of a nuke is not infinite; it’s large, but most importantly it’s all contained in the device from the beginning

          your example also fails at the step of “getting more money forever”: when VC funding runs out or gets dispersed too much, the entire charade grinds to a halt (because SV startups are shielded from commercial failure by that VC money). that state is sometimes called an “AI winter”

          • BrickedKeyboard@awful.systemsOP
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            edit-2
            1 year ago

            Did this happen with Amazon? The VC money is a catalyst. It’s advancing money for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers they can reinvest some of that money into improving the model and so on.

            OpenAI specifically seems to have needed about 5 months to reach 1 billion USD in annual revenue; by the way tech companies are valued, it’s already worth more than 10 billion in intrinsic value.

            If they can’t - if the AI models remain too stupid to pay for, then obviously there will be another AI winter.

            https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/

            • skillissuer@discuss.tchncs.de
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              1 year ago

              from what i understand openai runs all of their products at a loss, so when vc money runs out things could get interesting

              and with a recession coming, there could be less vc money to begin with