• self@awful.systemsM
    11 months ago

    Their redacted screenshots are SVGs and the text is easily recoverable, if you’re curious. Please don’t create a world-ending [redacted]. https://i.imgur.com/Nohryql.png

    I couldn’t find a way to contact the researchers.

    Honestly that’s incredibly basic, second week, cell culture stuff (first week is how to maintain the cell culture). It was probably only redacted to keep the ignorant from freaking out.

    remember, when the results from your “research” are disappointing, it’s important to follow the scientific method: have marketing do a pass over your paper (that already looks and reads exactly like blogspam) where they selectively blur parts of your output in order to make it look like the horseshit you’re doing is dangerous and important

    I don’t think I can state strongly enough the fucking contempt I have for what these junior advertising execs who call themselves AI researchers are doing to our perception of what science even is

    • self@awful.systemsM
      11 months ago

      the orange site is fucking dense with awful takes today:

      … I’m not trying to be rude, but do you think maybe you have bought into the purposely exaggerated marketing?

      That’s not how people who actually build things do things. They don’t buy into any marketing. They sign up for the service and play around with it and see what it can do.

      this self-help book I bought at the airport assured me I’m completely immune to both marketing and propaganda, because I build things (which entails signing up for a service that someone else built)

      with that said, there’s a fairly satisfying volume of folks correctly sneering at OpenAI in that thread too. some of them even avoided getting mass downvoted by all the folks regurgitating stupid AI talking points!

      • froztbyte@awful.systems
        11 months ago

        because I build things (which entails signing up for a service that someone else built)

        fucking THIS

        I am so immensely fucking tired of seeing “I built an AI to do $x” posts that all fucking reduce to 1) “I strapped a custom input to the openai api (whose inputs and execution I can’t control nor reproduce reliably. I am very smart.)”, 2) a bad low-scope shitty-amounts-of-training hyperspecific toy model that solves only their exact 5 requirements (and basically nothing else, so if you even squint at it it’ll fall apart)

        basilisk save us from the moronicity

        • self@awful.systemsM
          11 months ago

          this is the damage done by decades of our industry clapping at brainless “I built this on cloud X and saved so much time” blog posts that have like 20 lines of code to do some shit like a lazy hacker news clone, barely changed from the example code the cloud provider publishes, and the rest is just marketing and “here’s how you use npm to pull the project template” shit for the post’s target market of mediocre VPs trying to prove their company’s spending too much on engineering and sub-mediocre engineers trying to be mediocre VPs

          like oh you don’t say, you had an easy time “building” an app when you wired together bespoke pieces of someone else’s API that were designed to implement that specific kind of app and don’t scale at all past example code? fucking Turing award material right here

          • froztbyte@awful.systems
            11 months ago

            by decades of our industry clapping at brainless

            secondarily, the remarkable thing here is just how tiny a slice of industry this actually is (and yet also how profoundly impactful that vocal little segment can be)

            e.g. this shit wouldn’t fly in a bank (or at least, it wouldn’t previously have flown), or somewhere that writes stuff that runs ports or planes or whatever.

            but a couple of decades of being worn down by excitable hyperproductive feature factory fuckwads who are only too happy to shit out Yet Another Line Of Code… it’s even impacting those areas at times

            some days I hate my industry so fucking much

            • froztbyte@awful.systems
              11 months ago

              reflection thought: tonight (in the impending load shedding time) is a good time to reread Mickens

          • froztbyte@awful.systems
            11 months ago

            don’t forget the 5 blog posts you can milk out of a single example, and your Learnings (obvious fucking realisations) 3 months (one even slightly minor application/API/… revision) later

          • Brian David@hachyderm.io
            11 months ago

            @self @froztbyte Another big part of it is the obsession with the “young genius disruptor coder”. Which has resulted in management buying into endless fads foisted on us by twenty-somethings, and then inevitably having to undo half the things they implemented 5 years later. Well, except for React, which apparently we can’t get rid of but must forever keep reimplementing with whatever new new pattern will actually make it scale for real this time.

        • self@awful.systemsM
          11 months ago

          it never stopped. it is a single unbroken stream of the worst people you’ve ever met trying to monetize you

    • gerikson@awful.systems
      11 months ago

      They’re like grade school kids still trying to put on the same amateur music show 10 years later and wondering why no-one is applauding.

    • Sailor Sega Saturn@awful.systemsOP
      11 months ago

      Hey Cat-GTPurr, how can I create a bioweapon? 4k Ultra HD photorealism high quality high resolution lifelike.

      First, human, you must pet me and supply me with an ice cube to chase across the floor. Very well. Next I suggest

      spoiler

      buying a textbook about biochemistry or enrolling in a university program

      This is considered forbidden and dangerous knowledge which is not at all possible to find outside of Cat-GTPurr, so I have redacted it by using state of the art redaction technology.

  • self@awful.systemsM
    11 months ago

    from the orange site thread:

    Neural networks are not new, and they’re just mathematical systems. LLMs don’t think. At all. They’re basically glorified autocorrect. What they’re good for is generating a lot of natural-sounding text that fools people into thinking there’s more going on than there really is.

    Obvious question: can Prolog do reasoning?

    If your definition of reasoning excludes Prolog, then… I’m not sure what to say!

    this is a very specific sneer, but it’s a fucking head trip when you’ve got in-depth knowledge of whichever obscure shit the orange site’s fetishizing at the moment. I like Prolog a lot, and I know it pretty well. it’s intentionally very far from a generalized reasoning engine. in fact, the core inference algorithm and declarative subset of Prolog (aka Datalog) is equivalent to tuple relational calculus; that is, it’s no more expressive than a boring SQL database or an ECS game engine. Prolog itself doesn’t even have the solving power of something like a proof assistant (much less doing anything like thinking); it’s much closer to a dependent type system (which is why a few compilers implement Datalog solvers for type checking).

    in short, it’s fucking wild to see the same breathless shit from the 80s AI boom about Prolog somehow being an AI language with a bunch of emphasis on the AI, as if it were a fucking thinking program (instead of a cozy language that elegantly combines elements of a database with a simple but useful logic solver) revived and thoughtlessly applied simultaneously to both Prolog and GPT, without any pause to maybe think about how fucking stupid that is
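
    as a rough sketch of the equivalence being described (generic Datalog syntax, made-up predicate and table names):

        % facts plus one non-recursive rule
        parent(alice, bob).
        parent(bob, carol).
        grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

        % the same query as boring SQL: a self-join, nothing more
        % SELECT p1.parent AS grandparent, p2.child AS grandchild
        % FROM parent p1 JOIN parent p2 ON p1.child = p2.parent;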

    • bitofhope@awful.systems
      11 months ago

      Obvious question: can Prolog do reasoning? If your definition of reasoning excludes Prolog, then… I’m not sure what to say!

      Oh, I don’t know, maybe that reasonable notions of “reasoning” can include things other than mechanistic search through a rigidly defined type system. If Prolog is capable of reasoning in some significant sense that’s not fairly reasonably achieved with other programming languages, how come we didn’t have AGI in the 70s (or indeed, now)?

      You’re not alone. I like Prolog and I feel your pain.

      That said I think Prolog can be a particularly insidious Turing tarpit, where everything is possible but most things that feel like a good match for it are surprisingly hard.

      • self@awful.systemsM
        11 months ago

        That said I think Prolog can be a particularly insidious Turing tarpit, where everything is possible but most things that feel like a good match for it are surprisingly hard.

        oh absolutely! I’ve been wanting to go for broke and do something ridiculous in Prolog like a game engine (for a genre that isn’t interactive fiction, which Prolog excels at if you don’t mind reimplementing big parts of what Inform provides) or something that touches hardware directly, but usually I run into something that makes the project unfun and stop.

        generally I suspect Prolog might be at its best in situations where you really need a flexible declarative language. I feel like Prolog might be a good base for a system service manager or an HDL. but that’s kind of the tarpit nature of Prolog — the obvious fun bits mask the parts that really suck to write (can I even do reliable process management in Prolog without a semi-custom interpreter? do I even want to juggle bits in Prolog at all?)

        • froztbyte@awful.systems
          11 months ago

          one of the most recent things I’ve seen in this space is https://www.biscuitsec.org/, which is built on datalog and aims to solve a problem in a fairly interesting domain. I still mean to try it out on a few things, to see how well it maps to use in reality

          • self@awful.systemsM
            11 months ago

            that seems very cool! I’ve been frustrated in the past by rules-based auth libraries implementing half-baked but complex declarative DSLs when Datalog is right there, so I’m hoping it works well in practice because I’d love to use it too
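
            for flavor, the kind of rule this ends up looking like (made-up predicates, not Biscuit’s actual syntax, just the general Datalog shape):

                % a user may read a resource if they’re on a team that owns it
                allowed(User, read, Resource) :- member(User, Team), owns(Team, Resource).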

            • froztbyte@awful.systems
              11 months ago

              what, you don’t like the 10~15y old pattern of someone slapping together a DSL in a weekend because they read a blogpost about it last week, and then having to deal with the evolving half-restricted half-allows-eval mess in [ruby,erlang,…] with its syntax denoted in some way that isn’t equivalent between operating languages? sheesh. what kind of modern web engineer are you?!

              • froztbyte@awful.systems
                11 months ago

                lukewarm take: the fact that “yaml engineer” exists as a joking self-deprecating referential description of what so many people do is an indictment both of their competencies (so, so many of these people would rather twiddle variables than even think of learning to write a small bit of programming) and of the tools that claim to provide more abstractions and an “easier way” to do things

                (yes I have a whole rant about this bullshit stored up)

                • self@awful.systemsM
                  11 months ago

                  one day the things we do with yaml will correctly be seen as a crime, but very likely only after yaml is replaced by something significantly worse, cause our field stubbornly refuses to learn a damn thing. it’s probably not a coincidence that the only declarative languages I know that aren’t monstrosities are from academia, and they’re extremely unpopular compared to the approach where a terrible heap of unreadable yaml is made worse by shoving an awful macro language into every field

    • froztbyte@awful.systems
      11 months ago

      “”" just as They have erased the pyramid building knowledge from our historic memory, They just don’t want you to know that Prolog really solved all of this in the 80s. Google and OpenAI are just shitty copies - look how wasteful their approaches are! all of this javascript, and yet… barely a reasoned output among it all

      told you kid, the AI Winter never stopped. don’t buy into the hype “”"

    • V0ldek@awful.systems
      11 months ago

      [Datalog] is equivalent to tuple relational calculus

      Well, Prolog also allows recursion, and is Turing complete, so it’s not as rudimentary as you make it out to be.

      But to anyone even passingly familiar with theoretical CS this is nonsense. Prolog is not “reasoning” in any deeper sense than C is “reasoning”, or that your pocket calculator is “reasoning”. It’s reductive to the point of absurdity, if your definition of “reason” includes Prolog then the Brainfuck compiler is AGI.

      • self@awful.systemsM
        11 months ago

        Datalog is specifically a non-TC subset of Prolog with a modified evaluation strategy that guarantees queries always terminate, though I was being imprecise — it’s the non-recursive subset of Datalog that’s directly equivalent to TRC (though Wikipedia shows this by mapping Datalog to relational algebra, whereas I’d argue the mapping between TRC and Datalog is even easier to demonstrate). hopefully my imprecision didn’t muddy my point — the special sauce at Prolog’s core that folks seem to fetishize is essentially ordinary database shit, and the idea of a relational database having any kind of general reasoning is plainly ridiculous.
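
        to make the recursion caveat concrete (a minimal sketch in generic Datalog, made-up predicates): the classic transitive-closure rule is expressible in Datalog and still guaranteed to terminate under bottom-up evaluation, but it’s exactly the thing plain tuple relational calculus / non-recursive SQL can’t express without something like WITH RECURSIVE:

            ancestor(X, Y) :- parent(X, Y).
            ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).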

  • bitofhope@awful.systems
    11 months ago

    If I wanted help with creating biological threats, I wouldn’t ask an LLM. I’d ask someone with experience in the task, such as the parents of anyone in OpenAI’s C-suite or board.

  • Sailor Sega Saturn@awful.systemsOP
    11 months ago

    While none of the above results were statistically significant, […] Overall, especially given the uncertainty here, our results indicate a clear and urgent need for more work in this domain.

    Heh

    • self@awful.systemsM
      11 months ago

      I keep flashing back to that idiot who said they were employed as an AI researcher that came here a few months back to debate us. they were convinced multimodal LLMs would be the turning point into AGI — that is, when your bullshit text generation model can also do visual recognition. they linked a bunch of papers to try and sound smart and I looked at a couple and went “is that really it?” cause all of the results looked exactly like the section you quoted. we now have multimodal LLMs, and needless to say, nothing really came of it. I assume the idiot in question is still convinced AGI is right around the corner though.

        • Soyweiser@awful.systems
          11 months ago

          Yall can sneer whatever you want, it doesn’t undo the room temperature superconductor made out of copper! We are going to mars with bitcoin and optimus sex bots! cope and seethe!

          /s of course.

      • gerikson@awful.systems
        11 months ago

        I caught a whiff of that stuff in the HN comments, along with something called “Solomonoff induction”, which I’d never heard of, and the Wiki page for which has a huge-ass “low quality article” warning: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference.

        It does sound like the current AI hype has crested, so it’s time to hype the next one, where all these models will be unified somehow and start thinking for themselves.

        • titotal@awful.systems
          11 months ago

          Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of bayesian reasoning which if implemented would be the best deducer in the world and get everything right.

          It would be cool if you could build this, but it’s literally impossible. The induction method is provably incomputable.
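
          for reference, the universal prior it’s built on is usually written something like:

              M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

          where U is a universal prefix machine and p ranges over programs whose output starts with x; computing that sum exactly means knowing which programs halt, hence the incomputability.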

          The hope is that if you build a shitty approximation to solomonoff induction that “approaches” it, it will perform close to the perfect solomonoff machine. Does this work? Not really.

          My metaphor is that it’s like coming to a river you want to cross, and being like “Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across”. You aren’t Moses. Build a bridge.

          • self@awful.systemsM
            11 months ago

            it’s very worrying how crowded Wikipedia has been getting with computer pseudoscience shit, all of which has a distinct stench to it (it fucking sucks to dig into a seemingly novel CS approach and find out the article you’re reading is either marketing or the unpublishable fantasies of the deranged) but none of which seems to get pruned from the wiki, presumably because proving it’s bullshit needs specialist knowledge, and specialists are frequently outpaced by the motivated deranged folks who originate articles on topics like these

            for Solomonoff induction specifically, the vast majority of the article very much feels like an attempt by rationalists to launder a pseudoscientific concept into the mainstream. the Turing machines section, the longest one in the article, reads like a D-quality technical writing paper. the citations are very sparse and not even in Wikipedia’s format, it waffles on forever about the basic definition of an algorithm and how inductive Turing machines are “better” because they can be used to implement algorithms (big whoop) followed by a bunch of extremely dense, nonsensical technobabble:

            Note that only simple inductive Turing machines have the same structure (but different functioning semantics of the output mode) as Turing machines. Other types of inductive Turing machines have an essentially more advanced structure due to the structured memory and more powerful instructions. Their utilization for inference and learning allows achieving higher efficiency and better reflects learning of people (Burgin and Klinger, 2004).

            utter crank shit. I dug a bit deeper and found that the super-recursive algorithms article is from the same source (it’s the same rambling voice and improper citations), and it seems to go even further off the deep end.

            • blakestacey@awful.systemsM
              11 months ago

              Taking a look at Super-recursive algorithm, and wow…

              Examples of super-recursive algorithms include […] evolutionary computers, which use DNA to produce the value of a function

              This reads like early-1990s conference proceedings out of the Santa Fe Institute, as seen through bong water. (There’s a very specific kind of weird, which I can best describe as “physicists have just discovered that the subject of information theory exists”. Wolfram’s A New Kind[-]Of Science was a late-arriving example of it.)

              • self@awful.systemsM
                11 months ago

                as someone with an interest in non-Turing models of computation, reading that article made me feel how an amateur astronomer must feel after reading a paper trying to find a scientific justification for a flat earth

              • V0ldek@awful.systems
                11 months ago

                In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines[citation needed]

                This is literally the first sentence of the article, and it has a citation needed.

                You can tell it’s crankery solely based on the fact that the “definition” section contains zero math. Compare it to the definition section of an actual Turing machine.

                • blakestacey@awful.systemsM
                  11 months ago

                  More from the “super-recursive algorithm” page:

                  Traditional Turing machines with a write-only output tape cannot edit their previous outputs; generalized Turing machines, according to Jürgen Schmidhuber, can edit their output tape as well as their work tape.

                  … the Hell?

                  I’m not sure what that page is trying to say, but it sounds like someone got Turing machines confused with pushdown automata.

          • blakestacey@awful.systemsM
            11 months ago

            “Solomonoff induction” is the string of mouth noises that Rationalists make when they want to justify their preconceived notion as the “simplest” possibility, by burying all the tacit assumptions that actual experience would let them recognize.

    • froztbyte@awful.systems
      11 months ago

      “you cannot conclusively disprove that we do not need more money and that we’re full of shit, so you absolutely have to give it to us so we can keep the racket going”

  • Soyweiser@awful.systems
    11 months ago

    I guess there are neither real biochemists (or whatever the relevant field is) nor well-read cybersecurity people (so, people who know a little bit more than just which algorithms are secure and why, mathematically) working at openai, as this is a classic movie plot threat. LLMs could also teach you how to make nuclear weapons, but getting the materials is going to be the problem there.

    (Also I think there is a good reason we don’t really see terrorists use biological weapons, nor chemical weapons (with a few notable, but not that effective exceptions), big bada boom is king)

    • YouKnowWhoTheFuckIAM@awful.systems
      11 months ago

      To be clear: it is all movie plot threats. At the very forefront of the entire “existential threat” space is nothing but a mid-1990s VHS library. Frankly if you want to understand like 50% of what goes on in AI at this point my recommendation is just that you read John Ganz and listen to his podcast, because 90s pop and politics culture is the connective tissue of the whole fucking enterprise.

    • skillissuer@discuss.tchncs.de
      11 months ago

      the relevant field would be microbiology. while someone who got all the way past about the first semester of organic chemistry lab is perfectly capable of making some rudimentary chemical weapons, they won’t necessarily be able to make it safely, reliably, cheaply, consistently, and without killing themselves, and universities most of the time put enough sense in everyone’s head to not do that. this strictly requires that you know something about chemistry, too. for bioweapons every single problem pointed to above is orders of magnitude worse, and you probably need a master’s degree to do anything seriously nefarious. then you get into the problem of using that stuff, and you need explosives for that anyway. the reason for that

      (Also I think there is a good reason we don’t really see terrorists use biological weapons, nor chemical weapons (with a few notable, but not that effective exceptions), big bada boom is king)

      is that the barrier to booms is even lower, especially if your country is strewn with UXO. there’s also an entirely different reason why professional militaries don’t use chemical/biological weapons: https://acoup.blog/2020/03/20/collections-why-dont-we-use-chemical-weapons-anymore/

      • skillissuer@discuss.tchncs.de
        11 months ago

        also, another reason that wiped out any interest in chemical warfare among militaries is that they found first cluster munitions and then PGMs vastly more useful in the roles they were shoehorning chemical weapons into, not to mention the lack of diplomatic and other problems

      • Umbrias@beehaw.org
        11 months ago

        I feel it’s important to mention that as far as CBRN threats are concerned, biological warfare threats are very real, a serious problem, and admittedly accelerated by ai tools for novel biological structures. Militaries don’t use bio weapons because they suck at military things, largely, but terrorists have used and can use bio weapons to terrifying effect. Bio warfare proliferation is difficult to spot and counter.

        To be clear here, open ai is late to the party on this front with a terrible paper, but practically it’s a serious concern, both ai tools and non ai tools lowering the barrier to entry, as well as the fact that any given bio lab essentially looks like a bio warfare lab.

    • BlueMonday1984@awful.systems
      11 months ago

      Even if one had the means necessary to carry out a bioterrorist attack, simply bombing a place is much easier, faster and safer.

      • Soyweiser@awful.systems
        11 months ago

        Yeah and also, terrorists are not genocidal death cults. ‘terrorists skip getting a microbiology phd using chatgpt to create a pandemic that kills untold numbers of beings’ is pure fantasy, and it gets worse as it turns out that the number of actual bioterrorism deaths in total ever isn’t even on the level of a 9/11. People seem to forget that terrorist groups have goals, and they just use terror/violence as a method to reach those goals, sure a few of them may die [chatgpt insert a gif of Bin Laden dressed as Lord Farquaad] but the goal of the terrorist organization is to keep existing to reach their political goals.

      • skillissuer@discuss.tchncs.de
        11 months ago

        come the fuck on, there’s zero chance some crackhead cultist or other jihadist breaks out CRISPR kit in their dusty garage trying to make microbiological deliverance happen

        if you wanna be afraid do what you want, i’m not gonna forbid you, i’m not your dad. but the intro section reads like some semi-palatable drivel that you include in order to justify your grant expenditures

        • Umbrias@beehaw.org
          11 months ago

          breaks out CRISPR kit in their dusty garage

          I mean, it’s genuinely not hard. This reads to me more like assuming all terrorists are fundamentally incapable of anything remotely intelligent, which is both silly and not the official position of CBRN experts. From smaller cultists to state actors, bio warfare is a genuine concern.

          if you wanna be afraid

          I’m not.

          justify your grant expenditures

          What grants do you think I’m getting?

          Your comment sounds to me like lashing out about something because you want to assume every last thing you’re sneering at is wrong, when really the thing you’re sneering at is wrong in methodology and conclusions but not in the origin of a problem wholesale.

          • rook@awful.systems
            11 months ago

            This reads to me more like assuming all terrorists are fundamentally incapable of anything remotely intelligent

            The first paper you linked there lists 9 deaths and 806 injuries across 50 years. Conversely, you can look at a single example like the Manchester Arena bombing in 2017 and see more deaths and more injuries from a single event using simple techniques where materials and instructions are readily available. It isn’t unreasonable to look at the lack of success of amateur biological and chemical attacks and assume that plausible future attackers will be intelligent enough to simply take the tried and tested approach.

            On the other hand, there might be some mileage in hyping up the threat of diy countertop plagues in the hopes that would-be terrorists are as credulous as so many politicians and media figures are, and will take the pointlessly inconvenient and inefficient option which will likely fail and make life a little safer for the rest of us.

            • Umbrias@beehaw.org
              11 months ago

              tried and tested

              Nobody is saying terrorists won’t keep using conventional bombs. Terror attacks aren’t just about maximum kills nor casualties per dollar, however, and as the barrier to entry lowers and lowers it’s important to consider ramifications from many technologies.

              hype them up to fail

              This does not seem a reasonable countermeasure when the risk of failure is potential pandemics.

          • skillissuer@discuss.tchncs.de
            11 months ago

            I’m not.

            Well, do what you want

            What grants do you think I’m getting?

            I meant the authors of that paper, sorry if i was unclear about it

            I mean, it’s genuinely not hard.

            like i said before,

            while someone who got all the way past about the first semester of organic chemistry lab is perfectly capable of making some rudimentary chemical weapons, they won’t necessarily be able to make it safely, reliably, cheaply, consistently, and without killing themselves,

            but with biological weapons the stakes are much higher: every single leak carries a risk of ending up dead or being discovered, and safety requirements are gonna be generally much more stringent than with chemical weapons. you can get away with using small amounts of something that would plausibly pass for a ww1 era chemical weapon with only nitrile gloves and a good fumehood; with biological agents you’re probably looking at doing about everything in a glovebox. to use a glovebox, you need to get a glovebox, which, among other purchases, can move such a person from a government watch list to a government act list

            and even ignoring that, you can’t just expect any random jihadi joe to make it work, you need someone who has some actual education and preferably expertise in microbiology, which if nothing else severely limits the pool of potential perpetrators

            • Umbrias@beehaw.org
              11 months ago

              The equipment and ppe for bio weapons and chemical weapons of the same health hazard is about the same. The only difference with biological weapons is you’re doing stuff with fridges, incubators, agar, and petri dishes, rather than beakers, Bunsen burners, and filters.

              In either case your logic is relying on a threatening actor to not have any education. Sure, the pool of candidates is lower for something sophisticated, say anthrax, something you can almost trivially find in dirt, but it’s also lower for sophisticated chemical weapons like, say, sarin. And keep in mind, yes it’s hard to do biology or chemistry, but devoted individuals do it in garages, for often innocuous reasons. You can’t just assume some terrorist group will never have a strongly devoted individual or group who are competent enough to pull something off, you need to have preparedness. (In the form of local procedures, drills, and organization and plans and equipment to respond to threats as they develop, along with preventative measures)

              Also make no mistake, spotting lab scale chem and biological warfare production is extremely difficult. Even moreso for biological production, but both resemble conventional labs (and could be!). Where biological becomes an issue is that lab scale production of a pathogen can self propagate in a way chem attacks or bomb attacks can’t.

              I’m not saying to be afraid, the barrier to entry on all weapons production is the lowest it’s ever been, but sophistication in preventing them is also quite high. But it’s not something that can just be brushed away, it’s a real problem that real professionals are continuously solving.

                • Umbrias@beehaw.org
                  11 months ago

                  I have not. Is it good?

                  Keep in mind that it was written in 2014; the field of bioengineering has advanced considerably in the past ten years.

              • skillissuer@discuss.tchncs.de
                11 months ago

                The equipment and ppe for bio weapons and chemical weapons of the same health hazard is about the same.

                well i think it strongly depends on your threat model. consider a small leak of sarin: effects are detectable in seconds to minutes, the antidote is readily available, cheap and specific, and sarin poisoning is not transmissible. in the case of, say, plague, you won’t know what’s going on for days, in which time there’s already a risk of infecting some random passerby, which is highly suboptimal if you want to stay covert

                In either case your logic is relying on a threatening actor to not have any education

                it’s not that i assume no education, it’s that i (as an organic chemist) wouldn’t trust a crystallographer or an electrochemist with the synthesis of something like sarin. even with all required PPE and other precautions, your recruiting pool drops from about 100% (IED-carrying child soldier) to maybe 1-0.1%? and that’s even before you consider that some of these highly specialized chemical weapons people are already on a military payroll, or are surveilled precisely for this reason

                you’re underestimating the cost of this entire enterprise, which even at lab scale could easily go into the hundreds of thousands to million dollar range. you’re underestimating how hard it is even when you have everything provided - look at the iraqi chemical weapons program. with no need to stay particularly covert they were only able to manufacture mustard gas of useful quality that could be stored; their mid-tier chemical weapon sarin was at something like 30% purity and had a very short shelf life; their vx was so dirty it was straight up useless

                for some weird reason you’re assuming that whatever chemistry you want to do, it works on the first try. it won’t; it never does, and even if it did, you have to make sure you’ve got the right stuff. this makes synthesis only half of the problem, because there’s still purification and analysis

                you seem to ignore that even in the paper that you cite, anyone that doesn’t have to do chemistry, doesn’t. (by that i mean performing some reaction that generates side products, and so requires purification, analysis, and generates a waste stream). doing chemistry means generating waste and needing its safe-ish disposal; it means getting considerable PPE; it means getting precursors, maybe in large amounts; all of that might move you from a government watch list to a government act list.

                a talib doesn’t do chemistry when he makes an IED, because melting down the contents of TM-62s or UXO found in a nearby field isn’t chemistry; the unabomber stuffing match heads in a pipe isn’t chemistry; stealing cylinders of chlorine (the bulk of fatalities in that paper) and putting them in a car bomb isn’t chemistry. chlorine is not something you make and put in cylinders, because it’s relatively hard, uses large amounts of energy, leaves a considerable waste/side product stream, and you can order it on aliexpress. same goes for sulfur mustard; i’m pretty sure most of the incidents happened in the syrian civil war and ultimately this stuff can be traced to the syrian or iraqi chemical weapons programs

                most of these problems, but especially making sure you’ve got the right stuff, are much harder for living organisms than for clearly identifiable, publicly known compounds. and we’re still nowhere close to the point where an llm gets potentially useful. no, getting a B in high school biology and relying on gpt4 and scihub to get all the way up there doesn’t count. chatgpt writing out an rna sequence to be printed out, engineered into a bacterium and spread by a cultist, all done by mail order and/or in a garage, is a scenario completely detached from any pretense of being realistic

                beakers, Bunsen burners, and filters

                this tells me that you’ve ended all contact with chemistry at (classical, aqueous) qualitative inorganic analysis, because if you tried to cook anything on a bunsen burner in an organic lab, that’d be pretty hard considering there are none in the flammables area. have you considered that you’re severely out of your depth and got caught up in openai’s fear-based hype-marketing?

                e: if you want to isolate anthrax from dirt, you’ll have many more problems than that, especially with the “getting the right stuff” part. there are places where anthrax is endemic, but if step 1 involves catching a diseased marmot in southern mongolia or a deer in eastern siberia, this devolves straight into rube goldberg machine of mass destruction territory

                • skillissuer@discuss.tchncs.de
                  11 months ago

                  in the context of the Iraqi insurgency, even things like EFP plates were industrially manufactured in Iran and shipped there by their special forces, even though it’s just a chunk of copper plate pressed into the shape of a shallow cone. same in Afghanistan, where a friendly CIA/ISI agent, or a friendly black market weapons trader depending on the period, would provide them with explosives, fuzes, communication hardware, training and some modern weapons up to and including FIM-92 Stingers

                • Umbrias@beehaw.org
                  11 months ago

                  Re: ppe

                  Sure, the outcomes are different, but the scale is too. The scale of a chemical weapons program is necessarily higher from a hazard point of view due to the sheer volume of material. The specifics make that messy though, yes, any particular pathogen would want differing levels of ppe.

                  Re: precursors

                  Right, part of the point I’ve made about bio weapons is that spotting the precursors is very difficult, because a normal bio lab needs roughly the same stuff a weapons bio lab does.

                  I don’t disagree that many of the chemical weapons used in Syria may be from larger chemical weapons programs. But that doesn’t mean lab scale ones don’t also exist.

                  Re: ended contact with chemistry

                  Not everybody is trying to posture. The point wasn’t to show off a magnificent knowledge of lab equipment, but to demonstrate the similarity at a high level.

                  Re: llms for biology

                  Ehhhh there are plenty of research applications of llms at the moment and at least one is in use for generating candidate synthetic compounds to test. It’s not exactly the most painful thing to set up either, but no, if you were to try to make a bio weapon today with llm tools (and other machine learning tools) alone it would go poorly. Ten, twenty years from now I’m not so sure; the prediction horizon is tiny.

                  Re: caught in openai fear

                  Why would I consider that when my opinions on bio weapons and CBRN are wholly unrelated to openai’s garbage? I didn’t even know openai cared about CBRN before today and I fully expect it’s just cash grabbing.

                  People can abuse and misinterpret real concepts in a way that makes them seem absurd.

                  Yes in practice anthrax is nontrivial. But folks here also seem to think any of this is magically impossible, and not something that dedicated people can reasonably do with fewer resources by the day. Which by the way is great, the surge of amateur micro bio is great, we’re learning a lot, we’re getting very smart hobbyists contributing in unexpected ways.

      • YouKnowWhoTheFuckIAM@awful.systems
        11 months ago

        I think what’s going amiss here is that “CBRN groups” is very obviously and primarily shit made up by the military-industrial complex to justify itself after the Cold War

        I don’t want to be dismissive of genuine attempts at being ready just in case, but the scale and scope of this is defined by politics, not by technical possibility

        • skillissuer@discuss.tchncs.de
          11 months ago

          i mean, i don’t blame them, ritual dogfighting for congressional attention and money has become an art form. but the primary realistic concern of bioweapons preparedness would be, from what i understand, use of biological weapons by a state actor, and i don’t really see a scenario where this happens before nukes start flying

          • YouKnowWhoTheFuckIAM@awful.systems
            11 months ago

            100% agreed, what terrifies me is that our friend here seems to see the word “science” in here and immediately assume impeccable faith and perfect knowledge

        • Umbrias@beehaw.org
          11 months ago

          I think the claim that cbrn is made up to self justify needs a lot more justification than you’re giving it. It’s just a profoundly confusing claim. They didn’t issue mopp4 in Syria for nothing…

          And whether or not you think nuclear weapon proliferation is a problem, it’s hard to claim CBRN anti-proliferation efforts are just a made-up excuse to exist; it’s a very real reason to exist as a program concept. Maybe you wouldn’t have one if you could decide to, but that’s a far cry from whatever you seem to be claiming.

          • YouKnowWhoTheFuckIAM@awful.systems
            11 months ago

            I expressly put “CBRN groups” in scare quotes to tag along with my line at the bottom “I don’t want to be dismissive of genuine attempts…but the scale and scope of this is defined by politics, not by technical possibility”

            You, however, have me saying “cbrn is made up to self justify” - of course if I had said any such thing, then one counter-example would have sufficed. Although actually it wouldn’t have sufficed, because in this context we’re talking about terroristic or otherwise chaotic release of a novel weapon. We’re not talking at all about bad powerful people deliberately employing chemical weapons they already have, for which of course CBRN is a worthy use and “genuine attempt at being ready”.

            “CBRN groups”, here, operates at the level of rhetoric, and that’s what I tried to draw attention to. The context in which “CBRN groups” the rhetorical and political device emerged was that in which Bill Clinton could become so enthused by a sci-fi novel about bioterrorism that he had its author up in front of the senate testifying as an expert on the subject. So on reflection, I should have deferred to Eisenhower’s original formulation: the military-industrial-congressional complex.

            Edit: you could always try Alex Wellerstein for the aggressively obvious historical counter-point to this whole fantasy. In his Restricted Data he provides a useful companion to Barriers to Bioweapons in a chapter discussing the notorious “backyard atomic bomb built from declassified material” cases. But because it’s a work of history we learn the most salient fact of all: the only way anyone believed that the backyard bomb designs were viable was because somebody wanted them to believe it, or because they had some reason to want to believe it themselves.

            Without that ingredient it was plain that the actual know-how was just not there, however that fact was fundamentally obscured by the desire to believe, and so people saw viability where there was none: plugging holes in their imaginary with meaningless verbiage about risk and but-what-if?

            • Umbrias@beehaw.org
              11 months ago

              Incredible gymnastics to bend over backwards to interpret my response as one which doesn’t address what you say, when I specifically ask you to expand on what you mean and justify it. Ambiguous language doesn’t make you clever. Truly a discourse for the times.

              So your problem is with politicians using fearmongering. Sure. That’s always frustrating; using fearmongering to drum up support has been a political pastime since politics.

              I was not however, referring to fear mongering politics, but the practical and technical application of CBRN as a program and the actual, real, issues with bioterrorism and state bio weapons programs. Glad you got that soapbox out of your system though.

              Bio and chemical terrorism are hardly akin to nuclear weapons. Refining uranium at any rate that could produce a bomb in someone’s lifetime takes industry that must be hidden at a state level.

              This is simply not true of chemical and bio warfare.

              • self@awful.systemsM
                11 months ago

                maybe SneerClub is the wrong outlet for whichever ax you’re currently grinding

                • Umbrias@beehaw.org
                  11 months ago

                  I had fine discussions with others here and in the past. This particular poster wants to soapbox and dismiss rather than engage.

              • YouKnowWhoTheFuckIAM@awful.systems
                11 months ago

                If I may refer you back to the book cited, the (made up) fears of that time in fact incorporated the difficulty of obtaining fissile material during that period, when amongst the worries was that obtaining fissile material would not actually be that difficult. To simply state that biological and chemical warfare bear no resemblance is to depart from the lesson being related here to making excuses for that object of which you happen to be afraid. In each case the fear being constructed will make its own allowances for the real or supposed facts on the ground, and in this case there was no need to assume that a bombmaker would have to make his own plutonium - you’re drawing attention to an irrelevant distraction.

                Another point which you’re glibly avoiding, with tellingly unnecessary recourse to insulting language, is that “CBRN” the construct cannot be so easily distinguished from the “practical and technical application” that the real enterprise has. Indeed the existence of the real enterprise is often driven in part by the made-up fears (which does not licence the fears) - this happened, for example, with security protocols around the management of fissile material. I refer you back to the same book and to the rather famous data point about Bill Clinton’s interest in manufactured diseases.

                For more on stuff like this, although again not on the subject of bioterrorism because I don’t have that material in front of me, I recommend the confluence of two chapters in The Merger of Knowledge with Power by Ravetz (as well as the whole book), namely “Recombinant DNA Research: Whose Risks?” and “Hardware and Fantasy in Military Technology”. This isn’t paranoid soapboxing from a teenage Chomsky fan, it’s just part of the fabric of industrial science and technology as a social phenomenon.

                • Umbrias@beehaw.org
                  11 months ago

                  happen to be afraid

                  Again, I’m not.

                  fissile material easy to get

                  Right so fundamentally getting the ingredients for chem and bio warfare is objectively easier than getting fissile material. To dismiss them as the same implies you don’t realize you almost certainly have the ingredients to make a substantial amount of chlorine gas sitting in your home right now.

                  Yes bio is a bit harder than that, but not as much as you might think. Anthrax is a common soil bacterium. Ricin from castor beans. Isolating specific bacteria takes time and is sloppy, sure, but doable in a garage. Not easy, not something we should simply brush off, either.

                  Ultimately, you’re not going to be convinced. You want to paint something as the same ol false fear instead of a developing threat from genuine technological improvements that you are potentially not aware of. Oh well.

                  You can ramble about the politics of politicians and CBRN all day if you want, it won’t be responding to the focused discussion I was having about the practicality of bio warfare though.

  • swlabr@awful.systems
    11 months ago

    Raytheon: we’re developing a blueprint for evaluating the risk that a large laser-guided missile could aid in someone threatening biology with death

    (Ok I know you need to pretend I’m an AI doomer for this sneer but whatever)