Less than a month after New York Attorney General Letitia James said she would be willing to seize former Republican President Donald Trump’s assets if he is unable to pay the $464 million required by last month’s judgment in his civil fraud case, Trump’s lawyers disclosed in court filings Monday that he had failed to secure a bond for the amount.

In the nearly 5,000-page filing, lawyers for Trump said it has proven a “practical impossibility” for him to secure a bond from any financial institution in the state, as “about 30 surety companies” have refused to accept assets such as real estate as collateral, demanding cash and other liquid assets instead.

To get the institutions to agree to cover that $464 million judgment if Trump loses his appeal and fails to pay the state, he would have to pledge more than $550 million as collateral, “a sum he simply does not have,” The New York Times reported, despite his frequent boasting of his wealth and business prowess.

  • Ashyr@sh.itjust.works · 9 months ago

    Plus, that’s not a good task for an LLM, because its context window would almost certainly be too short.

    It would “hallucinate,” because it could only “remember” a fraction of the content, and then everyone would be all pissy because they used the program wrong.

    • TropicalDingdong@lemmy.world · 9 months ago

      I mean, you can pretty simply engineer around that. Dumping 5k pages at once is obviously an idiotic way of approaching the issue. Instead, have an LLM go through 500 words at a time, with 125 words of overlap between chunks, to pull out key words, phrases, and intentions, and put those into a structured data form like JSON. Then parse the JSON outputs to pick up on regions where specific sets of phrases and words occur. Give those sections, in part or in whole, to the LLM again and have it return structured output again. Further parse and repeat. Run all of these steps several times to get a probability distribution over each assumption about what is being said or intended. Build the results into a Bayes net, or however you like, to get at the most likely summaries of what the document is saying. These results can then be manually reviewed. If you are touchy, you can even adjust the sensitivity to pick up on much more nuanced reads of the text.

      Like, if the limit of your imagination is throwing spaghetti against a wall, obviously your results are going to turn out like shit. But with a bit of hand-holding, some structure, and some engineering, LLMs can be made to substantially outperform their (average) human counterparts. They already do. Use them in a more probabilistic way, to create distributions around the assumptions they make, and you can set up a system that will vastly outperform what an individual human can do. A rough sketch of the chunk-and-extract step is below.
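
      A minimal Python sketch of that loop, using the 500/125 numbers from this comment; the call_llm function and the prompt wording are stand-ins, since nothing here depends on a particular client:

      ```python
      import json
      from collections import Counter

      CHUNK_WORDS = 500
      OVERLAP_WORDS = 125
      STEP = CHUNK_WORDS - OVERLAP_WORDS  # advance 375 new words per chunk

      def call_llm(prompt: str) -> str:
          """Placeholder: wire up whatever LLM client you actually use."""
          raise NotImplementedError

      def chunk_text(text: str):
          """Yield overlapping ~500-word windows over the document."""
          words = text.split()
          for start in range(0, max(len(words) - OVERLAP_WORDS, 1), STEP):
              yield " ".join(words[start:start + CHUNK_WORDS])

      def extract(chunk: str) -> dict:
          """Ask the model for structured output and parse its JSON reply."""
          reply = call_llm(
              "Return JSON with keys 'keywords', 'phrases', 'intent' "
              "for this passage:\n" + chunk
          )
          return json.loads(reply)

      def keyword_counts(text: str) -> Counter:
          """Aggregate per-chunk JSON so repeated keywords build up
          a rough frequency distribution for later review."""
          counts = Counter()
          for chunk in chunk_text(text):
              counts.update(extract(chunk).get("keywords", []))
          return counts
      ```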

      • brbposting@sh.itjust.works · 9 months ago

        (just asked up the thread:)

        GPT-4 & Claude 3 Opus have made little summarization oopsies for me this past week. You’d trust ’em in such a high-profile case?

      • MagicShel@programming.dev · 9 months ago

        LLMs are still pretty limited, but I would agree with you that if there’s a single task at which they excel, it’s translating and summarizing. They also have much bigger contexts than 500 words: I think ChatGPT has a 32k-token context, which is certainly enough to summarize entire chapters at a time.

        You’d definitely need to review the result by hand, but AI could suggest certain key things to look for.
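
        For scale, here is a quick way to check whether a chapter actually fits in a 32k-token window, using OpenAI’s tiktoken tokenizer; the headroom figure is just an assumption:

        ```python
        import tiktoken

        CONTEXT_LIMIT = 32_000  # assumed 32k-token window
        HEADROOM = 1_000        # space reserved for the summary itself

        def fits_in_context(chapter: str) -> bool:
            """True if the chapter fits in the assumed context window."""
            enc = tiktoken.get_encoding("cl100k_base")
            return len(enc.encode(chapter)) <= CONTEXT_LIMIT - HEADROOM
        ```

        Chapters that don’t fit could fall back to the overlapping-chunk approach described above.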

        • TropicalDingdong@lemmy.world · 9 months ago

          LLMs are still pretty limited,

          People were doing this somewhat effectively with garbage Markov chains, and it was “ok.” There is research going on right now doing precisely what I described; I know because I wrote a demo for the researcher whose team wanted to do this, and we’re not even using fine-tuned LLMs. You can overcome many of the issues around “hallucinations” just by repeating the same thing several times to get a probability. There are teams funded in the hundreds of millions to build the engineering around these things. Wrap calls in enough engineering, get the bumper rails into place, and the current generation of LLMs is completely capable of what I described.
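
          A minimal sketch of that repeat-and-tally idea; ask() is a stand-in for any LLM call, and the sample count of ten is arbitrary:

          ```python
          from collections import Counter

          def ask(prompt: str) -> str:
              """Placeholder: wire up your actual LLM client here."""
              raise NotImplementedError

          def answer_distribution(prompt: str, n: int = 10) -> dict:
              """Run the same prompt n times; report each answer's share."""
              counts = Counter(ask(prompt) for _ in range(n))
              return {answer: count / n for answer, count in counts.items()}
          ```

          An answer that shows up in nine runs out of ten is probably grounded in the text; one that appears once is a likely hallucination to discard.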

          This current AI revolution is just getting started. We’re in the “Deep Blue” phase, where people are shocked that an AI can do the thing as well as or better than humans. We’ll be at the AlphaGo stage in a few years, and we simply won’t recognize the world we live in. In a decade, the AI will be the authority, and people will be questioning whether humans should be allowed to do certain things.

          • MagicShel@programming.dev · 9 months ago

            Read a little further. I might disagree with you about the overall capability/potential of AI, but I agree this is a great task to highlight its strengths.

            • TropicalDingdong@lemmy.world · 9 months ago

              Sure, and yes, I think we largely agree. As for the differences, I’ve seen that they can effectively be overcome by making the same call repeatedly and looking at the distribution of results. It’s probably not as good as just having a better underlying model, but even then the same approach might be necessary.