• rufus@discuss.tchncs.de · 10 months ago (edited)

    Try MythoMax. I’ve had good results with it for storytelling and all kinds of other tasks. That should tell you whether the model is the problem. I think some of the merges or “super” variants had issues or were difficult to pull off correctly.

    Also try the option --usemirostat 2 5.0 0.1. That overrides most sampler options and adjusts things automatically. In your case it should mostly help rule out misconfiguration.
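For reference, a full invocation with that flag might look like this (the model filename is a placeholder, and as I understand them the three values are the mirostat version, tau, and eta):

```shell
# Placeholder model file; substitute your own.
python koboldcpp.py mythomax-l2-13b.Q5_K_M.gguf --usemirostat 2 5.0 0.1
```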

    • darkeox@kbin.social (OP) · 10 months ago

      MythoMax looks nice, but I’m using it in story mode and it seems to have problems progressing once it has reached the max token count. It appears stuck:

      Generating (1 / 512 tokens)
      (EOS token triggered!)
      Time Taken - Processing:4.8s (9ms/T), Generation:0.0s (1ms/T), Total:4.8s (0.2T/s)
      Output:
      
      

      And then stops when I try to prompt it to continue the story.

      • rufus@discuss.tchncs.de · 10 months ago (edited)

        That is correct behaviour. At some point the model decides this is the complete text you requested and follows it up with an EOS (end-of-sequence) token. You either need to suppress that token to force it to generate endlessly (your --unbantokens flag re-enables the EOS token and with it this behaviour), or manually add something and hit ‘Generate’ again. For example, just appending a line break after the text often does the trick for me.
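The stop-on-EOS behaviour can be pictured with a small sketch (token id and function are mine, not koboldcpp’s; a real sampler masks a banned token before sampling rather than skipping it afterwards):

```python
EOS_ID = 2  # typical LLaMA end-of-sequence token id (assumption)

def generate(sample_next, max_tokens, ban_eos=False):
    """Collect tokens until EOS appears (unless banned) or the budget runs out."""
    out = []
    for _ in range(max_tokens):
        tok = sample_next()
        if tok == EOS_ID:
            if not ban_eos:
                break      # model considers the text finished -> empty continuation
            continue       # with EOS banned, generation just keeps going
        out.append(tok)
    return out

# A model that emits EOS immediately produces nothing, matching the log above:
stream = iter([EOS_ID, 5, 6, 7])
print(generate(lambda: next(stream), 512))  # prints []
```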

        I can take a screenshot tomorrow.

        Edit: Also, your rope config doesn’t seem correct for a SuperHOT model, and your prompt from the screenshot isn’t what I’d expect for a WizardLM model. I’ll see if I can reproduce your issues and write a few more words tomorrow.

        Edit2: Notes:

        • I think SuperHOT means linear scaling. So for an 8k LLaMA 1 model: --contextsize 8192 --ropeconfig 0.25 10000
        • No --unbantokens if you don’t want it to stop
        • WizardLM prompt format is: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.USER: Who are you? ASSISTANT: I am WizardLM.......
        • SuperCOT prompt format is: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n\n\n### Input:\n\n\n### Response:\n
        • Storywrite is probably meant for plain stories, but I don’t know.
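The WizardLM chat format quoted above can be assembled programmatically; here is a small sketch (the function name and structure are mine, only the template text comes from the comment):

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the "
          "user's questions.")

def wizardlm_prompt(turns):
    """turns: (user, assistant) pairs; leave the final assistant reply empty
    so the model continues the text after 'ASSISTANT:'."""
    prompt = SYSTEM
    for user, assistant in turns:
        prompt += f" USER: {user} ASSISTANT: {assistant}"
    return prompt

print(wizardlm_prompt([("Hi", "Hello."), ("Who are you?", "")]))
```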

        The model you chose is poorly documented, and a bit older. I’m not sure it’s the best choice.

        Edit3: I’ve put this in better words and made another comment including screenshots and my workflow.

        • micheal65536@lemmy.micheal65536.duckdns.org · 10 months ago

          Yeah, I think you need to set --contextsize and --ropeconfig. The documentation isn’t completely clear and in some places implies they should be auto-detected from the model in recent versions, but the first thing I would try is setting them explicitly, as this definitely looks like a positional-encoding issue.
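Putting the explicit settings from the earlier comment together, an invocation for a SuperHOT 8k model might look like this (model filename is a placeholder):

```shell
# Placeholder model file; 0.25 linear scale + base 10000 per the earlier comment.
python koboldcpp.py mymodel-superhot-8k.Q5_K_M.gguf --contextsize 8192 --ropeconfig 0.25 10000
```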

    • darkeox@kbin.social (OP) · 10 months ago

      I’ll try that model. However, your option doesn’t work for me:

      koboldcpp.py: error: argument model_param: not allowed with argument --model
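That argparse error suggests the model file was supplied twice: koboldcpp appears to accept it either as a positional argument or via --model, but not both at once. A sketch of the fix (model filename is a placeholder):

```shell
# Pass the model file exactly once: either positionally ...
python koboldcpp.py mythomax-l2-13b.Q5_K_M.gguf --usemirostat 2 5.0 0.1
# ... or via --model, but not both in the same command.
```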