• j4k3@lemmy.world
    • Technical/general/code - Llama2 70B GGUF Q5_K_M instruct

    • Learning - Llama2 70B GGUF Q5_K_M chat

    Chat/Roleplay

    • Pygmalion 2 (the trick to using it in Oobabooga [without code mods] is to add the special tokens to the chat character profile sections. I can’t type the tokens directly here because of the way Lemmy is coded, but you should be able to figure it out; the readme for the model has the special token syntax. Put the user and character (model) tokens in front of the names in the top boxes, then start the context with the system token. Don’t use the user or model tokens in the context itself, use the names instead. This isn’t perfect, e.g. you can’t also use Silero TTS with this method, but it works. There’s a rough sketch of the resulting prompt after this list.)
    • GPT4chan sourcing instructions and links are in the main Oobabooga readme.
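
    For anyone who wants the workaround above spelled out, here is a rough sketch of the kind of prompt it ends up producing. The <|system|>, <|user|> and <|model|> tokens are the ones documented in the Pygmalion 2 readme; the character name, persona text, spacing and the helper function itself are illustrative placeholders, not anything Oobabooga actually exposes.

    ```python
    # Rough sketch of the prompt that results once the special tokens are
    # prepended to the names in the chat profile boxes and the context starts
    # with the system token. Token names are from the Pygmalion 2 readme;
    # the character, persona and spacing here are purely illustrative.

    def build_prompt(persona: str, turns: list[tuple[str, str]], char_name: str) -> str:
        """turns is a list of (speaker, text) pairs; speaker is 'You' or char_name."""
        prompt = "<|system|>" + persona + "\n"          # context begins with the system token
        for speaker, text in turns:
            token = "<|user|>" if speaker == "You" else "<|model|>"
            prompt += f"{token}{speaker}: {text}\n"     # token sits in front of the name
        prompt += f"<|model|>{char_name}:"              # leave the reply open for generation
        return prompt

    print(build_prompt(
        "Enter roleplay mode. Aria is a sarcastic ship AI.",
        [("You", "Status report?"), ("Aria", "Everything is nominal. Mostly.")],
        "Aria",
    ))
    ```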
  • micheal65536@lemmy.micheal65536.duckdns.org

    WizardLM 13B (I didn’t notice any significant improvement with the 30B version). It tends to be a bit confined to a standard output format at the expense of accuracy (e.g. it will always try to give both sides of an argument even if there isn’t another side, or the question isn’t an argument at all), but it’s good for simple questions.

    LLaMa 2 13B (not the chat-tuned version). This one takes some practice with prompting because it doesn’t really understand conversation and won’t know what it’s supposed to do unless you make it clear from contextual clues, but it feels refreshing to use because the model is (as far as is practical) unbiased/uncensored, so you don’t get all the annoying lectures and stuff.
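
    To make “contextual clues” concrete, here is a minimal sketch of completion-style prompting for a base model: instead of asking a bare question, frame the prompt as a document the model can continue. llama-cpp-python is used here as one possible runner; the model path, context size and sampling settings are placeholders, not recommendations.

    ```python
    # Completion-style prompting for a base (non-chat-tuned) model: give it a
    # document to continue rather than an instruction to follow.
    # llama-cpp-python is just one way to run a GGUF model locally; the path
    # and settings below are placeholders.
    from llama_cpp import Llama

    llm = Llama(model_path="models/llama-2-13b.Q5_K_M.gguf", n_ctx=2048)

    # The contextual clues are the framing and the half-written answer:
    # the model simply keeps writing the FAQ entry.
    prompt = (
        "Frequently asked questions about the Unix 'find' command.\n\n"
        "Q: How do I delete all .log files older than 7 days?\n"
        "A: Use -mtime together with -delete, for example:\n"
    )

    out = llm(prompt, max_tokens=200, temperature=0.7, stop=["\nQ:"])
    print(out["choices"][0]["text"])
    ```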

  • librecat@lemmy.basedcount.com

    Anything based on Llama 2, tbh. It’s fast enough and logical enough to handle the kinds of programming-related tasks I want to use an LLM for (writing boilerplate code, generating placeholder data, simple refactoring). With the release of the Vicuna and Code Llama models, things are getting even better.