I feel that if you have to specify that you are not GPT-4, then you probably are GPT-4. The output screenshots in this thread suggest the same: it gave proper replies instead of disinfo, which is of course because that's what the model was trained to do. You can only steer an LLM so far away from its training.