I believe the main distinction between open source and closed source with LLMs is that, if the model is open source, others can finetune it (a kind of further training on top of the training it has already received). Depending on how deep the finetune goes, it can drastically change the biases of the model and, in doing so, proliferate alternative versions that are far off from any intended biases. So it would make sense that they wouldn’t want to open source it if the goal is to promote a certain set of model biases.
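For anyone unfamiliar, a finetune really is just continued gradient training on the released weights. A minimal sketch of the idea, assuming the Hugging Face transformers library and the small open gpt2 checkpoint (the training texts here are hypothetical placeholders, not anyone's actual data):

```python
# Sketch: continued training ("finetuning") on open weights.
# Whoever runs this chooses the data, and that choice alone is
# what shifts the model's biases. Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical finetuning corpus: swap in slanted text and the
# resulting model drifts toward that viewpoint.
texts = [
    "Example document reflecting whatever viewpoint the finetuner wants.",
    "Another example document; more data and steps mean a deeper shift.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

# Mask out padding positions so loss is only computed on real tokens.
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a real finetune would run many more steps
    loss = model(**batch, labels=labels).loss
    loss.backward()      # gradients flow into the *pretrained* weights
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned")  # a new variant anyone can redistribute
```

With closed weights none of this is possible from the outside, which is the whole distinction the comment above is pointing at.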
Seems like lots of potential for bad actors to fuck with the inputs in an open-sourced model.
You don’t have to push every change to prod tho?