Both OpenAI and Microsoft are probing whether DeepSeek used OpenAI’s application programming interface (API) without permission to train its own models on the output of OpenAI’s systems, an approach referred to as distillation.
That would definitely show up in the quality of responses. Surely they have better and cheaper training sources…
I think it’s reasonably likely. There was a research paper about how to do basically that a couple of years ago. If you need a basic LLM trained on a specialized form of input and output, getting an expensive existing LLM to generate that text for you is pretty efficient and inexpensive, so it’s a reasonable way to get a baseline model. Then you can add techniques like chain-of-thought reasoning and mixture of experts to bring the performance back up to where you need it. It’s not going to push the state of the art forward, but it’s sure a cheap way to catch up to models that have done that pushing.
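The distillation workflow described above can be sketched in a few lines. This is a toy illustration, not anyone's actual pipeline: the teacher here is a stub standing in for an expensive hosted LLM queried over an API, and the "student" just memorizes the pairs, where a real student would be a smaller model fine-tuned on them. All names are hypothetical.

```python
# Toy sketch of distillation-style data generation: query a "teacher"
# model for responses, then train a cheaper "student" on the resulting
# (prompt, response) pairs. A stub stands in for a real API call.

def teacher_model(prompt: str) -> str:
    """Stand-in for an expensive hosted LLM (hypothetical canned answers)."""
    canned = {
        "Summarize: the sky is blue.": "The sky is blue.",
        "Translate to French: hello": "bonjour",
    }
    return canned.get(prompt, "I don't know.")

def generate_distillation_set(prompts):
    """Collect teacher outputs as supervised (prompt, response) pairs."""
    return [(p, teacher_model(p)) for p in prompts]

class MemorizingStudent:
    """Trivially simple 'student' that memorizes the teacher's answers.
    A real student would be a smaller network fine-tuned on these pairs."""
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        self.table.update(pairs)

    def respond(self, prompt: str) -> str:
        return self.table.get(prompt, "")

prompts = ["Summarize: the sky is blue.", "Translate to French: hello"]
student = MemorizingStudent()
student.train(generate_distillation_set(prompts))
print(student.respond("Translate to French: hello"))  # → bonjour
```

The point of the sketch is the data flow, not the model: the expensive part (the teacher's forward passes) happens once to build the dataset, and everything after that trains against the recorded pairs.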