📚 LLMs (Large Language Models)

Which model or LLM (Large Language Model) does it use?

By default, we use OpenAI's GPT-3.5 Turbo model to answer questions, but we always direct each question to whichever model, GPT-4 or GPT-3.5, will give the best-quality answer in the fastest time.
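
For readers curious what routing between the two models can look like, here is a minimal sketch using OpenAI's Python client. The `needs_complex_reasoning` flag and the routing rule are illustrative assumptions only, not our actual selection logic.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_question(question: str, needs_complex_reasoning: bool) -> str:
    """Route a question to GPT-4 for harder queries, otherwise use GPT-3.5 Turbo.

    The boolean flag is a stand-in for whatever heuristic decides
    which model gives the best answer fastest.
    """
    model = "gpt-4" if needs_complex_reasoning else "gpt-3.5-turbo"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```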

We have also performed extensive testing comparing the two models and found that GPT-3.5 can answer 99% of the questions GPT-4 can, at three times the speed.

On our paid plans, you can switch from GPT-3.5 Turbo to GPT-4 answers; the GPT-4 question limits will then replace your original question limits.