Choosing between AI Models
We support a number of models from multiple AI service providers, namely OpenAI, Anthropic, and Mistral, each with its own series of models. Each model has its own specifications and use cases, and in this guide we will explain how to choose the best model for your needs.
OpenAI
- GPT-3.5-turbo (Consumes 1 message): This is the most basic and most cost-effective model. It is suitable for general applications that do not require extensive context memory or complex function calling. A good base prompt paired with GPT-3.5 Turbo can produce decent results, such as the Wonderchat bot on our website. It has lower resource requirements than GPT-4, making it ideal for projects with limited resources.
- GPT-3.5 (16k) (Consumes 5 messages): This model is the same as the GPT-3.5-turbo model above but with an expanded 16k context window, four times larger than the regular gpt-3.5-turbo. The larger context window lets it understand and generate natural language or code across much more text at once, making it more effective for applications that process substantial amounts of text data.
- Best suited for: users with long custom prompts, or PDF pages whose content follows a repetitive format
- GPT-4 Turbo (Consumes 10 messages): GPT-4 Turbo offers faster responses and a more capable model than GPT-3.5 Turbo. It is generally good at following instructions with minimal hallucination, and its larger context window helps it produce better-quality responses. It comes at a higher price point than GPT-3.5 Turbo, though it is still cheaper than GPT-4. Note, however, that its responses may not be as reliable or as accurate as those of the smarter GPT-4 model.
- GPT-4 (Consumes 20 messages): This model has advanced function-calling capabilities and a larger context window than GPT-3.5-turbo, and is the way to go if you need these capabilities. However, it comes at a higher price point than GPT-3.5-turbo, and its larger context window and enhanced capabilities come with increased computational resource needs.
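As a rough sketch, the message-consumption rates listed above can be encoded in a small lookup to estimate how quickly a model will use up your quota. The model names here are shorthand labels for this example, not official API identifiers:

```python
# Messages consumed per bot reply, per the rates listed above.
# Keys are illustrative shorthand, not official API model strings.
MESSAGE_COST = {
    "gpt-3.5-turbo": 1,
    "gpt-3.5-turbo-16k": 5,
    "gpt-4-turbo": 10,
    "gpt-4": 20,
}

def quota_used(model: str, replies: int) -> int:
    """Total messages deducted from your quota for a number of bot replies."""
    return MESSAGE_COST[model] * replies

# Example: 100 replies on GPT-4 cost 20x what they would on GPT-3.5-turbo.
print(quota_used("gpt-4", 100))          # 2000 messages
print(quota_used("gpt-3.5-turbo", 100))  # 100 messages
```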
In summary:
- If you are working on a budget or with limited resources, GPT-3.5-turbo is a suitable choice.
- If your application doesn’t require extensive context memory or complex function calling, GPT-3.5-turbo will serve you well.
- If you need a large context window, for example because your chatbot has a long custom instruction, GPT-3.5 Turbo (16k) is a good choice.
- If you need your chatbot to consistently cite accurate links, or to do advanced maths or calculations, GPT-4 is the way to go.
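The decision rules above can be summarised as a small helper. This is only an illustrative sketch of the logic, and the flag names and model labels are made up for the example:

```python
def pick_model(long_prompt: bool = False, needs_accuracy: bool = False) -> str:
    """Suggest a model following the summary rules above.

    long_prompt: long custom instruction / large context needed.
    needs_accuracy: accurate link citations or advanced maths required.
    Returns an illustrative shorthand label, not an official API string.
    """
    if needs_accuracy:
        return "gpt-4"            # most reliable for citations and maths
    if long_prompt:
        return "gpt-3.5-turbo-16k"  # expanded context window
    return "gpt-3.5-turbo"        # budget-friendly default
```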
Selecting Between Claude AI Models
Anthropic has released several models in its large language model series, each with its own specifications and use cases. In this section, we will discuss the optimal use cases for these models.
The differences between Anthropic's AI models lie in their capabilities and pricing tiers. Anthropic's new family of AI models, Claude 3, consists of three models: Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku.