Model Selection Guide

Straico's Multi-Model Exploration and Selection

Model Comparison

We’ve curated the best closed and open-source models, and here we’ll guide you through the pros and cons of each. Discover how to select the perfect model to unlock the full potential of your AI experience.

| Model Name | Word Limit per Chat | Coins per 100 Words | Moderation | Parameters | Training Data |
|---|---|---|---|---|---|
| OpenAI: GPT-3.5 Turbo 16K | ~12,000 words | 0 coins | Filtered | 13 Billion | Up to Sep 2021 |
| OpenAI: GPT-4 8K | ~6,000 words | 20 coins | Filtered | ~1 Trillion | Up to Sep 2021 |
| OpenAI: GPT-4 Turbo 128K | ~96,000 words | 8 coins | Filtered | ~1 Trillion | Up to Apr 2023 |
| Anthropic: Claude Instant v1 | ~75,000 words | 2 coins | Filtered | 93 Billion | Early 2023 |
| Anthropic: Claude v2.1 | ~150,000 words | 12 coins | Filtered | 137 Billion | Mid 2023 |
| Gryphe: MythoMax L2 | ~6,000 words | 1 coin | Unfiltered | 13 Billion | Early 2023 |
| Google: PaLM 2 Bison | ~24,500 words | 1 coin | Unfiltered | 340 Billion | Mid 2021 |
| Meta: Llama v2 70B Chat (beta) | ~3,000 words | 1 coin | Unfiltered | 70 Billion | Up to Sep 2022 |
| Meta: CodeLlama 34B Instruct (beta) | ~3,000 words | 1 coin | Unfiltered | 34 Billion | Up to Sep 2022 |
| Mistral 7B Instruct v0.1 (beta) | ~3,000 words | 1 coin | Unfiltered | 7 Billion | Mid 2021 |
| Perplexity 70B Online | ~3,000 words | 1 coin | Unfiltered | 70 Billion | Real-time internet access |
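If you want to reason about these trade-offs programmatically, here is a minimal, illustrative Python sketch that encodes a few rows of the table above as data and filters them by your constraints. The values are copied from the table; the `pick_models` helper is hypothetical and not part of any Straico API.

```python
# Illustrative only: a few rows of the comparison table encoded as data.
MODELS = [
    {"name": "OpenAI: GPT-3.5 Turbo 16K", "word_limit": 12000,  "coins_per_100": 0,  "moderation": "Filtered"},
    {"name": "OpenAI: GPT-4 Turbo 128K",  "word_limit": 96000,  "coins_per_100": 8,  "moderation": "Filtered"},
    {"name": "Anthropic: Claude v2.1",    "word_limit": 150000, "coins_per_100": 12, "moderation": "Filtered"},
    {"name": "Google: PaLM 2 Bison",      "word_limit": 24500,  "coins_per_100": 1,  "moderation": "Unfiltered"},
]

def pick_models(min_words: int, max_coins_per_100: float) -> list[str]:
    """Return the models that meet a minimum word limit within a coin budget."""
    return [m["name"] for m in MODELS
            if m["word_limit"] >= min_words and m["coins_per_100"] <= max_coins_per_100]

# Models that can handle ~20,000 words at 10 coins or less per 100 words:
print(pick_models(min_words=20000, max_coins_per_100=10))
# -> ['OpenAI: GPT-4 Turbo 128K', 'Google: PaLM 2 Bison']
```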

Word Limit per Chat:

The maximum number of words the model can process in a single interaction, covering the input and the generated output combined. A higher word limit lets the model take in more input and produce longer answers. For example, GPT-4 8K handles about 6,000 words in total, so a 5,000-word conversation leaves roughly 1,000 words for your new message and the upcoming answer.

Coins per 100 Words:

The coin cost for every 100 words processed by the selected model.
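As a rough illustration of how this cost scales, the sketch below multiplies the total word count by the per-100-words rate. The `estimate_coin_cost` function is hypothetical, and the assumption of simple linear scaling with no rounding may not match Straico's exact billing.

```python
def estimate_coin_cost(total_words: int, coins_per_100_words: float) -> float:
    """Estimate an interaction's coin cost, assuming linear scaling.

    Straico's exact rounding/billing granularity is not documented here,
    so treat this as an approximation.
    """
    return (total_words / 100) * coins_per_100_words

# A ~1,500-word interaction on GPT-4 Turbo 128K (8 coins per 100 words):
print(estimate_coin_cost(1500, 8))  # -> 120.0 coins
```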

How the total words per interaction are calculated if no attachment is used:

Total words = Full current conversation + New message + Upcoming answer

How the total words per interaction are calculated if an attachment is used:

Total words = Attachment + Full current conversation + New message + Upcoming answer
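To make these two formulas concrete, here is a minimal Python sketch that estimates the total words for an interaction and checks the result against a model's word limit. The function name is hypothetical, whitespace splitting is only an approximate word count, and the length of the upcoming answer has to be guessed in advance.

```python
def estimate_total_words(conversation: str, new_message: str,
                         expected_answer_words: int,
                         attachment: str = "") -> int:
    """Estimate total words per interaction:
    attachment + full current conversation + new message + upcoming answer.
    """
    def count(text: str) -> int:
        return len(text.split())  # approximate word count

    return (count(attachment) + count(conversation)
            + count(new_message) + expected_answer_words)

# Example: does this interaction fit GPT-4 8K's ~6,000-word limit?
conversation = "word " * 5200  # stand-in for the chat history so far
new_message = "word " * 300    # stand-in for the message being sent
total = estimate_total_words(conversation, new_message, expected_answer_words=600)
print(total)  # 6100 -> over the ~6,000-word limit
if total > 6000:
    print("Start a new chat or pick a model with a higher word limit.")
```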

Starting a new chat instead of continuing a long conversation can significantly reduce the total words needed, since the past context is reset. Another good option is GPT-3.5 Turbo, which remains free on our platform regardless of word count.

Moderation:

Filtered models have built-in content guidelines that let them make moderation judgments about requests. Unfiltered models lack such measures, but this does not necessarily mean they will respond inappropriately.

Parameters:

Parameters are internal variables of the language model whose values are learned from training data. They are key to how the model generates predictions and produces quality text output.

More parameters allow the model to handle more complex tasks, potentially improving overall output.

Training Data:

The cutoff date of the data used to train the model. The model only knows about events and information from before this date.