Straico's Generative AI Models
Your Comprehensive Guide
Welcome to the Straico AI Model Resource Center — your definitive encyclopedia for navigating our diverse array of generative AI models. Whether you’re spearheading startup ventures, orchestrating powerful marketing narratives, or simply indulging your curiosity in AI, this center equips you with the knowledge to choose the ideal AI tools tailored to your aspirations.
Compare AI Models at a Glance
Our AI Comparison Table presents a straightforward view of key attributes for each model, emphasizing cost, capabilities, and more, to swiftly guide your selection.
Model Name | Editor's Choice | Max Words (approx) | Coins 🪙 per 100 Words | Type | Parameters (in billion) | Capabilities |
---|---|---|---|---|---|---|
OpenAI: GPT-4 Turbo 128K | 👑 | 96,000 | 8 | Proprietary | 1,000 | 📃Text |
Anthropic: Claude v2.1 | 👑 | 150,000 | 8 | Proprietary | 137 | 📃Text |
Gryphe: MythoMax L2 | 👑 | 6,000 | 1 | Open Source | 13 | 📃Text |
Mistral: Mixtral 8x7B Instruct (beta) | 👑 | 24,000 | 2 | Open Source | 56 | 📃Text |
Perplexity: Sonar 8x7B Online | 👑 | 3,000 | 1 | Proprietary | 70 | 📃Text 🛜Internet |
Anthropic: Claude 3 Opus | - | 150,000 | 24 | Proprietary | 2,000 | 📃Text |
Anthropic: Claude 3 Sonnet | - | 150,000 | 5 | Proprietary | 70 | 📃Text |
Anthropic: Claude 3 Haiku | - | 150,000 | 1 | Proprietary | 20 | 📃Text |
OpenAI: GPT-3.5 Turbo 16K | - | 12,000 | 0 | Proprietary | 13 | 📃Text |
OpenAI: GPT-4 8K | - | 6,000 | 20 | Proprietary | 1,000 | 📃Text |
OpenAI: GPT-4 Vision | - | 96,000 | 8 | Proprietary | 1,000 | 📃Text 🖼️Vision |
Anthropic: Claude Instant v1 | - | 75,000 | 2 | Proprietary | 93 | 📃Text |
Google: PaLM 2 Bison | - | 24,500 | 1 | Open Source | 340 | 📃Text |
Google: Gemini Pro (preview) | - | 98,280 | 1 | Proprietary | 540 | 📃Text |
Meta: Llama 3 8B Instruct | - | 6,000 | 0.5 | Proprietary | 8 | 📃Text |
Meta: Llama 3 70B Instruct nitro | - | 6,000 | 1 | Proprietary | 70 | 📃Text |
Mistral 7B Instruct v0.1 (beta) | - | 3,000 | 1 | Open Source | 7 | 📃Text |
Dolphin 2.6 Mixtral 8x7B | - | 24,000 | 1 | Open Source | 56 | 📃Text |
Goliath 120B | - | 4,608 | 5 | Open Source | 120 | 📃Text |
* Models marked with a 👑 are our Editor’s Choices, selected for their proven effectiveness in practical use on Straico.
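To illustrate how the table can guide a selection, here is a minimal sketch that picks the cheapest model able to handle a given context length. The rows are transcribed from a few entries in the table above; the helper function itself is purely illustrative and not part of the Straico platform.

```python
# A few rows transcribed from the comparison table above:
# (name, max words per interaction, coins per 100 words)
MODELS = [
    ("OpenAI: GPT-4 Turbo 128K", 96_000, 8),
    ("Anthropic: Claude 3 Haiku", 150_000, 1),
    ("Gryphe: MythoMax L2", 6_000, 1),
    ("Mistral: Mixtral 8x7B Instruct (beta)", 24_000, 2),
]

def cheapest_for(context_words: int):
    """Return the lowest-rate model whose limit fits the context, or None."""
    fits = [m for m in MODELS if m[1] >= context_words]
    return min(fits, key=lambda m: m[2], default=None)

# For a ~20,000-word interaction, Claude 3 Haiku (1 coin / 100 words)
# is the cheapest model with a sufficient limit.
print(cheapest_for(20_000)[0])  # → Anthropic: Claude 3 Haiku
```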
Explore Editor’s Choice LLMs
Browse our Editor’s Choice tabs, a collection born from our team’s extensive efforts to evaluate AI models through empirical use rather than technical specifications. Here, discover the practical merits and constraints each selected model offers. We also provide links to shared chats and prompt templates, allowing you to experience and test their effectiveness firsthand on the Straico platform.
- OpenAI: GPT-4 Turbo 128K
- Anthropic: Claude v2.1
- Gryphe: MythoMax L2 13B 8k
- Mistral: Mixtral 8x7B
- Perplexity: Sonar 8x7B Online
OpenAI: GPT-4 Turbo 128K
OpenAI GPT-4 Turbo 128K is a proprietary large language model (LLM) developed by OpenAI. It is an evolved version of GPT-4 and GPT-3.5 Turbo 16K, two of the most recognized LLMs in the world.
Specialities
Handles complex questions consistently; produces organized, well-structured, and well-formatted answers; strong at content generation and complex reasoning.
Limitations
Knowledge is not up to date; multiple guardrails; may exhibit biases and hallucinations.
Chat examples:
– Testing the model with long contexts
Prompt templates examples:
Anthropic: Claude v2.1
Anthropic Claude v2.1 is a proprietary large language model (LLM) developed by Anthropic, specialized in handling complex multi-step instructions over large amounts of content. Claude v2.1 can process up to roughly 150,000 words in a single interaction.
Specialities
Suitable for very large contexts and files; elaborate analysis drawing on many sources of information; very good at complex reasoning.
Limitations
Knowledge is not up to date; multiple guardrails; may exhibit biases and hallucinations; refuses to engage in conversations it deems “unsafe”.
Chat examples:
Gryphe: MythoMax L2 13B 8K (beta)
MythoMax L2 13B 8K is an open-source large language model (LLM) created by Gryphe that specializes in storytelling and advanced roleplaying. It is built on the Llama 2 architecture and is part of the Mytho family of Llama-based models, which also includes MythoLogic and MythoMix. The MythoMax L2 13B variant is an optimized version of MythoMix, incorporating a more comprehensive tensor-merge strategy that increases coherency and performance.
Specialities
Roleplaying, storytelling, uncensored, low-priced.
Limitations
Not suitable for extremely large contexts; answers are not always sufficiently detailed.
Chat examples:
– Simulating seller-customer interaction
Prompt templates examples:
Mistral: Mixtral 8x7B
Mistral: Mixtral 8x7B is an open-source large language model (LLM) developed by Mistral AI.
According to the creators, Mixtral 8x7B outperforms other well-known LLMs such as Llama 2 70B and GPT-3.5 in several benchmarks, making it one of the most powerful open-source models available.
Mixtral 8x7B has been praised for its cost-effectiveness and creative text formats for storytelling and roleplaying.
Specialities
Storytelling, role playing, suitable for long contexts.
Limitations
Censored.
Chat examples:
– Famous character roleplaying
– Simulating a job interview
Prompt templates examples:
Perplexity: Sonar 8x7B Online
Perplexity: Sonar 8x7B Online is a proprietary large language model (LLM) developed by Perplexity AI that provides real-time access to the internet and up-to-date information.
Specialities
Up-to-date information, detailed answers to short prompts.
Limitations
Not always suitable for very large contexts.
Chat examples:
– Up-to-date calls
– Consultancy online
Prompt templates examples:
– Business Consultant to Start Your Company
– Chasing Calls from Accelerators and VC
Understanding Costs and Interactions
‘Max Words’ refers to the maximum number of words that can be processed by an AI model in a single interaction. This limit includes all chat input, chat output, and the content from any attachments used during the interaction.
The total cost for an interaction in coins is determined by the combined word count of the chat input, chat output, and any included attachment content. The ‘Coins 🪙 per 100 words’ rate specific to the selected AI model will apply.
When attachments are used, the total word count includes all text from the attachments—whether files or URLs—along with the whole conversation in the chat thus far, plus the new message and the AI’s anticipated response.
Various attachment types such as .docx, .txt, .pptx, .xlsx, .pdf files, YouTube video URLs, and web pages can be included. Each contributes to the overall word count that the AI model processes, which in turn affects coin cost.
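The cost rule described above can be sketched as a short calculation. The rates come from the comparison table; the helper function and the rounding behavior (billing whole 100-word increments) are illustrative assumptions, not Straico's exact billing code.

```python
from math import ceil

# Coins per 100 words, taken from the comparison table above.
COINS_PER_100_WORDS = {
    "OpenAI: GPT-4 Turbo 128K": 8,
    "Anthropic: Claude 3 Haiku": 1,
    "Mistral: Mixtral 8x7B Instruct (beta)": 2,
}

def estimate_cost(model: str, input_words: int, output_words: int,
                  attachment_words: int = 0) -> int:
    """Every word in the interaction counts: input, output, attachments."""
    total_words = input_words + output_words + attachment_words
    # Assumption: billed in whole 100-word increments.
    return ceil(total_words / 100) * COINS_PER_100_WORDS[model]

# A 500-word prompt, a 300-word reply, and a 1,200-word attachment on
# GPT-4 Turbo: 2,000 words at 8 coins per 100 words = 160 coins.
print(estimate_cost("OpenAI: GPT-4 Turbo 128K", 500, 300, 1200))  # → 160
```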
There are several strategies for cost-effective interactions. Starting a new chat session reduces the word count by clearing previous context. For those seeking value, Straico provides models such as Gemini Pro on the proprietary side and Mixtral 8x7B on the open-source side, both at an economical rate of 1 coin per 100 words. Furthermore, Straico offers GPT-3.5 Turbo free of charge, allowing unlimited interactions without impacting your coin balance.
Pricing for image generation on Straico varies depending on the size of the image and the AI model chosen. When generating images with DALL·E 3 via our chat assistant, the coin cost is approximately 210 for high-resolution 2048×2048-pixel images, and around 20 coins for smaller 512×512-pixel images.
For those opting to use Stable Diffusion in our dedicated image generation section, we offer straightforward pricing—regardless of the image size, each creation is just 20 coins.
With both models delivering impeccable quality, you can choose the best option that matches your requirements and budget.
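As a quick comparison aid, the prices quoted above can be put into a small lookup. The engine keys, size tiers, and the helper function are illustrative assumptions based solely on the figures in this section.

```python
# Prices quoted in this section (coins per image).
DALLE3_COST = {"2048x2048": 210, "512x512": 20}
STABLE_DIFFUSION_FLAT_COST = 20  # flat rate, regardless of size

def cheapest_image_option(size: str):
    """Return (engine, coins) for the cheaper way to render `size`."""
    options = {"stable-diffusion": STABLE_DIFFUSION_FLAT_COST}
    if size in DALLE3_COST:
        options["dalle-3"] = DALLE3_COST[size]
    return min(options.items(), key=lambda kv: kv[1])

# At high resolution, Stable Diffusion's flat 20 coins beats
# DALL·E 3's 210 coins.
print(cheapest_image_option("2048x2048"))  # → ('stable-diffusion', 20)
```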
‘Image input capability’ refers to the ability of an AI model to accept and interpret visual information. For instance, with our GPT-4 Vision model, you can upload an image directly into the chat assistant and ask queries about its content, just as readily as if you were discussing text. This means the model can analyze the image and engage in a detailed dialogue about what it depicts.
‘Real-time capabilities’ denote a model’s ability to leverage the latest information available up to the moment of interaction. This is exemplified by our Perplexity: Sonar 8x7B Online model, which can pull in the most recent data to enrich its responses, ensuring you receive the most up-to-date content possible. This feature is invaluable for tasks requiring current knowledge and insights.