NEW API PARAMETERS ALERT

Hey there, developers! We’ve got some exciting news to share with you all.

The Straico API has just been updated with two brand-new parameters that will give you even more control over the responses generated by all ML models. Get ready to take your projects to the next level!


🎯 MAX_TOKENS

  • This parameter sets the maximum number of tokens (roughly word-pieces, not whole words) in the response.
  • Adjust this to control the length and verbosity of the output.
  • Perfect for tailoring the response length to your specific needs.
  • Be sure to review each model’s max_output limit — the maximum number of tokens that model can generate. You can find this information in the model details at:
    • /v0/models
    • /v1/models
  • Setting the max_tokens parameter higher than the model’s max_output may result in truncated responses.
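A simple way to avoid the truncation described above is to clamp your requested max_tokens to the model's published limit. This is a minimal sketch: the `max_output` field name is taken from the model details mentioned above, but the exact shape of the `/v0/models` / `/v1/models` response should be confirmed against the Straico documentation.

```python
# Hypothetical helper: clamp a requested max_tokens to a model's max_output.
# The per-model "max_output" field is an assumption about the /v1/models
# response shape; verify it against the Straico model details.
def safe_max_tokens(requested: int, model_info: dict) -> int:
    """Return a max_tokens value that never exceeds the model's max_output."""
    max_output = model_info.get("max_output")
    if max_output is None:
        return requested  # no published limit; use the requested value
    return min(requested, max_output)

# Example with a made-up model entry:
model_info = {"name": "example-model", "max_output": 4096}
print(safe_max_tokens(8000, model_info))  # capped to 4096
print(safe_max_tokens(1000, model_info))  # unchanged: 1000
```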

🔥 TEMPERATURE

  • The temperature parameter adjusts the “creativity” of the AI model.
  • Lower values (0.1-1.0) produce more focused, predictable responses.
  • Higher values (1.0-2.0) lead to more diverse, imaginative outputs.
  • Experiment with different temperatures to find the perfect balance for your use case.

🚀 Getting Started

To start using these new parameters, simply include them in your API requests to the following endpoints:

  • /v0/prompt/completion
  • /v1/prompt/completion
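Here is a sketch of a request to the `/v1/prompt/completion` endpoint using both new parameters. Only `max_tokens`, `temperature`, and the endpoint path come from this announcement; the other payload field names (`model`, `message`) and the bearer-token header are assumptions to be checked against the Straico API reference.

```python
import json

# Build a completion request with the new max_tokens and temperature
# parameters. Field names other than max_tokens/temperature are assumptions;
# confirm the exact schema in the Straico API docs.
def build_request(api_key: str, model: str, message: str,
                  max_tokens: int = 512, temperature: float = 0.7):
    url = "https://api.straico.com/v1/prompt/completion"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,                 # assumed field name
        "message": message,             # assumed field name
        "max_tokens": max_tokens,       # cap on generated tokens
        "temperature": temperature,     # 0.1-2.0 per the ranges above
    }
    return url, headers, json.dumps(body).encode()

url, headers, data = build_request("YOUR_API_KEY", "example-model",
                                   "Summarize this in one sentence.",
                                   max_tokens=256, temperature=0.3)
print(url)
# To send, pass url/headers/data to your HTTP client of choice, e.g.:
# urllib.request.urlopen(urllib.request.Request(url, data=data,
#                        headers=headers, method="POST"))
```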

We can’t wait to see how you’ll leverage these new features to build even more powerful and engaging AI-driven applications. As always, feel free to reach out to our support team if you have any questions or need further assistance.

Happy coding, everyone! 🚀

Check the documentation here.

Ready to Revolutionize Your AI Experience?

Join our all-in-one AI platform and transform your workflow. Tap into the power of advanced generative models for text, images, and audio, all in one place.