New Feature: Visible Reasoning Process for Compatible Models
Reasoning models are different from standard models: they think before answering, and that thinking process reads like a person reasoning out loud. Seeing this process matters because it helps you evaluate the model's response and gives you more transparency into how these LLMs work.
How It Works
Now, when you use a compatible model, the response includes its thinking process, and you can collapse or expand it with the button at the top of the response. You can immediately check how it looks in this shared chat example.
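Under the hood, providers that support this feature return the thinking separately from the final answer, which is what makes a collapsible view possible. As a rough sketch only (using DeepSeek's OpenAI-compatible API and its documented `reasoning_content` field as an illustrative provider, not this product's actual implementation), a client could read both parts like this:

```python
# Minimal sketch: reading a reasoning model's thinking and its final answer
# as two separate fields. DeepSeek's API is used here purely as an example
# of a provider that exposes reasoning this way; the API key is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",  # placeholder, not a real credential
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek R1
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)

message = response.choices[0].message
print("Reasoning (the part shown in the collapsible block):")
print(message.reasoning_content)  # the model's step-by-step thinking
print("Answer:")
print(message.content)  # the final response shown by default
```

Because the two parts arrive separately, the interface can show the answer by default and let you toggle the reasoning on demand.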

Compatible Reasoning Models
The reasoning models compatible with this feature are:
- DeepSeek R1
- DeepSeek R1: Nitro
- Claude 3.7 Sonnet Reasoning
- Perplexity Sonar Reasoning
- Perplexity Sonar Deep Research
- Qwen3 235B A22B
When Reasoning Models Excel
This reasoning process allows these models to excel in scenarios that require breaking a complex problem down into logical steps, such as:
Analytical Tasks
- Mathematics
- Programming
- Strategic planning
Problem-Solving
- Complex puzzles
- Multi-step reasoning
- Logic-based challenges
These are problems where having the right information is not enough: the thinking itself is the most important component.
Benefits of Visible Reasoning
- Transparency: See exactly how the model arrives at its conclusions
- Evaluation: Better assess the quality and logic of model responses
- Learning: Understand the step-by-step thought process
- Debugging: Identify where reasoning might go wrong
- Collapsible: Hide or show the reasoning as needed
This feature gives you direct insight into how the model reaches its conclusions, making your interactions with reasoning models more transparent and educational.