POST /perplexity/sonar-deep-research
Sonar Deep Research — exhaustive research & comprehensive reports
curl --request POST \
  --url https://api.aisa.one/apis/v1/perplexity/sonar-deep-research \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "sonar-deep-research",
  "messages": [
    {
      "role": "user",
      "content": "Write a comprehensive analysis of the global semiconductor supply chain risks in 2026"
    }
  ]
}
'
{
  "id": "<string>",
  "model": "<string>",
  "object": "chat.completion",
  "created": 123,
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "<string>",
        "content": "<string>"
      },
      "finish_reason": "stop"
    }
  ],
  "citations": [
    "<string>"
  ],
  "search_results": [
    {
      "title": "<string>",
      "url": "<string>",
      "snippet": "<string>",
      "date": "<string>",
      "source": "<string>"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123,
    "search_context_size": "<string>"
  }
}
Sonar Deep Research is Perplexity’s most thorough model. It conducts exhaustive multi-step web searches and generates comprehensive, well-cited research reports.

Best for: in-depth research, comprehensive analysis, literature reviews, market research reports.

Model: sonar-deep-research
Note: Deep research requests may take significantly longer to complete (up to several minutes) due to the exhaustive multi-step search process.

Example

curl -X POST "https://api.aisa.one/apis/v1/perplexity/sonar-deep-research" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonar-deep-research",
    "messages": [
      {"role": "user", "content": "Write a comprehensive analysis of the global semiconductor supply chain risks in 2026"}
    ]
  }'
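Because deep research requests can run for several minutes, the HTTP client's timeout should be raised well above its usual default. A minimal Python sketch using only the standard library (the URL and header format come from the example above; the 600-second timeout is an assumption, not a documented limit):

```python
import json
import urllib.request

API_URL = "https://api.aisa.one/apis/v1/perplexity/sonar-deep-research"

def build_payload(prompt: str, model: str = "sonar-deep-research") -> dict:
    """Build the JSON request body for a deep research query."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def deep_research(prompt: str, api_key: str, timeout: float = 600.0) -> dict:
    """POST the query and block until the report is ready (may take minutes)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Splitting payload construction from the network call keeps the request body easy to inspect and test without hitting the API.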

Response

The response follows the OpenAI chat completion format, with additional citations and search_results fields. Deep research responses are typically much longer and more detailed than other models.
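Given a response shaped like the schema above, the report text, citations, and search results can be extracted as sketched below (`extract_report` is a hypothetical helper name; the field names follow the documented sample response):

```python
def extract_report(response: dict) -> dict:
    """Pull the report text, citations, and search results out of a
    chat-completion-style response."""
    choice = response["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice.get("finish_reason"),
        # Present when return_citations is true (the default).
        "citations": response.get("citations", []),
        "search_results": response.get("search_results", []),
    }

# Minimal response matching the documented schema:
sample = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Report text..."},
            "finish_reason": "stop",
        }
    ],
    "citations": ["https://example.com/source"],
    "search_results": [
        {"title": "Source", "url": "https://example.com/source",
         "snippet": "...", "date": "2026-01-01", "source": "example.com"}
    ],
}
report = extract_report(sample)
```

Using `.get()` with empty-list defaults keeps the helper safe for responses where citations were disabled via return_citations.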

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
model
enum<string>
required

The Sonar model to use.

Available options:
sonar,
sonar-pro,
sonar-reasoning-pro,
sonar-deep-research
messages
object[]
required

A list of messages comprising the conversation so far.

max_tokens
integer

The maximum number of tokens to generate in the response.

temperature
number
default:0.2

Sampling temperature between 0 and 2. Lower values make output more focused and deterministic.

Required range: 0 <= x <= 2
top_p
number
default:0.9

Nucleus sampling parameter. The model considers only the tokens whose cumulative probability mass is within top_p.

Required range: 0 <= x <= 1
top_k
integer
default:0

The number of highest-probability tokens to keep for top-k filtering. A value of 0 disables top-k filtering.

Required range: 0 <= x <= 2048
stream
boolean
default:false

Whether to stream the response using server-sent events.

search_context
enum<string>
default:low

Controls how much search context to use. Affects per-request cost.

Available options:
low,
medium,
high
frequency_penalty
number
default:1

Penalizes new tokens based on their existing frequency in the text so far. Positive values decrease the likelihood of repeating the same line verbatim.

Required range: 0 <= x <= 2
presence_penalty
number
default:0

Penalizes new tokens based on whether they appear in the text so far. Positive values increase the likelihood of talking about new topics.

Required range: -2 <= x <= 2
return_citations
boolean
default:true

Whether to return citations and search results in the response.

search_recency_filter
enum<string>

Filter search results by recency.

Available options:
month,
week,
day,
hour
search_domain_filter
string[]

Limit search to specific domains.
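Putting the optional body parameters together, a tuned request body might look like the sketch below (the values and domains are illustrative, not recommendations):

```python
# Illustrative request body combining the optional parameters documented above.
payload = {
    "model": "sonar-deep-research",
    "messages": [
        {"role": "user",
         "content": "Summarize recent developments in EUV lithography"}
    ],
    "max_tokens": 4096,                # cap on generated tokens
    "temperature": 0.2,                # default; lower = more deterministic
    "top_p": 0.9,                      # default nucleus sampling mass
    "search_context": "high",          # more context, higher per-request cost
    "search_recency_filter": "month",  # only sources from the last month
    "search_domain_filter": ["nature.com", "ieee.org"],
    "return_citations": True,
}
```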

Response

200 - application/json

Successful response with AI answer and citations

id
string

Unique identifier for the completion.

model
string

The model used for the completion.

object
string
Example:

"chat.completion"

created
integer

Unix timestamp of when the completion was created.

choices
object[]

The list of completion choices generated by the model.
citations
string[]

List of source URLs referenced in the answer.

search_results
object[]

Detailed search results with titles, snippets, and URLs.

usage
object

Token usage statistics for the completion, including the search context size used.