POST /v1/chat/completions
Chat API (General)
OpenAI-compatible general chat endpoint.
- Works with OpenAI-, Claude-, and Gemini-style chat models.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID. |
| messages | array<object> | Yes | Conversation history. |
| temperature | number | No | Sampling temperature. |
| top_p | number | No | Nucleus sampling control. |
| max_tokens | number | No | Maximum output tokens. |
| stream | boolean | No | SSE streaming switch. |
| response_format | object | No | Structured output options. |
| stop | string or string[] | No | Stop sequences. |
| user | string | No | End-user identifier. |
Body demo
```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 512,
  "stream": false
}
```
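A minimal Python sketch of calling this endpoint with the standard library, assuming the OpenAI-style response shape shown above; `build_payload` and `chat` are hypothetical helper names:

```python
import json
import urllib.request

API_URL = "https://api.valueapi.ai/v1/chat/completions"

def build_payload(model: str, user_text: str, **options) -> dict:
    """Assemble a request body matching the parameter table above."""
    body = {"model": model, "messages": [{"role": "user", "content": user_text}]}
    body.update(options)  # temperature, top_p, max_tokens, stream, ...
    return body

def chat(payload: dict, api_key: str) -> str:
    """POST the payload and return the first choice's message content."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_payload("gpt-4o", "Hello", temperature=0.7, max_tokens=512, stream=False)
```

`chat(payload, api_key)` then performs the actual request; only the payload construction is shown being executed here.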
POST /v1/chat/completions
Chat API (Image Analysis)
Multimodal chat with text + image blocks.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Vision-capable model. |
| messages | array<object> | Yes | Message list. |
| messages[].content[] | array<object> | Yes | Mixed blocks. |
| messages[].content[].type | string | Yes | text or image_url. |
| messages[].content[].text | string | No | Text prompt. |
| messages[].content[].image_url.url | string | No | Image URL or base64 data URL. |
| max_tokens | number | No | Maximum output tokens. |
Body demo
```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Describe this image"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://example.com/images/demo.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 512
}
```
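For local images, the image_url block also accepts a base64 data URL. A small sketch (helper names are hypothetical) that builds the mixed-content message:

```python
import base64

def image_block(data: bytes, mime: str = "image/jpeg") -> dict:
    """Wrap raw image bytes as an image_url block using a base64 data URL."""
    encoded = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}}

def vision_message(prompt: str, *blocks: dict) -> dict:
    """A user message mixing a text block with one or more image blocks."""
    return {"role": "user", "content": [{"type": "text", "text": prompt}, *blocks]}

msg = vision_message("Describe this image", image_block(b"\x89PNG...", "image/png"))
```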
POST /v1/chat/completions
Chat API (Function Calling)
OpenAI-style tools/function calling.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID. |
| messages | array<object> | Yes | Conversation messages. |
| tools | array<object> | Yes | Tool definitions list. |
| tools[].type | string | Yes | Must be function. |
| tools[].function.name | string | Yes | Function name. |
| tools[].function.parameters | object | Yes | JSON schema for arguments. |
| tools[].function.strict | boolean | No | Strict schema mode. |
Body demo
```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Paris?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather by location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string"
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```
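When the model decides to call a tool, the response message carries a tool_calls array, and each call's arguments arrive as a JSON string. A sketch of the execution round-trip, assuming the OpenAI-style response shape; `get_weather` here is a hypothetical local implementation backing the tool definition above:

```python
import json

def get_weather(location: str) -> dict:
    # Hypothetical local implementation of the declared tool.
    return {"location": location, "forecast": "sunny"}

TOOLS = {"get_weather": get_weather}

def run_tool_calls(assistant_message: dict) -> list:
    """Execute each requested tool and build the tool-role messages to send back."""
    replies = []
    for call in assistant_message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # arguments is a JSON string
        replies.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return replies
```

The returned tool messages are appended to the conversation and sent back in a second request so the model can compose its final answer.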
POST /v1/chat/completions
Chat API (o1-o3 Models)
Special request shape for o1/o3 style models.
- Source note: o1 streaming is currently unsupported.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Example: o1-mini. |
| messages | array<object> | Yes | Conversation messages. |
| max_completion_tokens | number | No | Max generated tokens. |
| stream | boolean | No | Set to false; streaming is unsupported. |
Body demo
```json
{
  "model": "o1-mini",
  "messages": [
    {
      "role": "user",
      "content": "Explain this code"
    }
  ],
  "max_completion_tokens": 1024,
  "stream": false
}
```
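Compared with the general chat shape, the o-series payload swaps max_tokens for max_completion_tokens and keeps streaming off. A small builder (hypothetical name) capturing that difference:

```python
def o_series_payload(model: str, user_text: str, max_completion_tokens: int = 1024) -> dict:
    """o1/o3 request shape: max_completion_tokens instead of max_tokens, stream off."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_completion_tokens": max_completion_tokens,
        "stream": False,
    }
```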
POST /v1/chat/completions
gpt-4o-all File Analysis
Model alias for image/file/web-capable use cases.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Use gpt-4o-all. |
| messages | array<object> | Yes | Conversation messages. |
| messages[].content | string | Yes | Format: file URL, a space, then the question. |
| max_tokens | number | No | Maximum output tokens. |
| temperature | number | No | Sampling temperature. |
Body demo
```json
{
  "model": "gpt-4o-all",
  "messages": [
    {
      "role": "user",
      "content": "https://example.com/files/api-doc.pdf Summarize this file"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.2
}
```
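The gpt-4o-all convention packs the file URL and the question into one content string, separated by a single space. A one-line helper (hypothetical name):

```python
def file_question(file_url: str, question: str) -> dict:
    """gpt-4o-all convention: file URL, a space, then the question, in one string."""
    return {"role": "user", "content": f"{file_url} {question}"}
```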
POST /v1/chat/completions
gpt-4-all File Analysis
GPT-4 based alias with file-analysis style input.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Use gpt-4-all. |
| messages | array<object> | Yes | Conversation messages. |
| messages[].content | string | Yes | Format: file URL, a space, then the question. |
| max_tokens | number | No | Maximum output tokens. |
| temperature | number | No | Sampling temperature. |
Body demo
```json
{
  "model": "gpt-4-all",
  "messages": [
    {
      "role": "user",
      "content": "https://example.com/files/api-doc.pdf Summarize this file"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.2
}
```
POST /v1/completions
Legacy Text Completions
Classic prompt-completion endpoint.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Example: gpt-3.5-turbo-instruct. |
| prompt | string | Yes | Input prompt text. |
| max_tokens | number | No | Maximum output tokens. |
| temperature | number | No | Sampling temperature. |
| top_p | number | No | Nucleus sampling control. |
| frequency_penalty | number | No | Frequency penalty. |
| presence_penalty | number | No | Presence penalty. |
| logprobs | number | No | Top token probabilities count. |
Body demo
```json
{
  "model": "gpt-3.5-turbo-instruct",
  "prompt": "The weather is good",
  "max_tokens": 100,
  "temperature": 0.7,
  "top_p": 1
}
```
POST /v1/chat/completions
Claude (OpenAI Format)
Claude via OpenAI-compatible payload, including PDF/image-style blocks.
- Source note: use the native format (/v1/messages) for prompt caching behavior.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Claude model ID. |
| messages | array<object> | Yes | Conversation messages. |
| messages[].content[] | array<object> | No | Supports text/file/file_url/image_url blocks. |
| max_tokens | number | No | Maximum output tokens. |
| stream | boolean | No | Streaming switch. |
| thinking | object | No | Thinking config for supported Claude models. |
Body demo
```json
{
  "model": "claude-3-5-sonnet-20241022",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Summarize this PDF"
        },
        {
          "type": "file_url",
          "file_url": {
            "url": "https://example.com/files/api-doc.pdf"
          }
        }
      ]
    }
  ],
  "max_tokens": 1024,
  "stream": false
}
```
POST /v1/messages
Claude (Native Format)
Claude native endpoint with native content blocks and cache controls.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/messages
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Native-capable model ID. |
| messages | array<object> | Yes | Conversation messages. |
| messages[].content[] | array<object> | No | text/image/document blocks. |
| messages[].content[].source | object | No | base64 or url source object. |
| messages[].content[].cache_control | object | No | Optional cache config. |
| max_tokens | number | No | Maximum output tokens. |
| thinking | object | No | Thinking config on supported models. |
Body demo
```json
{
  "model": "claude-3-5-sonnet-20240620",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Summarize this PDF"
        },
        {
          "type": "document",
          "source": {
            "type": "url",
            "url": "https://example.com/files/api-doc.pdf"
          }
        }
      ]
    }
  ],
  "max_tokens": 1024
}
```
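For local files, the native document block takes a base64 source instead of a URL, and cache_control can be attached per block. A sketch with a hypothetical helper name; the ephemeral cache type follows Anthropic's native convention:

```python
import base64

def document_block(data: bytes, media_type: str = "application/pdf",
                   cache: bool = False) -> dict:
    """Native document block with a base64 source and optional cache_control."""
    block = {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(data).decode("ascii"),
        },
    }
    if cache:
        block["cache_control"] = {"type": "ephemeral"}
    return block
```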
POST /v1/chat/completions
Gemini (OpenAI Format)
Gemini via OpenAI format with file analysis and search-style options.
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Gemini model ID. |
| messages | array<object> | Yes | Conversation messages. |
| messages[].content[] | array<object> | No | text/file/file_url/image_url blocks. |
| max_tokens | number | No | Maximum output tokens. |
| tools | array<object> | No | Function tools (e.g. googleSearch). |
| reasoning_effort | string | No | low, medium, or high for compatible models. |
| extra_body.google.thinking_config | object | No | Gemini thinking controls. |
Body demo
```json
{
  "model": "gemini-2.5-pro",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Summarize this"
        },
        {
          "type": "file",
          "file": {
            "filename": "api-doc.pdf",
            "file_data": "data:application/pdf;base64,JVBER..."
          }
        }
      ]
    }
  ],
  "reasoning_effort": "medium",
  "max_tokens": 1024
}
```
POST /v1beta/models/{model}:{action}
Gemini (Native Format)
Gemini native endpoint using generateContent / streamGenerateContent.
- Source note: prefer lowerCamelCase field names.
Headers
- Content-Type: application/json
- x-goog-api-key: <API_KEY> (or Bearer auth)
Full URL
https://api.valueapi.ai/v1beta/models/{model}:{action}
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| contents | array<object> | Yes | Conversation content list. |
| contents[].role | string | Yes | Role, e.g. user. |
| contents[].parts | array<object> | Yes | Parts array. |
| contents[].parts[].text | string | Yes | Text content. |
Body demo
```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "Hello"
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7
  }
}
```
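The {model} and {action} path segments are filled in per request: action is generateContent for a single response or streamGenerateContent for streaming. A minimal URL builder:

```python
def gemini_url(model: str, stream: bool = False) -> str:
    """Build the native endpoint URL; action selects streaming or not."""
    action = "streamGenerateContent" if stream else "generateContent"
    return f"https://api.valueapi.ai/v1beta/models/{model}:{action}"
```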
POST /v1/chat/completions
GPTs
Call GPTs using the model naming pattern gpt-4-gizmo-(gizmo_id).
Headers
- Authorization: Bearer <API_KEY>
- Content-Type: application/json
Full URL
https://api.valueapi.ai/v1/chat/completions
Body Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Format: gpt-4-gizmo-(id). |
| messages | array<object> | Yes | Conversation messages. |
| max_tokens | number | No | Maximum output tokens. |
| temperature | number | No | Sampling temperature. |
| stream | boolean | No | Streaming switch. |
Body demo
```json
{
  "model": "gpt-4-gizmo-g-bo0FiWLY7",
  "messages": [
    {
      "role": "user",
      "content": "What is the potential of quantum computing?"
    }
  ],
  "max_tokens": 512,
  "stream": false
}
```
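The model field is the only GPTs-specific piece; composing it from a gizmo ID:

```python
def gizmo_model(gizmo_id: str) -> str:
    """Compose the GPTs model name per the gpt-4-gizmo-(id) pattern."""
    return f"gpt-4-gizmo-{gizmo_id}"
```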