Engineering

OpenClaw API: Complete Developer Guide and Endpoint Reference 2026

Chris DiYanni · Founder & AI/ML Engineer

OpenClaw exposes an OpenAI-compatible HTTP API on port 18789. This guide covers every endpoint, authentication, streaming, multimodal inputs, and how to secure the gateway for production use.

If you have an OpenClaw agent running, it is already serving an HTTP API. That API lets you send messages, receive responses, invoke tools, and stream output in real time. You can build a custom chat UI, power a Slack bot backend, process webhooks, or call your agent from any application that can make an HTTP request.

This guide covers every endpoint, the authentication model, request and response shapes, streaming, vision inputs, integration patterns, and the security steps you must take before exposing the gateway to any external traffic. For developers using ClawTrust, the final section shows how the managed HTTPS gateway works and how to point your code at it with zero configuration.

The OpenClaw API: What It Is and How It Works

OpenClaw is a local-first AI agent runtime. When it starts, it launches an HTTP server on port 18789 that functions as a gateway to everything the agent can do: conversations, tool calls, skill invocations, memory lookups, and more. That gateway is what the OpenClaw API refers to.

The API design follows the OpenAI HTTP API format. If you have written code to call https://api.openai.com/v1/chat/completions, you already understand the structure. The same request shape works against an OpenClaw gateway. This compatibility is intentional: it lets you swap an OpenClaw agent into existing integrations with minimal friction.

What makes it different from the OpenAI API is what happens on the other end. OpenAI routes your message to a hosted model. OpenClaw routes your message to a running agent process with persistent memory, installed skills, configured channels, and access to external tools. The agent uses whatever model you have configured (Claude, GPT-4o, Gemini, Llama) and has full access to its skill set when generating a response.

The two primary endpoints are:

  • POST /v1/chat/completions - text conversations, tool use, skill invocation
  • POST /v1/responses - vision and multimodal inputs including images

Every request requires an authentication token. By default, the gateway binds to all network interfaces, which means anyone who can reach the server on port 18789 can attempt requests against your agent. Securing that gateway is not optional for production deployments.

OpenClaw API Authentication: Tokens and Headers

Authentication uses a standard Bearer token scheme. Every request to the OpenClaw API must include an Authorization header with a token that matches one of the configured gateway tokens.

Authorization: Bearer {your-token}

Tokens are defined in the OpenClaw configuration file under the gateway section:

gateway:
  auth:
    mode: "token"
    tokens:
      - "your-secret-token-here"

A few rules that matter in practice:

  • Use mode: "token". Never use mode: "none" in production. An unauthenticated gateway means anyone who finds the port can invoke your agent and rack up API costs.
  • Generate tokens with a cryptographically secure random source. A 32-byte hex string is a reasonable minimum (see the example after this list).
  • Never expose tokens in client-side JavaScript, browser code, or public repositories.
  • Rotate tokens if a token is ever committed to version control or otherwise exposed.
  • Use separate tokens for separate integrations so you can revoke access to one without affecting others.
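
For example, a 32-byte hex token can be generated with Node's built-in crypto module (the shell equivalent is openssl rand -hex 32):

import { randomBytes } from 'node:crypto';

// 32 random bytes -> a 64-character hex string, suitable as a gateway token
console.log(randomBytes(32).toString('hex'));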

On ClawTrust, authentication tokens are generated at provisioning time and stored in the dashboard. You never have to write the gateway configuration manually.

Requests without a valid token receive a 401 Unauthorized response. Requests with a malformed header (no Bearer prefix, missing token) are also rejected.

POST /v1/chat/completions: The Primary Endpoint

This is the main endpoint for sending messages to an OpenClaw agent. The request format mirrors the OpenAI Chat Completions API, which means the messages array, model field, and most standard parameters work the same way.

A complete request using curl against a ClawTrust-hosted gateway:

curl -X POST https://{tenantId}.clawtrust.ai/v1/chat/completions \
  -H "Authorization: Bearer {your-token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openclaw",
    "messages": [
      {"role": "user", "content": "Summarize the top 3 items in my GitHub notifications"}
    ]
  }'

The same request against a locally hosted gateway:

curl -X POST http://localhost:18789/v1/chat/completions \
  -H "Authorization: Bearer {your-token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openclaw",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant with access to GitHub."},
      {"role": "user", "content": "Summarize the top 3 items in my GitHub notifications"}
    ]
  }'

A typical (non-streaming) response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1739900000,
  "model": "openclaw",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here are the top 3 items in your GitHub notifications:\n\n1. ..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 120,
    "total_tokens": 165
  }
}

Key fields in the request body:

Field        Type     Required  Notes
model        string   Yes       Use "openclaw" or the specific model identifier configured
messages     array    Yes       Array of {role, content} objects. Roles: system, user, assistant
stream       boolean  No        Set to true for SSE streaming. Default false.
temperature  number   No        0.0 to 1.0. Passed through to the underlying model.
max_tokens   number   No        Maximum tokens in the response.

When the agent makes tool calls to fulfill the request (fetching GitHub notifications in the example above), those happen server-side. The API response contains the final answer after all tool calls complete, not intermediate tool call events. This keeps the integration simple: you send a message and get back a completed response.

Streaming Responses with Server-Sent Events

For real-time output, set "stream": true in your request body. The response then switches to Server-Sent Events (SSE): a stream of text lines, each prefixed with data: and carrying an incremental JSON chunk as the agent generates its response.

A streaming request:

curl -X POST https://{tenantId}.clawtrust.ai/v1/chat/completions \
  -H "Authorization: Bearer {your-token}" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "model": "openclaw",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a short summary of today'\''s weather in NYC"}
    ]
  }'

The response stream looks like this:

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"role":"assistant"},"index":0}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"Today"},"index":0}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" in"},"index":0}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" NYC"},"index":0}]}

data: [DONE]

Each chunk contains a delta object with a content field. Concatenate the content values as they arrive to reconstruct the full response. The stream ends with a data: [DONE] sentinel.

Here is how to consume a streaming response in Node.js:

import { fetchEventSource } from '@microsoft/fetch-event-source';

await fetchEventSource('https://{tenantId}.clawtrust.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer {your-token}',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openclaw',
    stream: true,
    messages: [{ role: 'user', content: 'Hello' }],
  }),
  onmessage(event) {
    if (event.data === '[DONE]') return;
    const chunk = JSON.parse(event.data);
    const delta = chunk.choices[0]?.delta?.content ?? '';
    process.stdout.write(delta);
  },
});

If you prefer the native fetch API, read the response body as a ReadableStream and decode each chunk with TextDecoder. The pattern is the same: split on newlines, parse data: prefixed lines as JSON, extract delta.content.
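
A minimal sketch of that native-fetch pattern (Node 18+), reusing the local gateway URL and placeholder token from earlier:

const res = await fetch('http://localhost:18789/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer {your-token}',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openclaw',
    stream: true,
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Keep any partial line in the buffer until the next network chunk arrives
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') continue;
    const chunk = JSON.parse(payload);
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}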

Streaming is particularly valuable for chat UI components where you want to show text appearing word by word, and for long-running tasks where you need to keep a connection alive rather than waiting for the full response.

POST /v1/responses: Vision and Multimodal Inputs

The /v1/responses endpoint handles inputs that include images. It uses a slightly different request format from /v1/chat/completions: instead of a messages array, it uses an input array with typed content blocks.

A request sending an image with a question about it:

curl -X POST https://{tenantId}.clawtrust.ai/v1/responses \
  -H "Authorization: Bearer {your-token}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openclaw",
    "input": [
      {
        "type": "input_text",
        "text": "What is in this image?"
      },
      {
        "type": "input_image",
        "source": {
          "type": "base64",
          "media_type": "image/jpeg",
          "data": "/9j/4AAQSkZJRgABAQAA..."
        }
      }
    ]
  }'

The input array supports these block types:

Block Type   Fields                                        Purpose
input_text   text (string)                                 Text prompt or question
input_image  source.type, source.media_type, source.data   Base64-encoded image

For the image source, type must be "base64". The media_type field accepts standard MIME types: image/jpeg, image/png, image/gif, image/webp. The data field is the raw base64-encoded image bytes without a data URI prefix.

The response format from /v1/responses differs from /v1/chat/completions. Rather than a choices array, it returns an output array. Parse accordingly in your integration code.

This endpoint is used by ClawTrust's Slack integration for handling image attachments sent to agents in Slack channels. When a user sends an image in a Slack DM to an agent, the integration downloads the file, base64-encodes it, and sends it to /v1/responses. The agent's vision capabilities (if the configured model supports them) handle the rest.

Keep in mind that vision inputs can be large. A single high-resolution JPEG can be several megabytes when base64-encoded. If you are building an integration that accepts user-uploaded images, resize or compress them before sending to keep latency low and token costs down.
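
As a rough sketch of that preprocessing, here is one way to downscale before encoding using the third-party sharp library (the library choice and file path are illustrative, not part of OpenClaw):

import sharp from 'sharp';

// Downscale and re-encode so the base64 payload stays small
const resized = await sharp('./photo.jpg')               // illustrative path
  .resize({ width: 1024, withoutEnlargement: true })
  .jpeg({ quality: 80 })
  .toBuffer();

const body = {
  model: 'openclaw',
  input: [
    { type: 'input_text', text: 'What is in this image?' },
    {
      type: 'input_image',
      source: {
        type: 'base64',
        media_type: 'image/jpeg',
        data: resized.toString('base64'), // raw base64, no data URI prefix
      },
    },
  ],
};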

Building Integrations on Top of OpenClaw

The OpenClaw API lets you use your agent as a backend service for a wide range of applications. Because the endpoint is HTTP with standard JSON, any programming language or platform that can make HTTP requests can call it.

Common integration patterns:

Custom chat interfaces. Embed an agent-powered chat widget in your own web application. Your frontend sends user messages to your backend, your backend calls the OpenClaw API (keeping the token server-side), and streams the response back to the browser. The user never touches the OpenClaw gateway directly.

Slack and Discord bot backends. Receive events from Slack/Discord, extract the message text, call /v1/chat/completions, and post the response back. The agent has access to all its installed skills, so it can look things up, call APIs, and respond with real information rather than canned replies.

Webhook processors. Connect third-party services to your agent via webhooks. A new support ticket arrives in Zendesk, your webhook handler sends it to the agent, and the agent drafts a response or routes it to the right team using its configured tools.

Mobile app backends. iOS and Android apps call your server, your server calls the OpenClaw API, and the response flows back to the mobile client. The agent token never leaves your server infrastructure.

Automation platform triggers. n8n, Zapier, and Make can make HTTP requests to custom endpoints. Point them at a lightweight proxy that forwards to the OpenClaw API. This lets you chain agent actions into existing automation workflows without writing code.
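
As a sketch of such a proxy, here is a minimal Express server; the /ask route, port, and OPENCLAW_TOKEN environment variable are illustrative choices, not OpenClaw conventions:

import express from 'express';

// Illustrative proxy: accepts simple JSON posts and forwards to the gateway
const app = express();
app.use(express.json());

app.post('/ask', async (req, res) => {
  const upstream = await fetch('http://localhost:18789/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENCLAW_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openclaw',
      messages: [{ role: 'user', content: req.body.message }],
    }),
  });
  const data = await upstream.json();
  res.json({ answer: data.choices[0].message.content });
});

app.listen(3000);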

Here is a minimal Node.js function that calls the OpenClaw API and returns the agent's response:

async function askAgent(message: string): Promise<string> {
  const response = await fetch(
    'https://{tenantId}.clawtrust.ai/v1/chat/completions',
    {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer {your-token}',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openclaw',
        messages: [{ role: 'user', content: message }],
      }),
    }
  );

  if (!response.ok) {
    throw new Error(`OpenClaw API error: ${response.status}`);
  }

  const data = await response.json();
  return data.choices[0].message.content;
}

// Usage
const answer = await askAgent('What meetings do I have tomorrow?');
console.log(answer);

For production integrations, add retry logic with exponential backoff, error handling for 5xx responses, and request timeouts. Agent responses that require multiple tool calls can take several seconds, so set your HTTP client timeout to at least 60 seconds for complex queries.
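
A sketch of that hardening, assuming gatewayUrl and token variables like those used below; the retry count and delays are arbitrary starting points:

async function askAgentWithRetry(message: string, maxAttempts = 3): Promise<string> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch(gatewayUrl, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${token}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          model: 'openclaw',
          messages: [{ role: 'user', content: message }],
        }),
        signal: AbortSignal.timeout(60_000), // agents running tool calls can be slow
      });

      if (response.status >= 500 && attempt < maxAttempts) {
        // Back off exponentially before retrying a server-side failure
        await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
        continue;
      }
      if (!response.ok) throw new Error(`OpenClaw API error: ${response.status}`);

      const data = await response.json();
      return data.choices[0].message.content;
    } catch (err) {
      if (attempt === maxAttempts) throw err; // out of retries
      await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
    }
  }
  throw new Error('unreachable');
}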

If your integration needs to maintain conversation history across multiple turns, you are responsible for storing and replaying the message array. Pass the full conversation history on each request:

const messages = [
  { role: 'user', content: 'What is the status of issue #42?' },
  { role: 'assistant', content: 'Issue #42 is currently open...' },
  { role: 'user', content: 'Who was it assigned to?' },
];

const response = await fetch(gatewayUrl, {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'openclaw', messages }),
});
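
Then append the assistant message from each response before the next turn:

// Append the agent's reply so the next request carries the full history
const data = await response.json();
messages.push(data.choices[0].message);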

OpenClaw API Security: Protecting Your Gateway

This section covers the most important topic in this guide. Misconfiguring the OpenClaw gateway is the single most common way self-hosted deployments get compromised.

The default is unsafe. Out of the box, OpenClaw binds to 0.0.0.0, which means port 18789 is accessible on every network interface on the server. If your VPS has a public IP address and no firewall blocking port 18789, your agent is reachable by anyone on the internet. Anyone who can reach the port and guess or find your token can invoke your agent, read your conversation history, and run up API costs.

The required steps for a secure self-hosted deployment:

1. Bind to loopback only. Configure the gateway to bind to 127.0.0.1 (localhost) instead of 0.0.0.0. This makes the port inaccessible from outside the server regardless of firewall rules:

gateway:
  bind: "loopback"    # 127.0.0.1 only, never 0.0.0.0
  port: 18789
  auth:
    mode: "token"
    tokens:
      - "your-secret-token"

2. Use a firewall. Even with loopback binding, configure your server's firewall (ufw, iptables, cloud security groups) to explicitly deny external access to port 18789. Defense in depth: both the application bind and the network layer should block access.
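
With ufw, for example, the rule looks like this (syntax varies across firewalls):

# Explicitly block external access to the gateway port
sudo ufw deny 18789/tcp
sudo ufw status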

3. Add a reverse proxy with TLS for external access. If you need to expose the API outside the server, run a reverse proxy (nginx, Caddy, or Traefik) that terminates TLS and forwards requests to localhost:18789. Never expose port 18789 directly to the internet. The proxy handles HTTPS certificates and can add additional access controls.
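
A minimal Caddyfile sketch, with an illustrative hostname; Caddy obtains and renews the certificate automatically:

# Terminate TLS at the proxy, forward to the loopback-bound gateway
agent.example.com {
    reverse_proxy 127.0.0.1:18789
}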

4. Require authentication always. Never set auth.mode: "none" in any environment. Even on a loopback-only binding, authentication is a required control: if another process on the same server is compromised, the loopback restriction does nothing to stop it from calling the gateway.

5. Rotate tokens if exposed. If a token is ever committed to a repository, logged in plaintext, or otherwise exposed, generate a new token and update the configuration immediately. Treat OpenClaw tokens with the same care as database passwords.

These steps sound tedious because they are. Every self-hosted OpenClaw deployment requires getting them right. This is one of the core problems ClawTrust solves.

ClawTrust Gateway: OpenClaw API Over HTTPS Without Port Forwarding

On ClawTrust, every agent gets a dedicated HTTPS gateway at https://{tenantId}.clawtrust.ai. No port forwarding, no SSL certificate management, no reverse proxy configuration. The gateway is live the moment provisioning completes.

The architecture uses Cloudflare tunnels. When your agent's VPS boots, it establishes an outbound encrypted connection to Cloudflare's edge network. Inbound API requests arrive at https://{tenantId}.clawtrust.ai, route through Cloudflare's edge, and reach your agent over that tunnel. The VPS has no inbound ports open. Port 18789 is bound to loopback only and is never reachable from the public internet.

This means the security steps described in the previous section are handled automatically:

  • Gateway binds to 127.0.0.1 in every ClawTrust deployment
  • Cloudflare handles TLS termination and certificate renewal
  • No open inbound ports on the VPS
  • Authentication tokens are required and managed through the dashboard
  • Every request is logged and goes through ClawTrust's EDR pipeline

Your base URL for API calls is:

https://{tenantId}.clawtrust.ai

Replace {tenantId} with your agent's ID from the ClawTrust dashboard. All endpoints work at this base:

POST https://{tenantId}.clawtrust.ai/v1/chat/completions
POST https://{tenantId}.clawtrust.ai/v1/responses

Tokens for ClawTrust-hosted agents are found in the agent settings page in the dashboard. Use them exactly as you would with a self-hosted gateway: Authorization: Bearer {token}.

The Cloudflare tunnel also enables features that are difficult to self-configure: Server-Sent Events work reliably through the tunnel, so streaming responses work out of the box. The tunnel handles reconnection automatically. And because the tunnel is per-tenant and per-agent, there is no shared infrastructure between agents - your API traffic does not share a gateway with anyone else.

Get Your OpenClaw API Gateway in Minutes

ClawTrust provisions a fully secured OpenClaw agent with a dedicated HTTPS gateway. No server setup, no SSL configuration, no port forwarding. Start a 5-day free trial and your API endpoint is live before the trial ends.

Start Free Trial

OpenClaw API Quick Reference

This section summarizes everything in one place for quick access during development.

Base URLs:

  • Self-hosted (local): http://localhost:18789
  • ClawTrust managed: https://{tenantId}.clawtrust.ai

Required headers for every request:

Authorization: Bearer {your-token}
Content-Type: application/json

Endpoint reference:

Endpoint / Header       Method     Purpose                        Notes
/v1/chat/completions    POST       Text conversations, tool use   Primary endpoint. OpenAI-compatible format.
/v1/responses           POST       Vision and multimodal inputs   Use for image inputs. Different request/response shape.
Authorization           Header     Bearer token authentication    Required for all requests. Value: Bearer {token}
Content-Type            Header     JSON body declaration          Required for POST requests. Value: application/json
Streaming               Body flag  Real-time SSE output           Set "stream": true in body. Optionally add Accept: text/event-stream.

Common HTTP status codes:

Status  Meaning       Common Cause
200     Success       Request processed successfully
400     Bad Request   Malformed JSON or missing required fields
401     Unauthorized  Missing or invalid Bearer token
500     Server Error  Agent process error or upstream model failure
503     Unavailable   Agent process not running or tunnel disconnected

Minimal working example (Node.js fetch):

const res = await fetch('https://{tenantId}.clawtrust.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer {your-token}',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openclaw',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);

Minimal working example (Python):

import requests

response = requests.post(
    'https://{tenantId}.clawtrust.ai/v1/chat/completions',
    headers={
        'Authorization': 'Bearer {your-token}',
        'Content-Type': 'application/json',
    },
    json={
        'model': 'openclaw',
        'messages': [{'role': 'user', 'content': 'Hello'}],
    },
    timeout=60,
)
data = response.json()
print(data['choices'][0]['message']['content'])

OpenAI SDK compatibility: Because the endpoint follows the OpenAI format, the OpenAI Python and Node.js SDKs work against OpenClaw with a base URL override:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: '{your-token}',
  baseURL: 'https://{tenantId}.clawtrust.ai/v1',
});

const completion = await client.chat.completions.create({
  model: 'openclaw',
  messages: [{ role: 'user', content: 'What tasks did I complete today?' }],
});

console.log(completion.choices[0].message.content);

This SDK approach means any existing code using the OpenAI Node.js or Python client can be pointed at an OpenClaw gateway by changing two values: the apiKey (to your OpenClaw token) and the baseURL (to your ClawTrust gateway URL or local address). All the streaming helpers, retry logic, and type definitions in the SDK continue to work.
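
For example, the SDK's streaming interface should work unchanged against the gateway, given the SSE support described earlier; this sketch assumes the client configured above:

const stream = await client.chat.completions.create({
  model: 'openclaw',
  stream: true,
  messages: [{ role: 'user', content: 'Hello' }],
});

// The SDK parses the chat.completion.chunk deltas shown earlier into an async iterable
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}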

Frequently Asked Questions

What is the OpenClaw API?

The OpenClaw API is an HTTP gateway that exposes your AI agent as a service. It runs on port 18789 by default and uses an OpenAI-compatible format. The primary endpoint is POST /v1/chat/completions for text conversations. A secondary endpoint POST /v1/responses handles vision and multimodal inputs. All requests require a Bearer token in the Authorization header.

Is the OpenClaw API compatible with OpenAI's API?

Yes, the OpenClaw /v1/chat/completions endpoint follows the OpenAI API format. You can use the same request structure: a messages array with role/content pairs, model field, and standard parameters. This means any code built for OpenAI's API can be pointed at an OpenClaw gateway with minimal changes.

How do I authenticate with the OpenClaw API?

OpenClaw API authentication uses a Bearer token in the Authorization header. Generate a token in your OpenClaw configuration (gateway.auth.tokens). Include it in every request as: Authorization: Bearer {your-token}. Never expose this token in client-side code. On ClawTrust, authentication tokens are managed automatically through the dashboard.

Does the OpenClaw API support streaming?

Yes. Set stream: true in your request body and the response will come back as Server-Sent Events (SSE). Each event contains a delta with incremental content. This is the same streaming format used by OpenAI's API and is supported by most HTTP client libraries.

How do I secure the OpenClaw API?

By default, OpenClaw binds to 0.0.0.0 (all interfaces), exposing port 18789 to the internet. For production use: bind the gateway to 127.0.0.1 (loopback only), require authentication tokens, configure a firewall to block external access to port 18789, and use a reverse proxy with TLS for external access. ClawTrust handles all of this automatically using Cloudflare tunnels with zero inbound ports.

What is the ClawTrust OpenClaw gateway URL?

On ClawTrust, your OpenClaw gateway is accessible at https://{tenantId}.clawtrust.ai. This is a Cloudflare tunnel endpoint with HTTPS, no open ports on the VPS, and full authentication required. You do not need to configure port forwarding, SSL certificates, or reverse proxies - it is all handled automatically.

