Server Usage Guide
Starting the Server
From Command Line
To start the server from the command line, use:
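A minimal sketch, assuming the `interpreter` CLI and its `--server` flag are available on your PATH:

```bash
interpreter --server
```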
From Python
To start the server from within a Python script:
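A sketch using the `AsyncInterpreter` class referenced later in this guide; the `port` argument is an assumption matching the default URL used throughout:

```python
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()

# Serve on the default port used throughout this guide.
async_interpreter.server.run(port=8000)
```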
WebSocket API
Establishing a Connection
Connect to the WebSocket server at `ws://localhost:8000/`.
Message Format
Open Interpreter uses an extended version of OpenAI’s message format called LMC messages that allow for rich, multi-part messages. Messages must be sent between start and end flags. Here’s the basic structure:
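A sketch combining the start/end flags and a text message from the Content Types list below:

```json
{"role": "user", "start": true}
{"role": "user", "type": "message", "content": "Your message here"}
{"role": "user", "end": true}
```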
Multi-part Messages
You can send complex messages with multiple components:
- Start with `{"role": "user", "start": true}`
- Add various types of content (message, file, image, etc.)
- End with `{"role": "user", "end": true}`
Content Types
You can include various types of content in your messages:
- Text messages: `{"role": "user", "type": "message", "content": "Your text here"}`
- File paths: `{"role": "user", "type": "file", "content": "path/to/file"}`
- Images: `{"role": "user", "type": "image", "format": "path", "content": "path/to/photo"}`
- Audio: `{"role": "user", "type": "audio", "format": "wav", "content": "path/to/audio.wav"}`
Control Commands
To control the server’s behavior, send the following commands:
- Stop execution: stops all execution and message processing.
- Execute code block: executes a generated code block and allows the agent to proceed.

Note: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the “go” command to continue execution. The payloads for both commands are sketched below.
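A sketch of the two command payloads; the `"type": "command"` shape is an assumption extrapolated from the LMC message format above:

```json
{"role": "user", "type": "command", "content": "stop"}
{"role": "user", "type": "command", "content": "go"}
```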
Completion Status
The server indicates completion with the following message:
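A sketch of the status message; the role and type fields are assumptions, while the “complete” status itself is named in Best Practices below:

```json
{"role": "server", "type": "status", "content": "complete"}
```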
Error Handling
If an error occurs, the server will send an error message in the following format:
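A sketch, assuming errors reuse the status-message shape above with an error type and a traceback string as content:

```json
{"role": "server", "type": "error", "content": "Traceback (most recent call last): ..."}
```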
Code Execution Review
After code blocks are executed, you’ll receive a review message; a sketch follows the list below. The `content` field of the review message may have two possible formats:
- If the code is deemed completely safe, the content will be exactly `"<SAFE>"`.
- Otherwise, it will contain an explanation of why the code might be unsafe or have irreversible effects.
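A sketch of a review message for code judged safe; the role and `"type": "review"` fields are assumptions based on the message format above:

```json
{"role": "server", "type": "review", "content": "<SAFE>"}
```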
Example WebSocket Interaction
Here’s an example demonstrating the WebSocket interaction:
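A sketch of a full round trip using the third-party `websockets` package; it sends a multi-part message and reads until the completion status described above arrives:

```python
import asyncio
import json

import websockets  # pip install websockets


async def chat() -> None:
    async with websockets.connect("ws://localhost:8000/") as ws:
        # Wrap the user turn in start/end flags.
        await ws.send(json.dumps({"role": "user", "start": True}))
        await ws.send(json.dumps({"role": "user", "type": "message", "content": "Hello!"}))
        await ws.send(json.dumps({"role": "user", "end": True}))

        # Print server messages until the completion status arrives.
        while True:
            message = json.loads(await ws.recv())
            print(message)
            if message.get("type") == "status" and message.get("content") == "complete":
                break


asyncio.run(chat())
```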
HTTP API
Modifying Settings
To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to the interpreter object’s settings.
Example:
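A sketch using the `requests` package; `auto_run` is referenced elsewhere in this guide, and other keys would follow the interpreter object’s settings in the same way:

```python
import requests

# Update one setting; any valid interpreter property can be included here.
settings = {"auto_run": False}
response = requests.post("http://localhost:8000/settings", json=settings)
print(response.status_code, response.text)
```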
Retrieving Settings
To get current settings, send a GET request to `http://localhost:8000/settings/{property}`.
Example:
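A sketch retrieving a single property; `auto_run` stands in for any valid `{property}` name:

```python
import requests

response = requests.get("http://localhost:8000/settings/auto_run")
print(response.json())
```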
OpenAI-Compatible Endpoint
The server provides an OpenAI-compatible endpoint at `/openai`. This allows you to use the server with any tool or library that’s designed to work with the OpenAI API.
Chat Completions Endpoint
The chat completions endpoint is available at `[server_url]/openai/chat/completions`. To use it, set the `api_base` in your OpenAI client or configuration to `[server_url]/openai`, as in the example below. Note that only the chat completions endpoint (`/chat/completions`) is implemented; other OpenAI API endpoints are not available.
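A sketch using the official `openai` Python package (v1-style client); the model name is arbitrary since the server ignores it:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/openai",  # [server_url]/openai
    api_key="dummy",  # required by the library, ignored by the server
)

response = client.chat.completions.create(
    model="gpt-4o",  # required but ignored by the server
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```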
When using this endpoint:
- The `model` parameter is required but ignored.
- The `api_key` is required by the OpenAI library but not used by the server.
Using Docker
You can also run the server using Docker. First, build the Docker image from the root of the repository, then run the container:
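A sketch; the image tag is arbitrary, and the port mapping assumes the default port 8000 used throughout this guide:

```bash
docker build -t open-interpreter-server .
docker run -p 8000:8000 open-interpreter-server
```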
Acknowledgment Feature
When the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable is set to `"True"`, the server requires clients to acknowledge each message received. This feature ensures reliable message delivery in environments where network stability might be a concern.
How it works
- When this feature is enabled, each message sent by the server will include an `id` field.
- The client must send an acknowledgment message back to the server for each received message.
- The server will wait for this acknowledgment before sending the next message.
Client Implementation
To implement this on the client side:
- Check if each received message contains an `id` field.
- If an `id` is present, send an acknowledgment message back to the server, as sketched after this list.
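A sketch of the receive-and-acknowledge step; the acknowledgment payload shape here is an assumption, so check the server’s expected schema:

```python
import json


async def recv_with_ack(ws):
    # Receive one message and acknowledge it if it carries an id.
    message = json.loads(await ws.recv())
    if "id" in message:
        # Hypothetical ack shape; adjust to the server's actual schema.
        await ws.send(json.dumps({"ack": message["id"]}))
    return message
```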
Server Behavior
- If the server doesn’t receive an acknowledgment within a certain timeframe, it will attempt to resend the message.
- The server will make multiple attempts to send a message before considering it failed.
Enabling the Feature
To enable this feature, set the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable to `"True"` before starting the server:
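For example, in a POSIX shell, reusing the CLI start command from above:

```bash
export INTERPRETER_REQUIRE_ACKNOWLEDGE="True"
interpreter --server
```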
Advanced Usage: Accessing the FastAPI App Directly
The FastAPI app is exposed at `async_interpreter.server.app`. This allows you to add custom routes or host the app using Uvicorn directly.
Example of adding a custom route and hosting with Uvicorn:
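A sketch; the `/health` route is hypothetical, and hosting with `uvicorn.run` stands in for the `server.run()` call shown in Starting the Server:

```python
import uvicorn
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()
app = async_interpreter.server.app


# Hypothetical custom route added alongside the built-in ones.
@app.get("/health")
async def health():
    return {"status": "ok"}


# Host the FastAPI app with Uvicorn directly.
uvicorn.run(app, host="0.0.0.0", port=8000)
```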
Best Practices
- Always handle the “complete” status message to ensure your client knows when the server has finished processing.
- If `auto_run` is set to `False`, remember to send the “go” command to execute code blocks and continue the interaction.
- Implement proper error handling in your client to manage potential connection issues, unexpected server responses, or server-sent error messages.
- Use the AsyncInterpreter class when working with the server in Python to ensure compatibility with asynchronous operations.
- Pay attention to the code execution review messages for important safety and operational information.
- Utilize the multi-part user message structure for complex inputs, including file paths and images.
- When sending file paths or image paths, ensure they are accessible to the server.