To start the server from the command line, use:
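For example, assuming the `interpreter` CLI from Open Interpreter is installed and on your PATH:

```bash
interpreter --server
```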
To start the server from within a Python script:
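A minimal sketch, assuming the `AsyncInterpreter` class exposes a `server.run()` helper:

```python
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()

# Starts the FastAPI/WebSocket server (port 8000 by default)
async_interpreter.server.run()
```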
Connect to the WebSocket server at `ws://localhost:8000/`.
Open Interpreter uses an extended version of OpenAI’s message format called LMC messages that allow for rich, multi-part messages. Messages must be sent between start and end flags. Here’s the basic structure:

```json
{"role": "user", "start": true}
{"role": "user", "end": true}
```

You can send complex messages with multiple components:
You can include various types of content in your messages:
{"role": "user", "type": "message", "content": "Your text here"}
{"role": "user", "type": "file", "content": "path/to/file"}
{"role": "user", "type": "image", "format": "path", "content": "path/to/photo"}
{"role": "user", "type": "audio", "format": "wav", "content": "path/to/audio.wav"}
To control the server’s behavior, send the following commands:
Stop execution:
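The stop command is sent as a command-type LMC message; the payload below is a sketch following the message format above:

```json
{"role": "user", "type": "command", "content": "stop"}
```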
This stops all execution and message processing.
Execute code block:
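The “go” command uses the same command-type shape:

```json
{"role": "user", "type": "command", "content": "go"}
```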
This executes a generated code block and allows the agent to proceed.
Note: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the “go” command to continue execution.
The server indicates completion with the following message:
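The completion marker is a status message along these lines (exact field values may differ between versions):

```json
{"role": "server", "type": "status", "content": "complete"}
```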
Ensure your client watches for this message to determine when the interaction is finished.
If an error occurs, the server will send an error message in the following format:
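An error message carries the error text or traceback in `content`; the shape below is a sketch based on the message format above:

```json
{"role": "server", "type": "error", "content": "Traceback (most recent call last): ..."}
```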
Your client should be prepared to handle these error messages appropriately.
After code blocks are executed, you’ll receive a review message:
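A review arrives as another LMC-style message; the role and type names below are assumptions based on the format above, so verify them against your server version:

```json
{"role": "assistant", "type": "review", "content": "..."}
```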
This review provides important information about the safety and potential impact of the executed code. Pay close attention to these messages, especially when dealing with operations that might have significant effects on your system.
The `content` field of the review message may have two possible formats: an explanation of the code’s potential risks, or the literal string `"<SAFE>"` when the code is considered safe to run. Example of a safe code review:
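```json
{"role": "assistant", "type": "review", "content": "<SAFE>"}
```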
Example of a potentially unsafe code review:
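The review text here is purely illustrative:

```json
{"role": "assistant", "type": "review", "content": "This code deletes files outside the working directory and modifies system settings. Review it carefully before continuing."}
```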
Here’s an example demonstrating the WebSocket interaction:
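A minimal client sketch using the third-party `websockets` package; the completion check assumes the status message shown earlier:

```python
import asyncio
import json

import websockets  # pip install websockets


async def main():
    async with websockets.connect("ws://localhost:8000/") as ws:
        # Send a simple text message wrapped in start/end flags
        await ws.send(json.dumps({"role": "user", "start": True}))
        await ws.send(json.dumps({
            "role": "user",
            "type": "message",
            "content": "What operating system am I running?",
        }))
        await ws.send(json.dumps({"role": "user", "end": True}))

        # Print streamed chunks until the server reports completion
        while True:
            chunk = json.loads(await ws.recv())
            print(chunk)
            if chunk.get("type") == "status" and chunk.get("content") == "complete":
                break


asyncio.run(main())
```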
To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to the interpreter object’s settings.
Example:
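Using the `requests` package; `auto_run` is just an illustrative property here:

```python
import requests

# Keys in the payload should match settings on the interpreter object
response = requests.post(
    "http://localhost:8000/settings",
    json={"auto_run": True},
)
print(response.status_code)
```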
To get current settings, send a GET request to `http://localhost:8000/settings/{property}`.
Example:
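Again with `requests`, reading back the illustrative `auto_run` property:

```python
import requests

response = requests.get("http://localhost:8000/settings/auto_run")
print(response.json())
```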
The server provides an OpenAI-compatible endpoint at `/openai`. This allows you to use the server with any tool or library that’s designed to work with the OpenAI API.
The chat completions endpoint is available at:
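`[server_url]/openai/chat/completions` (the `/openai` prefix from above followed by the standard chat completions route).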
To use this endpoint, set the `api_base` in your OpenAI client or configuration to `[server_url]/openai`. For example:
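A sketch using the `openai` Python package (v1+, where `api_base` is named `base_url`); the model name and API key are placeholders since the server ignores both:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/openai",
    api_key="dummy",  # required by the library, not checked by the server
)

response = client.chat.completions.create(
    model="open-interpreter",  # required but ignored
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```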
Note that only the chat completions endpoint (`/chat/completions`) is implemented. Other OpenAI API endpoints are not available.
When using this endpoint:

- The `model` parameter is required but ignored.
- The `api_key` is required by the OpenAI library but not used by the server.

You can also run the server using Docker. First, build the Docker image from the root of the repository:
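For example (the image tag `open-interpreter` is an arbitrary choice):

```bash
docker build -t open-interpreter .
```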
Then, run the container:
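Publishing container port 8000 to the host; if the image’s default command does not start the server, append the CLI command shown at the top of this page:

```bash
docker run -p 8000:8000 open-interpreter
```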
This will expose the server on port 8000 of your host machine.
When the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable is set to `"True"`, the server requires clients to acknowledge each message received. This feature ensures reliable message delivery in environments where network stability might be a concern.
When acknowledgment is required, each message from the server includes an `id` field. To implement this on the client side:

- Check whether each received message contains an `id` field.
- If an `id` is present, send an acknowledgment message back to the server.

Here’s an example of how to handle this in your WebSocket client:
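A sketch using the `websockets` package; the acknowledgment payload shape (`{"ack": <id>}`) is an assumption, so check the server source for the exact format it expects:

```python
import asyncio
import json

import websockets  # pip install websockets


async def listen():
    async with websockets.connect("ws://localhost:8000/") as ws:
        while True:
            message = json.loads(await ws.recv())

            # Acknowledge any message that carries an "id" field
            # (payload shape is assumed, see the note above)
            if "id" in message:
                await ws.send(json.dumps({"ack": message["id"]}))

            print(message)


asyncio.run(listen())
```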
To enable this feature, set the `INTERPRETER_REQUIRE_ACKNOWLEDGE` environment variable to `"True"` before starting the server:
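```bash
export INTERPRETER_REQUIRE_ACKNOWLEDGE="True"
interpreter --server
```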
Or in Python:
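```python
import os

# Must be set before the server is created and started
os.environ["INTERPRETER_REQUIRE_ACKNOWLEDGE"] = "True"
```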
The FastAPI app is exposed at `async_interpreter.server.app`. This allows you to add custom routes or host the app using Uvicorn directly.
Example of adding a custom route and hosting with Uvicorn:
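A sketch, assuming the `AsyncInterpreter` setup shown earlier; the `/health` route is just an illustrative addition:

```python
import uvicorn
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()
app = async_interpreter.server.app


@app.get("/health")
async def health():
    # Custom route served alongside the built-in endpoints
    return {"status": "ok"}


# Host the combined app with Uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```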
If `auto_run` is set to `False`, remember to send the “go” command to execute code blocks and continue the interaction.