Interactive Chat

To start an interactive chat in your terminal, either run interpreter from the command line, or call interpreter.chat() from a .py file.


Programmatic Chat

For more precise control, you can pass messages directly to .chat(message) in Python:

interpreter.chat("Add subtitles to all videos in /videos.")

# ... Displays output in your terminal, completes task ...

interpreter.chat("These look great but can you make the subtitles bigger?")

# ...

Start a New Chat

In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run interpreter to start a new chat.

In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it.
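A reset is just clearing the history list. A minimal sketch, assuming interpreter has been imported from the interpreter package and that each history entry is a dict (the exact schema shown here is illustrative, not verified):

```python
# With the real library (assumed API):
#     from interpreter import interpreter
#     interpreter.messages = []   # forget everything; the next chat() starts fresh
#
# The idea in isolation: history is a plain list of message dicts.
messages = [
    {"role": "user", "type": "message", "content": "Add subtitles to my videos."},
    {"role": "assistant", "type": "message", "content": "Done!"},
]
messages = []  # "resetting" is just replacing the history with an empty list
```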


Save and Restore Chats

In your terminal, Open Interpreter will save previous conversations to <your application directory>/Open Interpreter/conversations/.

You can resume any of them by running interpreter --conversations. Use your arrow keys to select one, then press ENTER to resume it.

In Python, interpreter.chat() returns a list of messages, which can be used to resume a conversation with interpreter.messages = messages.
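Because the history is a plain list of dicts, it can be persisted anywhere JSON goes. A sketch of one way to save and restore a session; the interpreter calls are shown as comments, and the message shape is an assumption:

```python
import json

# With the real library you would capture the return value of chat():
#     messages = interpreter.chat("Add subtitles to all videos in /videos.")
messages = [
    {"role": "user", "type": "message", "content": "Add subtitles to all videos in /videos."},
    {"role": "assistant", "type": "message", "content": "All done."},
]

# Save the conversation to disk...
with open("saved_chat.json", "w") as f:
    json.dump(messages, f)

# ...and load it back later to resume where you left off:
with open("saved_chat.json") as f:
    restored = json.load(f)
# interpreter.messages = restored  # the interpreter now has the old context
```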


Configure Default Settings

We save default settings to the default.yaml profile, which can be opened and edited by running the following command:

interpreter --profiles

You can use this to set your default language model, system message (custom instructions), max budget, etc.

Note: The Python library also inherits settings from this default profile, so edits to default.yaml affect both the terminal and Python interfaces.
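As a rough illustration, a profile might look something like the fragment below. The key names here are assumptions for illustration only; open your generated default.yaml with interpreter --profiles to see the real keys and their defaults.

```yaml
# Hypothetical default.yaml sketch -- check your actual file for real key names
llm:
  model: "gpt-4o"        # default language model
  temperature: 0
custom_instructions: ""  # appended to the system message
```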


Customize System Message

In your terminal, modify the system message by editing your configuration file as described here.

In Python, you can inspect and configure Open Interpreter’s system message to extend its functionality, modify permissions, or give it more context.

interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)

Change your Language Model

Open Interpreter uses LiteLLM to connect to language models.

You can change the model by setting the model parameter:

interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly

In Python, set the model on the object:

interpreter.llm.model = "gpt-3.5-turbo"

Find the appropriate “model” string for your language model here.