Language Model

Model Selection

Specifies which language model to use. Check out the models section for a list of available models. Open Interpreter uses LiteLLM under the hood, which supports more than 100 models.

interpreter --model "gpt-3.5-turbo"
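
The same setting can be applied from Python. A minimal sketch, assuming the Python API's `interpreter.llm` attribute interface:

from interpreter import interpreter

interpreter.llm.model = "gpt-3.5-turbo"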

Temperature

Sets the randomness level of the model’s output. The default temperature is 0; you can set it to any value between 0 and 1. The higher the temperature, the more random and creative the output will be.

interpreter --temperature 0.7
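
In Python, under the same assumed `interpreter.llm` interface:

interpreter.llm.temperature = 0.7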

Context Window

Manually set the context window size, in tokens, for the model. For local models, a smaller context window uses less RAM, making it a better fit for most devices.

interpreter --context_window 16000
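
The Python equivalent, same assumption as above:

interpreter.llm.context_window = 16000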

Max Tokens

Sets the maximum number of tokens that the model can generate in a single response.

interpreter --max_tokens 100
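
And in Python, under the same assumed interface:

interpreter.llm.max_tokens = 100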

Max Output

Set the maximum number of characters for code outputs.

interpreter --max_output 1000
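
A Python sketch; note the assumption here that max_output lives on the top-level interpreter object rather than on interpreter.llm:

interpreter.max_output = 1000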

API Base

If you are using a custom API, specify its base URL with this argument.

interpreter --api_base "https://api.example.com"
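
In Python, same assumed interface:

interpreter.llm.api_base = "https://api.example.com"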

API Key

Set your API key for authentication when making API calls. For OpenAI models, you can get your API key from the OpenAI platform.

interpreter --api_key "your_api_key_here"
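
Or from Python (same assumed interface):

interpreter.llm.api_key = "your_api_key_here"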

API Version

Optionally set the API version to use with your selected model. (This will override environment variables.)

interpreter --api_version 2.0.2
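
In Python, same assumed interface:

interpreter.llm.api_version = "2.0.2"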

LLM Supports Functions

Inform Open Interpreter that the language model you’re using supports function calling.

interpreter --llm_supports_functions

LLM Does Not Support Functions

Inform Open Interpreter that the language model you’re using does not support function calling.

interpreter --no-llm_supports_functions
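
Both of these flags are assumed to map to a single boolean attribute in Python:

interpreter.llm.supports_functions = True  # or False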

LLM Supports Vision

Inform Open Interpreter that the language model you’re using supports vision. Defaults to False.

interpreter --llm_supports_vision
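
In Python, under the same assumption:

interpreter.llm.supports_vision = True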

Interpreter

Vision Mode

Enables vision mode, which adds some special instructions to the prompt and switches to gpt-4-vision-preview.

interpreter --vision

OS Mode

Enables OS mode for multimodal models. Currently not available in Python. See the OS mode documentation for more information.

interpreter --os

Version

Get the currently installed version number of Open Interpreter.

interpreter --version

Open Local Models Directory

Opens the models directory. All downloaded Llamafiles are saved here.

interpreter --local_models

Open Profiles Directory

Opens the profiles directory. New YAML profile files can be added to this directory.

interpreter --profiles

Select Profile

Select a profile to use. If no profile is specified, the default profile will be used.

interpreter --profile local.yaml

Help

Display all available terminal arguments.

interpreter --help

Force Task Completion

Runs Open Interpreter in a loop, requiring it to admit to completing or failing every task.

interpreter --force_task_completion
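
A Python sketch, assuming the attribute mirrors the flag name:

interpreter.force_task_completion = True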

Verbose

Run the interpreter in verbose mode. Debug information will be printed at each step to help diagnose issues.

interpreter --verbose
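
In Python (this is the interpreter.verbose attribute referenced in the Computer section below):

interpreter.verbose = True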

Safe Mode

Enable or disable experimental safety mechanisms like code scanning. Valid options are off, ask, and auto.

interpreter --safe_mode ask
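
In Python, assuming the attribute mirrors the flag name:

interpreter.safe_mode = "ask"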

Auto Run

Automatically run the interpreter without requiring user confirmation.

interpreter --auto_run
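
Likewise in Python (same naming assumption):

interpreter.auto_run = True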

Max Budget

Sets the maximum budget for the session, in USD.

interpreter --max_budget 0.01
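
In Python, assuming the budget attribute sits on the top-level object:

interpreter.max_budget = 0.01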

Local Mode

Run the model locally. Check the models page for more information.

interpreter --local

Fast Mode

Sets the model to gpt-3.5-turbo and encourages it to only write code without confirmation.

interpreter --fast

Custom Instructions

Appends custom instructions to the system message. This is useful for adding information about your system, preferred languages, etc.

interpreter --custom_instructions "This is a custom instruction."
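
In Python, same naming assumption:

interpreter.custom_instructions = "This is a custom instruction."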

System Message

We don’t recommend modifying the system message, as doing so opts you out of future updates to the core system message. Use --custom_instructions instead, to add relevant information to the system message. If you must modify the system message, you can do so by using this argument, or by changing a profile file.

interpreter --system_message "You are Open Interpreter..."
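
And in Python, same naming assumption:

interpreter.system_message = "You are Open Interpreter..."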

Disable Telemetry

Opt out of telemetry.

interpreter --disable_telemetry

Offline

This boolean flag determines whether some online features, like open procedures, are disabled. Use it in conjunction with the model parameter to set your language model.

interpreter.offline = True

Messages

This property holds a list of messages between the user and the interpreter.

You can use it to restore a conversation:

interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# This would output:

# [
#    {
#       "role": "user",
#       "message": "Hi! Can you print hello world?"
#    },
#    {
#       "role": "assistant",
#       "message": "Sure!"
#    },
#    {
#       "role": "assistant",
#       "language": "python",
#       "code": "print('Hello, World!')",
#       "output": "Hello, World!"
#    }
# ]

# You can use this to restore `interpreter` to a previous conversation.
interpreter.messages = messages  # A list that resembles the one above
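
Because interpreter.messages is a plain list of dictionaries, one practical pattern is persisting a conversation between sessions. A minimal sketch; the messages.json filename is just an illustration:

import json

# Save the conversation at the end of a session...
with open("messages.json", "w") as f:
    json.dump(interpreter.messages, f)

# ...and load it back later to pick up where you left off.
with open("messages.json") as f:
    interpreter.messages = json.load(f)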

Computer

The computer object in interpreter.computer is a virtual computer that the AI controls. Its primary function is to execute code and return the output in real time.

Offline

Running the computer in offline mode will disable some online features, like the hosted Computer API. Inherits from interpreter.offline.

interpreter.computer.offline = True

Verbose

This is primarily used for debugging interpreter.computer. Inherits from interpreter.verbose.

interpreter.computer.verbose = True

Emit Images

The emit_images attribute in interpreter.computer controls whether the computer should emit images or not. This is inherited from interpreter.llm.supports_vision.

This is used for multimodal vs. text-only models. Running computer.display.view() will return an actual screenshot for multimodal models if emit_images is True. If it’s False, computer.display.view() will return all the text on the screen.

Many other functions of the computer can produce image/text outputs, and this parameter controls that.

interpreter.computer.emit_images = True
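
To illustrate the switch described above, a short sketch using the computer.display.view() call named in this section:

interpreter.computer.emit_images = False

# With emit_images off, view() is described as returning the text on
# the screen rather than a screenshot.
text_on_screen = interpreter.computer.display.view()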