Arguments
messages
This property holds a list of the messages exchanged between the user and the interpreter. You can use it to inspect or restore a conversation:
interpreter.chat("Hi! Can you print hello world?")
print(interpreter.messages)
# This would output:
[
  {
    "role": "user",
    "message": "Hi! Can you print hello world?"
  },
  {
    "role": "assistant",
    "message": "Sure!"
  },
  {
    "role": "assistant",
    "language": "python",
    "code": "print('Hello, World!')",
    "output": "Hello, World!"
  }
]
You can use this to restore interpreter to a previous conversation.
interpreter.messages = messages # A list that resembles the one above
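For example, a minimal sketch of saving and restoring a session across a reset (this assumes interpreter.reset(), which clears the current session):
interpreter.chat("Hi! Can you print hello world?")
saved_messages = interpreter.messages # Save the conversation

interpreter.reset() # Clear the current session
interpreter.messages = saved_messages # Restore the saved conversation
interpreter.chat("What did I just ask you?") # Continues with the restored context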
offline
This replaces interpreter.local in the New Computer Update (0.2.0). This boolean flag determines whether to enable or disable some offline features like open procedures.
interpreter.offline = True # Don't check for updates, don't use procedures
interpreter.offline = False # Check for updates, use procedures (default)
Use this in conjunction with the model parameter to set your language model.
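For instance, a sketch of pairing offline mode with a locally served model (the "ollama/llama2" model name and localhost address are illustrative assumptions, not requirements):
interpreter.offline = True # Disable update checks and open procedures
interpreter.llm.model = "ollama/llama2" # Assumed local model, LiteLLM-style name
interpreter.llm.api_base = "http://localhost:11434" # Assumed local server address
interpreter.chat("Hi!")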
auto_run
Setting this flag to True allows Open Interpreter to automatically run the generated code without user confirmation.
interpreter.auto_run = True # Don't require user confirmation
interpreter.auto_run = False # Require user confirmation (default)
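A quick illustration (the prompt is arbitrary):
interpreter.auto_run = True # Code will execute as soon as it is generated
interpreter.chat("Print the first ten square numbers")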
verbose
Use this boolean flag to toggle verbose mode on or off. Verbose mode will print information at every step to help diagnose problems.
interpreter.verbose = True # Turns on verbose mode
interpreter.verbose = False # Turns off verbose mode
max_output
This property sets the maximum number of tokens for the output response.
interpreter.max_output = 2000
conversation_history
A boolean flag to indicate if the conversation history should be stored or not.
interpreter.conversation_history = True # To store history
interpreter.conversation_history = False # To not store history
conversation_filename
This property sets the filename where the conversation history will be stored.
interpreter.conversation_filename = "my_conversation.json"
conversation_history_path
You can set the path where the conversation history will be stored.
import os
interpreter.conversation_history_path = os.path.join("my_folder", "conversations")
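Putting the three history settings together (a sketch; the folder and file names are placeholders):
import os

interpreter.conversation_history = True # Store the conversation
interpreter.conversation_filename = "my_conversation.json"
interpreter.conversation_history_path = os.path.join("my_folder", "conversations")
interpreter.chat("Hi!") # Should be saved under my_folder/conversations/my_conversation.json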
model
Specifies the language model to be used.
interpreter.llm.model = "gpt-3.5-turbo"
temperature
Sets the randomness level of the model’s output.
interpreter.llm.temperature = 0.7
system_message
This stores the model’s system message as a string. Explore or modify it:
interpreter.system_message += "\nRun all shell commands with -y."
context_window
This manually sets the context window size in tokens.
We try to guess the right context window size for your model, but you can override it with this parameter.
interpreter.llm.context_window = 16000
max_tokens
Sets the maximum number of tokens the model can generate in a single response.
interpreter.llm.max_tokens = 100
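These two limits are often tuned together for smaller models (a sketch; the values are illustrative, not recommendations):
interpreter.llm.context_window = 3000 # Total tokens the model can attend to
interpreter.llm.max_tokens = 600 # Of those, how many it may generate per response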
api_base
If you are using a custom API, you can specify its base URL here.
interpreter.llm.api_base = "https://api.example.com"
api_key
Set your API key for authentication.
interpreter.llm.api_key = "your_api_key_here"
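Combined with model, these let you target an OpenAI-compatible endpoint (a sketch; the URL, key, and model name are placeholders):
interpreter.llm.api_base = "https://api.example.com/v1" # Placeholder endpoint
interpreter.llm.api_key = "your_api_key_here" # Placeholder key
interpreter.llm.model = "openai/my-hosted-model" # Assumed LiteLLM-style name
interpreter.chat("Hello!")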
max_budget
This property sets the maximum budget for the session in USD.
interpreter.max_budget = 0.01 # 1 cent