Modify the `interpreter.llm` object with settings like `model`, `api_key`, `temperature`, etc. Modify the `interpreter` object itself with settings like `system_message`, or set your interpreter to run offline, etc. Modify the `interpreter.computer` object, which handles code execution.

If `llm.supports_functions` is `False`, the execution instructions will be added to the system message. This parameter tells language models how to execute code. It can be set to an empty string or to `False` if you don't want to tell the LLM how to do this.
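For example, to suppress these instructions entirely (assuming the setting is exposed as `interpreter.llm.execution_instructions`, as in recent versions of Open Interpreter's Python API):

```python
from interpreter import interpreter

# Hypothetical sketch: disable the execution instructions, e.g. when the
# model already knows how to run code via function calling.
interpreter.llm.execution_instructions = False
```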
Safe mode has three valid options: `off`, `ask`, and `auto`.
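A sketch of selecting one of these options, assuming the setting is exposed in Python as `interpreter.safe_mode` (mirroring the `--safe_mode` CLI flag):

```python
from interpreter import interpreter

# Hypothetical sketch: ask for confirmation before running flagged code.
interpreter.safe_mode = "ask"  # one of "off", "ask", "auto"
```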
It's best to use `--custom_instructions` instead, to add relevant information to the system message. If you must modify the system message, you can do so by using this argument, or by changing a profile file.
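For example, assuming the Python-side equivalent of the `--custom_instructions` flag is the `interpreter.custom_instructions` attribute:

```python
from interpreter import interpreter

# Hypothetical sketch: append project-specific context to the system
# message without replacing the system message itself.
interpreter.custom_instructions = "Prefer Python. The user is on Ubuntu."
```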
Use the `model` parameter to set your language model.
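For example, assuming the parameter lives at `interpreter.llm.model` in the Python API (the model name here is only illustrative):

```python
from interpreter import interpreter

# Set the language model by name.
interpreter.llm.model = "gpt-4o"
```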
The `messages` attribute holds the list of messages between the user and the interpreter. You can use it to restore a conversation.
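A sketch of saving and restoring a conversation via the `messages` attribute (assumed here to be a plain list of message dicts):

```python
from interpreter import interpreter

# Save the conversation after a chat session...
saved = interpreter.messages

# ...and later assign it back to continue where you left off.
interpreter.messages = saved
```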
In the user message template, `{content}` will be replaced with the user's message, then sent to the language model.
In the code output template, `{content}` will be replaced with the computer's output, then sent to the language model.
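The substitution itself is plain string templating. A minimal, self-contained illustration (these template strings are invented for the example, not the library's defaults):

```python
# Hypothetical templates; the real defaults live in the library's settings.
user_message_template = "{content}"
code_output_template = "Code output: {content}\n\nWhat does this output mean?"

def render(template: str, content: str) -> str:
    # Replace the {content} placeholder with the actual text.
    return template.replace("{content}", content)

print(render(user_message_template, "List the files in my home folder."))
print(render(code_output_template, "file_a.txt  file_b.txt"))
```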
The `computer` object in `interpreter.computer` is a virtual computer that the AI controls. Its primary interface/function is to execute code and return the output in real-time.
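For instance, assuming a `computer.run(language, code)` method as in Open Interpreter's Python API:

```python
from interpreter import interpreter

# Hypothetical sketch: execute code on the virtual computer directly,
# outside of a chat, and capture what it returns.
output = interpreter.computer.run("python", "print('Hello from the computer')")
print(output)
```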
Running the `computer` in offline mode will disable some online features, like the hosted Computer API. Inherits from `interpreter.offline`.
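For example, assuming offline mode is toggled through the inherited `interpreter.offline` flag:

```python
from interpreter import interpreter

# Hypothetical sketch: disable online features (including the hosted
# Computer API); interpreter.computer inherits this value.
interpreter.offline = True
```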
The `verbose` setting is primarily used for debugging `interpreter.computer`. Inherits from `interpreter.verbose`.
The `emit_images` attribute in `interpreter.computer` controls whether the computer should emit images or not. This is inherited from `interpreter.llm.supports_vision`, and is used for multimodal vs. text-only models. Running `computer.display.view()` will return an actual screenshot for multimodal models if `emit_images` is `True`. If it's `False`, `computer.display.view()` will return all the text on the screen. Many other functions of the computer can produce image or text outputs, and this parameter controls that.
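A sketch of toggling this behavior, assuming the attribute names used above:

```python
from interpreter import interpreter

# Hypothetical sketch: force text-only output for a non-vision model.
interpreter.computer.emit_images = False
text_on_screen = interpreter.computer.display.view()  # text of the screen

# For a multimodal model, emit images instead.
interpreter.computer.emit_images = True
screenshot = interpreter.computer.display.view()  # actual screenshot
```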