Open Interpreter can be run fully locally.

To run a local LLM, you first need to install a local model provider. Open Interpreter supports several, including Ollama, Llamafile, Jan, and LM Studio.
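
For example, with Ollama you would download a model before pointing Open Interpreter at it. This is only an illustration; codestral is used here because it appears in the examples below, but any model Ollama serves works the same way:

ollama pull codestral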

Local models perform better with extra guidance and direction. You can improve performance for your use case by creating a new Profile.
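
As a minimal sketch, and assuming a Python-based profile (the file name and the instruction text below are made up for illustration), such a profile might look like this:

# local_codestral.py -- an illustrative profile for a local Ollama model
from interpreter import interpreter

interpreter.offline = True
interpreter.llm.model = "ollama/codestral"
interpreter.llm.api_base = "http://localhost:11434"

# Extra guidance tends to help smaller local models
interpreter.custom_instructions = "Keep code short, run it in small steps, and check the output before answering."

If your version supports the --profile flag, you can then start Open Interpreter with interpreter --profile local_codestral.py.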

Terminal Usage

Local Explorer

A Local Explorer was created to simplify the process of using Open Interpreter locally. To open this menu, run:
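
interpreter --local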

Select your local model provider from the list of options.

Most providers will require you to specify the model you are using. Provider-specific instructions are shown in the menu.

Custom Local

If you want to use a provider other than the ones listed, set the --api_base flag to point to a custom endpoint.

You will also need to select a model by passing the --model flag.

interpreter --api_base "http://localhost:11434" --model ollama/codestral

Other terminal flags are explained in Settings.

Python Usage

To use Open Interpreter locally from a Python script, a few fields need to be set:

from interpreter import interpreter

interpreter.offline = True  # Disable online features that require an internet connection
interpreter.llm.model = "ollama/codestral"  # Any model served by your local provider
interpreter.llm.api_base = "http://localhost:11434"  # Ollama's default local endpoint

interpreter.chat("how many files are on my desktop?")

Other configuration settings are explained in Settings.