Running Locally
In this video, "How to Use Open Interpreter Locally", Mike Bird goes over three different methods for running Open Interpreter with a local language model:
Ollama
- Download Ollama from https://ollama.ai/download
- Download and run the model:
ollama run dolphin-mixtral:8x7b-v2.6
- Run Open Interpreter, pointing it at the Ollama model:
interpreter --model ollama/dolphin-mixtral:8x7b-v2.6
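If you prefer Open Interpreter's Python API over the CLI, the same setup looks roughly like this. This is a minimal sketch, assuming the open-interpreter package is installed (pip install open-interpreter) and Ollama is already serving the model; the prompt is only an example, and older versions may use interpreter.model instead of interpreter.llm.model:

from interpreter import interpreter

interpreter.offline = True  # optional: keep the run fully local
interpreter.llm.model = "ollama/dolphin-mixtral:8x7b-v2.6"  # same model tag as the CLI command above

interpreter.chat("List the files in the current directory.")  # example prompt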
Jan.ai
- Download Jan from http://jan.ai
- Download a model from Jan's Hub (this guide uses Mixtral 8x7B Instruct)
- Enable the API server:
  - Go to Settings
  - Navigate to Advanced
  - Enable API server
- Select the model to use
- Run Open Interpreter with Jan's API base and the model name:
interpreter --api_base http://localhost:1337/v1 --model mixtral-8x7b-instruct
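The same connection can be made from the Python API. This is a minimal sketch, assuming the open-interpreter package is installed and Jan's API server is running on port 1337 as configured above; depending on your versions, the model name may need an openai/ prefix or a placeholder API key:

from interpreter import interpreter

interpreter.offline = True  # keep everything local
interpreter.llm.api_base = "http://localhost:1337/v1"  # Jan's API server enabled above
interpreter.llm.model = "mixtral-8x7b-instruct"  # model selected in Jan

interpreter.chat("Summarize the README in this folder.")  # example prompt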
Llamafile
⚠ On Apple Silicon, ensure that Xcode is installed before running a llamafile
- Download or create a llamafile from https://github.com/Mozilla-Ocho/llamafile
- Make the llamafile executable:
chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile
- Run the llamafile (this starts a local server on port 8080):
./mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile
- Run Open Interpreter with the llamafile's API base:
interpreter --api_base http://localhost:8080/v1
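Before starting Open Interpreter, you can confirm that the llamafile's OpenAI-compatible server is answering. This is a minimal check using only the Python standard library; the port comes from the command above, while the model name and prompt are placeholders:

import json
import urllib.request

# POST a tiny chat request to the llamafile's local server
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local",  # placeholder; the llamafile serves whatever model it contains
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])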