Open Interpreter uses LM Studio to connect to local language models (experimental).

Simply run interpreter in local mode from the command line:

```shell
interpreter --local
```

You will need to run LM Studio in the background.

  1. Download LM Studio from lmstudio.ai, then start it.
  2. Select a model, then click ↓ Download.
  3. Click the ↔️ button on the left (below 💬).
  4. Select your model at the top, then click Start Server.

Once the server is running, you can begin your conversation with Open Interpreter.

(When you run the command interpreter --local, the steps above will be displayed.)
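Under the hood, LM Studio's local server speaks an OpenAI-compatible chat completions API, by default on port 1234. As a rough sketch (the port and payload shape here are assumptions based on LM Studio's defaults, not part of Open Interpreter itself), you could talk to the server directly:

```python
import json
import urllib.request

# LM Studio's default local endpoint (assumed; check the Start Server panel for yours).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, max_tokens=1000):
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0,
    }
    return urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Only attempt the call once the server from steps 1-4 is actually running:
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

This is the same server Open Interpreter connects to in local mode; the sketch is only meant to show what "Start Server" exposes.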

Local mode sets your `context_window` to 3000 and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually.
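For example, a model with a larger context could be run like this (flag names assume Open Interpreter's standard CLI options; check `interpreter --help` for your version):

```shell
interpreter --local --context_window 8000 --max_tokens 2000
```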