Local Setup
Open Interpreter uses LM Studio to connect to local language models (experimental).
Simply run `interpreter` in local mode from the command line:

```shell
interpreter --local
```
You will need to run LM Studio in the background.
- Download LM Studio from https://lmstudio.ai/, then start it.
- Select a model then click ↓ Download.
- Click the ↔️ button on the left (below 💬).
- Select your model at the top, then click Start Server.
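Before starting a conversation, you can check that the server is reachable. This sketch assumes LM Studio's default address of `http://localhost:1234` (the actual address is shown in LM Studio's server tab); adjust the port if you changed it:

```shell
# Query the OpenAI-compatible endpoint LM Studio exposes.
# localhost:1234 is LM Studio's default port (assumption — verify
# against the address shown in the Server tab).
curl http://localhost:1234/v1/models
```

If the server is running, this returns a JSON list that includes your loaded model.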
Once the server is running, you can begin your conversation with Open Interpreter.
(When you run the command `interpreter --local`, the steps above will be displayed.)
Local mode sets your `context_window` to 3000 and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually.
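For example, assuming your version of Open Interpreter supports the `--context_window` and `--max_tokens` flags, the overrides can be passed on the command line. The values below are placeholders, not recommendations — use your model's actual limits:

```shell
# Override the local-mode defaults for a model with a larger context.
# 8000 and 2000 are illustrative values only.
interpreter --local --context_window 8000 --max_tokens 2000
```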