Run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):
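A minimal example, assuming the CLI exposes `--api_base` and `--api_key` flags mirroring the Python settings described below (`fake_key` is a placeholder; OpenAI-compatible clients generally require some API key even for a local server):

```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```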
To run LM Studio in the background:

- Download https://lmstudio.ai/ then start it.
- Select a model then click ↓ Download.
- Click the ↔️ button on the left (below 💬).
- Select your model at the top, then click Start Server (you can verify it is running as shown below).
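Once the server is started, you can sanity-check it before launching `interpreter`. The `/v1/models` route is part of the OpenAI-compatible API LM Studio exposes, and the port here assumes its default:

```shell
curl http://localhost:1234/v1/models
```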
(When you run `interpreter --local` and select LMStudio, these steps will be displayed.)
Local mode sets your `context_window` to 3000 and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually.
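For example, from the command line (the `--context_window` and `--max_tokens` flags are assumed here from the CLI's options; the values are illustrative):

```shell
interpreter --local --context_window 7000 --max_tokens 1600
```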
Python
Compared to the terminal interface, our Python package gives you more granular control over each setting. You can point `interpreter.llm.api_base` at any OpenAI-compatible server (including one running locally). For example, to connect to LM Studio, use these settings:
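A minimal sketch: besides `interpreter.llm.api_base`, which the text names, the `offline`, `llm.model`, and `llm.api_key` settings and the `chat()` entry point are taken from the package's documented Python API.

```python
from interpreter import interpreter

interpreter.offline = True                             # disable online features; talk only to the local server
interpreter.llm.model = "openai/x"                     # send messages in OpenAI's format
interpreter.llm.api_key = "fake_key"                   # placeholder; the OpenAI-compatible client requires some key
interpreter.llm.api_base = "http://localhost:1234/v1"  # LM Studio's default local endpoint

interpreter.chat()
```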
Any other OpenAI-compatible server can be used the same way; just change the `api_base`.