Run `interpreter` with the api_base URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default).
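As a rough command-line sketch (the `--api_base` and `--api_key` flag names follow Open Interpreter's CLI but may differ between versions, so treat them as assumptions and check `interpreter --help` if they don't match your install):

```shell
# Point Open Interpreter at a local OpenAI-compatible server (LM Studio's default port);
# a placeholder key is enough for most local servers
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```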
Alternatively, run local mode:

```shell
interpreter --local
```

(When you run the command `interpreter --local` and select LMStudio, these steps will be displayed.)
Local mode sets your `context_window` to 3000 and your `max_tokens` to 1000.
If your model has different requirements, set these parameters manually (see the Python example below).

You can point `interpreter.llm.api_base` at any OpenAI compatible server (including one running locally).
For example, to connect to LM Studio, point `api_base` at its local endpoint and use settings along the lines shown below.
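A minimal Python sketch follows. Only `interpreter.llm.api_base`, `context_window`, and `max_tokens` are named above; the other attributes (`offline`, `llm.model`, `llm.api_key`, `chat`) are assumptions based on the package's settings and may vary by version:

```python
from interpreter import interpreter

interpreter.offline = True                             # assumed setting: disable online features, run fully locally
interpreter.llm.model = "openai/x"                     # assumed: send requests in OpenAI's message format
interpreter.llm.api_key = "fake_key"                   # local servers usually ignore the key, but one must be set
interpreter.llm.api_base = "http://localhost:1234/v1"  # LM Studio's default OpenAI-compatible endpoint
interpreter.llm.context_window = 3000                  # raise or lower to match your model
interpreter.llm.max_tokens = 1000                      # raise or lower to match your model

interpreter.chat()                                     # start the conversation
```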