Local Providers
LlamaFile
The easiest way to get started with local models in Open Interpreter is to run `interpreter --local` in the terminal, select LlamaFile, and follow the interactive setup process. This downloads the model and starts the server for you. If you prefer to set things up manually, follow the instructions below.
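The automatic path boils down to a single command:

```shell
interpreter --local
```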
To use LlamaFile manually with Open Interpreter, download a model and start its server by running the file in the terminal.
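A typical manual setup looks like the sketch below. The specific model file is an assumption for illustration (any llamafile hosted on Hugging Face works), and llamafile's default server port of 8080 is assumed:

```shell
# Download a llamafile (example model -- substitute any llamafile you prefer)
wget https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llava-v1.5-7b-q4.llamafile

# Make the file executable (macOS/Linux)
chmod +x llava-v1.5-7b-q4.llamafile

# Run it -- by default this starts a local OpenAI-compatible server on port 8080
./llava-v1.5-7b-q4.llamafile
```

With the server running, point Open Interpreter at it from a second terminal, for example `interpreter --api_base http://localhost:8080/v1` (the exact flag to use follows Open Interpreter's OpenAI-compatible provider options).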
Please note that if you are using a Mac with Apple Silicon, you’ll need to have Xcode installed.
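If you hit build errors on Apple Silicon, installing the Xcode command line tools is often sufficient (a sketch; the full Xcode app may still be required in some cases):

```shell
# Prompts macOS to install the Xcode command line tools
xcode-select --install
```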