To use Open Interpreter with a model from OpenAI, simply run:

interpreter

This defaults to gpt-4, the most capable publicly available model for code interpretation (Open Interpreter was designed around gpt-4).

Trouble accessing gpt-4? Read our gpt-4 setup article.

To run a specific model from OpenAI, use the --model flag:

interpreter --model gpt-3.5-turbo

Supported Models

We support any model on OpenAI’s models page:

interpreter --model gpt-4
interpreter --model gpt-4-32k
interpreter --model gpt-3.5-turbo
interpreter --model gpt-3.5-turbo-16k

Required Environment Variables

Set the following environment variable to use these models.

Environment Variable    Description                                           Where to Find
OPENAI_API_KEY          The API key for authenticating to OpenAI's services.  OpenAI Account Page
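The variable must be visible in the shell before you launch interpreter. A minimal sketch of setting and checking it (the key value below is a placeholder, not a real key):

```shell
# Export the key so child processes such as `interpreter` can read it.
# "sk-your-key-here" is a placeholder; use your real key from the OpenAI account page.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is set without printing the secret itself.
echo "${OPENAI_API_KEY:+OPENAI_API_KEY is set}"
```

Exporting this way only lasts for the current shell session; to make it permanent, add the export line to your shell profile (for example ~/.bashrc or ~/.zshrc).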