To use Open Interpreter with a model from AWS SageMaker, set the model flag:

interpreter --model sagemaker/<model-name>
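
If you're using Open Interpreter from Python instead of the terminal, the sketch below sets the same model programmatically. It assumes the current Python API, where interpreter.llm.model selects the backend model (older releases expose interpreter.model instead); <model-name> is a placeholder for your SageMaker endpoint name.

from interpreter import interpreter

# Point Open Interpreter at a SageMaker-hosted model (placeholder endpoint name)
interpreter.llm.model = "sagemaker/<model-name>"

# Start an interactive chat session
interpreter.chat()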

Supported Models

We support the following completion models from AWS SageMaker:

  • Meta Llama 2 7B
  • Meta Llama 2 7B (Chat/Fine-tuned)
  • Meta Llama 2 13B
  • Meta Llama 2 13B (Chat/Fine-tuned)
  • Meta Llama 2 70B
  • Meta Llama 2 70B (Chat/Fine-tuned)
  • Your Custom Hugging Face Model

interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-f
interpreter --model sagemaker/<your-huggingface-deployment-name>

Required Environment Variables

Set the following environment variables to use these models.

Environment Variable     Description                                        Where to Find
AWS_ACCESS_KEY_ID        The API access key for your AWS account.           AWS Account Overview -> Security Credentials
AWS_SECRET_ACCESS_KEY    The API secret access key for your AWS account.     AWS Account Overview -> Security Credentials
AWS_REGION_NAME          The AWS region you want to use.                     AWS Account Overview -> Navigation bar -> Region
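
As a rough sketch, you can also set these variables from Python before starting Open Interpreter; the provider reads them from the process environment at request time. The credential values below are placeholders, and the interpreter.llm.model attribute is the same assumption as in the earlier example.

import os
from interpreter import interpreter

# Placeholder credentials -- replace with your own from the AWS console
os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-access-key>"
os.environ["AWS_REGION_NAME"] = "<your-region>"  # e.g. "us-west-2"

# Select your SageMaker endpoint (placeholder name) and start chatting
interpreter.llm.model = "sagemaker/<model-name>"
interpreter.chat()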