Safety is a top priority for us at Open Interpreter. Running LLM-generated code on your computer is inherently risky, and we have taken steps to make it as safe as possible. One of the primary safety mechanisms is the alignment of the LLM itself: GPT-4 refuses to run dangerous code like rm -rf / because it understands what that command would do, and won't let you footgun yourself. This is less applicable when running local models like Mistral, which have little or no alignment, making our other safety measures more important.

Safety Measures

  • Safe mode enables code scanning, as well as the ability to scan packages with guarddog via a simple change to the system message. See the safe mode docs for more information, and the first sketch after this list for how to enable it.

  • Requiring confirmation from the user before code is actually run. This simple measure can prevent a lot of accidents and exists as another layer of protection, but it can be disabled with the --auto-run flag if you wish (see the second sketch after this list).

  • Sandboxing code execution. Open Interpreter can be run in a sandboxed environment using Docker. This is a great way to run code without worrying about it affecting your system. Docker support is currently experimental, but we are working on making it a core feature of Open Interpreter. Another option for sandboxing is E2B, which overrides the default python language with a sandboxed, hosted version of Python that runs on E2B's infrastructure. Follow this guide to set it up; the last sketch after this list outlines the pattern it uses.
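
To turn safe mode on from Python, a minimal sketch looks like the following. The safe_mode attribute and its "ask" level are taken from the safe mode docs; the exact names, values, and the corresponding CLI flag may differ between versions, so confirm them in those docs.

```python
from interpreter import interpreter

# Scan generated code before it runs and ask for confirmation.
# "off", "ask", and "auto" are the levels described in the safe mode docs.
interpreter.safe_mode = "ask"

interpreter.chat("Plot the sizes of the files in the current directory")
```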
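
Confirmation is on by default: interpreter.chat() pauses before executing each code block and waits for your approval. The sketch below shows the Python-side equivalent of the --auto-run flag; only disable confirmation in an environment you trust.

```python
from interpreter import interpreter

# By default Open Interpreter asks before running each code block.
# Setting auto_run skips that prompt, like passing --auto-run on the command line.
interpreter.auto_run = True

interpreter.chat("Rename every .txt file in ./notes to .md")
```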
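
For the E2B option, the general pattern is to replace the built-in python language with one whose run() method executes code in a hosted sandbox instead of on your machine. The sketch below only illustrates that pattern: run_in_sandbox is a hypothetical placeholder for the actual E2B call, and the computer.languages hook and output format are based on the linked guide, so follow that guide for the exact, current setup.

```python
from interpreter import interpreter


def run_in_sandbox(code: str) -> str:
    """Hypothetical placeholder: send `code` to a hosted sandbox (e.g. E2B) and return its output."""
    raise NotImplementedError("Wire this up to your sandbox provider, as shown in the E2B guide.")


class SandboxedPython:
    """Replaces the built-in python language so code never runs on the local machine."""

    name = "python"

    def run(self, code):
        # Yield output in the message format Open Interpreter expects (see the E2B guide).
        yield {"type": "console", "format": "output", "content": run_in_sandbox(code)}

    def stop(self):
        pass

    def terminate(self):
        pass


# Register only the sandboxed language (hook name taken from the E2B guide; confirm it there).
interpreter.computer.languages = [SandboxedPython]
```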

Notice

Open Interpreter is not responsible for any damage caused by using the package. These safety measures provide no guarantees of safety or security. Please be careful when running code generated by Open Interpreter, and make sure you understand what it will do before running it.