You can stream messages, code, and code outputs out of Open Interpreter by setting stream=True in an interpreter.chat(message) call.
Setting display=True won’t change the behavior of the streaming response; it will simply render the output in your terminal as well.
Anatomy
Each chunk of the streamed response is a dictionary with a “role” key that can be either “assistant” or “computer”. The “type” key describes what the chunk is, and the “content” key contains the chunk’s actual content. Every message is made up of chunks: it begins with a “start” chunk and ends with an “end” chunk, which helps you parse the streamed response into messages. Let’s break down each part of the streamed response.

Code
In this example, the LLM decided to start writing code first. It could have decided to write a message first, to write only code, or to write only a message. Every streamed chunk of type “code” has a “format” key that specifies the language. In this case it decided to write python.
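As a sketch, streamed “code” chunks carrying the “format” key might look like the following (the chunk contents here are hypothetical, not actual library output):

```python
# Hypothetical "code" chunks as they might arrive in a streamed response.
# Each chunk carries a "format" key naming the language of the code.
code_chunks = [
    {"role": "assistant", "type": "code", "format": "python", "start": True},
    {"role": "assistant", "type": "code", "format": "python", "content": "print('hi')"},
    {"role": "assistant", "type": "code", "format": "python", "end": True},
]

# Reassemble the source text and note the language it should run as.
language = code_chunks[0]["format"]
source = "".join(chunk.get("content", "") for chunk in code_chunks)
print(language)  # python
print(source)    # print('hi')
```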
This can be any language defined in our languages directory.
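Putting the anatomy together, the “start” and “end” chunks let you reassemble the stream into complete messages. A minimal parsing sketch, assuming chunks shaped as described above (the sample content is hypothetical):

```python
# Hypothetical sample stream: one assistant message split across chunks.
sample_stream = [
    {"role": "assistant", "type": "message", "start": True},
    {"role": "assistant", "type": "message", "content": "Hello "},
    {"role": "assistant", "type": "message", "content": "world."},
    {"role": "assistant", "type": "message", "end": True},
]

messages = []
current = None
for chunk in sample_stream:
    if chunk.get("start"):
        # A "start" chunk opens a new message of this role and type.
        current = {"role": chunk["role"], "type": chunk["type"], "content": ""}
    elif chunk.get("end"):
        # An "end" chunk closes the message; keep the accumulated content.
        messages.append(current)
        current = None
    elif current is not None:
        # Content chunks are concatenated in arrival order.
        current["content"] += chunk.get("content", "")

print(messages)  # [{'role': 'assistant', 'type': 'message', 'content': 'Hello world.'}]
```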

