To use vision (highly experimental), run the following command:

interpreter --vision

If your input contains a file path to an image, that image will be loaded into the vision model (currently gpt-4-vision-preview).
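
For example, after starting vision mode you can reference an image directly in your message (the image path and prompt below are hypothetical):

interpreter --vision

> What does the chart in ./charts/revenue.png show?

The interpreter detects the path in your message and sends the image along with your prompt to the vision model.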