The LLM automatically decides when to use code execution based on the user’s query—no manual trigger is required. It can also be integrated with custom Agents, allowing the model to use it whenever appropriate.
## Features
Code runs inside a secure, sandboxed Python environment with dangerous functionality disabled, including network access and filesystem access outside the sandbox.

| Feature | Description |
|---|---|
| Libraries | Comes pre-loaded with popular libraries including NumPy, Pandas, SciPy, Matplotlib, and more |
| File Input | Pass any file type to be processed and analyzed by the code |
| File Output | Generated files are returned directly to the user |
| STDIN/STDOUT Capture | All program output is captured and displayed in the conversation |
| Graph Rendering | Charts and visualizations are rendered inline within the ASSIST AI Chat UI |
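To illustrate, here is a sketch of the kind of code the model might run in the sandbox: it uses Pandas (listed above as pre-loaded) to summarize tabular data and prints the result, which would be captured via STDOUT and shown in the conversation. The inline CSV stands in for a file the user attached; the column names are hypothetical.

```python
import io
import pandas as pd

# Hypothetical input; in practice this would come from a user-attached file.
csv_data = "region,sales\nnorth,120\nsouth,95\nnorth,130\nsouth,110\n"

df = pd.read_csv(io.StringIO(csv_data))

# Aggregate sales per region; printing sends the result to STDOUT,
# which the chat UI captures and displays.
summary = df.groupby("region")["sales"].sum()
print(summary.to_string())
```

A chart produced with Matplotlib in the same snippet would be rendered inline per the Graph Rendering feature above.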
