CODIEE is a Flask-based code explanation tool that helps users understand code line-by-line with LLM support, plus a separate architecture view rendered with Mermaid.
It is designed for practical learning and fast code comprehension:
- Analyze one line at a time
- Navigate with Previous/Next across non-empty lines
- Generate architecture diagrams in a separate page
- Show rich explanation formatting (including markdown tables)
- Line-by-line explanation pipeline:
  - What the line is
  - Part-by-part breakdown
  - Why it matters in context
  - Where the concept comes from
- Single-line mode by line number to reduce latency and token usage
- Architecture view in a separate screen
- Mermaid diagram rendering with normalization for escaped newlines
- Persistent architecture snapshots written to build output files
- Sprite-based interactive assistant UI
- Rate-limit-aware LLM fallback strategy
- Backend: Flask
- LLM: LangChain + Groq
- Frontend: HTML, CSS, JavaScript
- Diagram: Mermaid
- app.py: Flask routes and API endpoints
- agents.py: Multi-agent analysis pipeline
- llm_client.py: LLM integration and fallback handling
- templates/editor.html: Main editor UI
- templates/architecture.html: Architecture view UI
- static/image/: Sprite animation frames used by UI
- build/: Generated architecture artifacts
- Python 3.10+
- A Groq API key
- Optional: uv for environment and execution convenience
- Install dependencies:

```bash
uv pip install -r requirements.txt
```

- Create environment file:

```bash
cp .env.example .env
```

If .env.example does not exist yet, create .env manually as shown below in Configuration.

- Run the app:

```bash
uv run python app.py
```

Without uv, using a plain virtual environment:

- Create and activate a virtual environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run:

```bash
python app.py
```

Configuration: create a .env file in the project root with:

```
GPT_OSS=your_groq_api_key_here
```

Notes:
- The app reads GPT_OSS for LLM access.
- LLM model selection and fallback are handled in llm_client.py.
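The environment lookup can be sketched as follows; the helper name is hypothetical and the actual loading code lives in llm_client.py, which may differ:

```python
import os

def get_groq_api_key() -> str:
    """Read the Groq API key from the GPT_OSS environment variable,
    failing fast with a clear message if it is missing."""
    key = os.environ.get("GPT_OSS", "")
    if not key:
        raise RuntimeError("GPT_OSS is not set; add it to your .env file")
    return key
```

Failing fast at startup gives a clearer error than a mid-request authentication failure from the LLM provider.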
Default local URL: Flask's development server defaults to http://127.0.0.1:5000 (check app.py in case the host or port is overridden).
Main pages:
- Editor: /
- Architecture: /architecture
- Open the editor page.
- Paste code or upload a file.
- Click Analyze (in the explanation box) to explain the currently selected line.
- Use Previous/Next to move through non-empty lines.
- Open Architecture to view diagram and layers.
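The Previous/Next behavior above can be sketched as a search for the nearest non-empty line; this is an illustration of the navigation rule, not the app's actual frontend code:

```python
def next_nonempty(lines: list[str], current: int, step: int = 1) -> int:
    """Return the index of the nearest non-empty line in the given
    direction (step=1 for Next, step=-1 for Previous), or the current
    index if no non-empty line exists in that direction."""
    i = current + step
    while 0 <= i < len(lines):
        if lines[i].strip():
            return i
        i += step
    return current

lines = ["import os", "", "print('x')"]
next_nonempty(lines, 0)       # Next skips the blank line
next_nonempty(lines, 2, -1)   # Previous jumps back over it
```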
Analyzes code using the multi-agent pipeline.
Single-line mode:
- Provide line_number to analyze only that line.
Example request:

```json
{
  "code": "import os\nprint('x')",
  "filename": "sample.py",
  "line_number": 1
}
```

Form upload endpoint:

- Field name: code_file
Returns the latest persisted architecture JSON and the generated file names.
Generated under build/:
- latest_architecture.json
- latest_architecture.mmd
These files are runtime outputs and can be regenerated.
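Because these artifacts are regenerated at runtime, a consumer should tolerate their absence; a minimal sketch (file name from the list above, helper name hypothetical):

```python
import json
from pathlib import Path

def load_latest_architecture(build_dir: str = "build"):
    """Return the persisted architecture snapshot as a dict, or None
    if it has not been generated yet (it is a regenerable output)."""
    path = Path(build_dir) / "latest_architecture.json"
    if not path.exists():
        return None
    return json.loads(path.read_text(encoding="utf-8"))
```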
If startup fails due to a port conflict, stop the existing process and rerun:

```bash
pkill -f "uv run python app.py"
uv run python app.py
```

- Verify that .env contains a valid GPT_OSS key.
- Check API quota and model availability.
- Retry with shorter input.
The app normalizes escaped newlines before rendering. If you still see Mermaid errors, rerun the analysis and refresh the architecture page.
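The normalization step is conceptually similar to the following sketch (the actual implementation lives in the architecture page's rendering code and may differ): Mermaid sources stored inside JSON strings carry literal `\n` sequences, which must become real newlines before parsing.

```python
def normalize_mermaid(source: str) -> str:
    """Convert escaped newline sequences (as stored in JSON string
    values) into real newlines so Mermaid can parse the diagram."""
    return source.replace("\\n", "\n")

raw = "graph TD\\nA[Flask app] --> B[LLM client]"
normalize_mermaid(raw)  # yields a two-line Mermaid definition
```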
- Use .gitignore to keep local, generated, test, and debug artifacts out of commits.
- Keep the static/image folder: it is required for sprite UI rendering.
For production deployment:
- Disable Flask debug mode
- Run behind a production WSGI server (for example, gunicorn)
- Store secrets using a secure secret manager
- Add request rate limiting and error monitoring