# Management Console
The DRLS management console is a React-based UI for visual lakehouse management. It provides table exploration, pipeline building, streaming monitoring, and AI agent chat.
## Setup

### Development
```bash
# Terminal 1: Rust backend (embeds Python via PyO3)
cd ui/backend && cargo run

# Terminal 2: React dev server
cd ui/frontend && npm run dev
```
Open http://localhost:5173.
### Production
Open http://localhost:3000.
## Setup Wizard
On first launch, a 4-step wizard guides you through:
1. **Catalog Type** — Select your Iceberg catalog (Hadoop, Hive, REST, Polaris, Glue, Nessie)
2. **Connection** — Configure warehouse path and catalog URI
3. **Test** — Verify connectivity to the catalog
4. **Confirm** — Save configuration
## Table Explorer
Browse and inspect all tables in your catalog:
- Table list — All tables grouped by namespace
- Table detail — Schema, partition spec, snapshot count, file stats
- Health indicator — Visual health status with recommendations
- Operations panel — Quick actions: compact, expire snapshots, remove orphans
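The health indicator ties into the operations panel: unhealthy signals map to recommended quick actions. A minimal sketch of that idea, with made-up thresholds (the real DRLS heuristics are not documented here):

```python
# Illustrative health heuristic; thresholds are assumptions, not DRLS values.
def table_health(snapshot_count: int, small_file_ratio: float) -> tuple[str, list[str]]:
    """Return a health status plus recommended maintenance operations."""
    recommendations = []
    if small_file_ratio > 0.5:        # too many small data files
        recommendations.append("compact")
    if snapshot_count > 100:          # long snapshot history
        recommendations.append("expire snapshots")
    status = "healthy" if not recommendations else "needs attention"
    return status, recommendations
```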
## Pipeline Builder
Build streaming pipelines visually using React Flow:
- Drag-and-drop nodes from the palette
- 4 node types: Source, Transform, Sink, Monitor
- Connect nodes to define data flow
- Save pipelines as versioned definitions
- Read-only mode — Pipelines in UAT, production, or retired environments cannot be edited. A banner indicates the read-only state and offers a "Create New Version" button to clone into development.
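A saved pipeline is essentially a graph of typed nodes and edges. This is a rough sketch of what the builder might serialize; the exact field names are assumptions:

```python
# Minimal sketch of a pipeline graph as the visual builder might persist it.
NODE_TYPES = {"source", "transform", "sink", "monitor"}

def add_node(pipeline: dict, node_id: str, node_type: str) -> None:
    """Add a palette node to the pipeline definition."""
    if node_type not in NODE_TYPES:
        raise ValueError(f"unknown node type: {node_type}")
    pipeline["nodes"].append({"id": node_id, "type": node_type})

def connect(pipeline: dict, src: str, dst: str) -> None:
    """Create a data-flow edge; both endpoints must already exist."""
    ids = {n["id"] for n in pipeline["nodes"]}
    if not {src, dst} <= ids:
        raise ValueError("both endpoints must exist before connecting")
    pipeline["edges"].append({"from": src, "to": dst})

pipeline = {"nodes": [], "edges": []}
add_node(pipeline, "kafka", "source")
add_node(pipeline, "clean", "transform")
connect(pipeline, "kafka", "clean")
```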
## Pipeline List
Browse and manage all pipeline definitions:
- Environment filter tabs — All, Development, QA, UAT, Production, Retired
- New pipeline — Create a new pipeline definition from scratch
- Import — Import a pipeline definition from a JSON export file
- Delete — Remove definitions that have no versions beyond QA (DataEngineer+)
## Pipeline Versioning
Each pipeline definition tracks immutable version snapshots:
- Version history — Slide-out panel listing all versions with environment badges
- Format version — Internal schema version for import/export compatibility
- New versions always start in the development environment
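The two invariants above (versions are immutable snapshots; new versions always start in development) can be sketched as, with assumed field names:

```python
# Illustrative version-snapshot creation; not the DRLS data model.
def create_version(definition: dict, nodes: list, edges: list) -> dict:
    """Append an immutable version snapshot, always born in development."""
    version = {
        "number": len(definition["versions"]) + 1,
        "environment": "development",   # invariant: new versions start here
        "nodes": list(nodes),           # snapshot the graph as-of-now
        "edges": list(edges),
    }
    definition["versions"].append(version)
    return version
```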
## Environment Lifecycle
Pipeline versions progress through a managed lifecycle:
```mermaid
graph LR
    dev["Development"] --> qa["QA"]
    qa --> uat["UAT"]
    uat --> prod["Production"]
    prod --> retired["Retired"]
    uat -.->|demote| qa
```
| Transition | Who | Approval Required |
|---|---|---|
| Development → QA | DataEngineer+ | No |
| QA → UAT | DataEngineer submits, Lead approves | Yes |
| UAT → Production | DataEngineer submits, Lead approves | Yes |
| Production → Retired | LeadDataEngineer+ | No |
| UAT → QA (demote) | LeadDataEngineer+ | No |
**Read-only enforcement:** UAT, Production, and Retired versions cannot be edited. To modify a read-only pipeline, create a new version in development.
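The transition table above can be expressed as a small validator. This is an illustrative sketch only; the environment and role strings are assumptions about the internal representation:

```python
# Lifecycle rules from the transition table (illustrative encoding).
TRANSITIONS = {
    ("development", "qa"):      {"min_role": "DataEngineer",     "approval": False},
    ("qa", "uat"):              {"min_role": "DataEngineer",     "approval": True},
    ("uat", "production"):      {"min_role": "DataEngineer",     "approval": True},
    ("production", "retired"):  {"min_role": "LeadDataEngineer", "approval": False},
    ("uat", "qa"):              {"min_role": "LeadDataEngineer", "approval": False},  # demote
}
READ_ONLY_ENVS = {"uat", "production", "retired"}

def can_edit(environment: str) -> bool:
    """Versions in UAT, Production, or Retired are read-only."""
    return environment not in READ_ONLY_ENVS

def transition_rule(src: str, dst: str) -> dict:
    """Look up the rule for a transition, rejecting anything off the lifecycle."""
    rule = TRANSITIONS.get((src, dst))
    if rule is None:
        raise ValueError(f"illegal transition: {src} -> {dst}")
    return rule
```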
## Import / Export
Pipeline versions can be exported and imported as JSON:
- Export — Download any version as a JSON file (includes format version, nodes, edges, metadata)
- Import — Upload a JSON file to create a new pipeline definition in development
- Compatibility — The system checks the `format_version` field on import and rejects files from newer system versions
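The compatibility check amounts to comparing the file's `format_version` against the system's own. A minimal sketch, assuming a placeholder current version:

```python
import json

CURRENT_FORMAT_VERSION = 1  # placeholder; the real value lives in the backend

def import_pipeline(raw: str) -> dict:
    """Parse an exported pipeline, rejecting files from newer system versions."""
    payload = json.loads(raw)
    if payload.get("format_version", 0) > CURRENT_FORMAT_VERSION:
        raise ValueError("export was produced by a newer system version")
    return payload
```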
## Approval Workflow
Promotions to UAT and Production require Lead Data Engineer approval:
- Submit — A DataEngineer promotes a version from QA → UAT or UAT → Production, creating a pending approval request
- Review — Lead Data Engineers see pending requests in the Approval Queue
- Approve/Reject — Leads can approve (moves version to target environment) or reject (with required reason)
- Comments — Both submitters and reviewers can add comments to approval requests
- Notifications — A badge on the Approvals sidebar item shows the count of pending requests for Lead+ users
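The submit/approve/reject flow above can be sketched as a small state machine. Class and role names here are illustrative assumptions, not the DRLS API:

```python
class ApprovalRequest:
    """Illustrative approval flow: pending -> approved/rejected."""

    def __init__(self, version_id: str, target_env: str):
        self.version_id = version_id
        self.target_env = target_env
        self.status = "pending"
        self.comments: list[str] = []

    def approve(self, reviewer_role: str) -> None:
        """Only leads may approve; approval moves the version onward."""
        if reviewer_role != "LeadDataEngineer":
            raise PermissionError("only Lead Data Engineers may approve")
        self.status = "approved"

    def reject(self, reviewer_role: str, reason: str) -> None:
        """Rejection requires a reason, recorded as a comment."""
        if reviewer_role != "LeadDataEngineer":
            raise PermissionError("only Lead Data Engineers may reject")
        if not reason:
            raise ValueError("a rejection reason is required")
        self.status = "rejected"
        self.comments.append(reason)
```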
## Notebook Service
Launch isolated marimo notebook environments directly from the console (requires Kubernetes):
- Launch — Click "Launch Notebook" to create a dedicated marimo pod
- Status polling — The UI polls pod status until the notebook is ready (Pending → Running)
- Embedded IDE — marimo is embedded via iframe once the pod is running
- Stop — Click "Stop Notebook" to delete the pod and service (notebook files persist on PVC)
- Isolation — Each user gets their own pod in the `notebooks` K8s namespace
- Auth — Only Data Engineer+ users can access notebooks; the `{user_id}` in the proxy path must match the authenticated user
!!! note
    The notebook service requires a Kubernetes cluster. When no cluster is available, the backend logs a warning and disables notebook endpoints gracefully.
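The per-user auth rule above reduces to two checks. A minimal sketch, where the role names are assumptions:

```python
# Illustrative notebook-proxy authorization check (role names assumed).
ENGINEER_ROLES = {"DataEngineer", "LeadDataEngineer", "Admin"}

def authorize_notebook_access(path_user_id: str, auth_user_id: str, role: str) -> bool:
    """Allow only Data Engineer+ users, and only on their own proxy path."""
    return role in ENGINEER_ROLES and path_user_id == auth_user_id
```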
## Workspace Management
Manage multi-tenant workspaces from the System Settings page (Admin or Lead Data Engineer role required):
- Create — Provision a new workspace with dedicated K8s namespaces and network policies
- IAM Role — Set the IRSA role ARN and sync annotations to all ServiceAccounts
- Retry — Re-run provisioning for workspaces in error state
- Delete — Remove workspaces (only when no users are assigned)
For full details on workspace isolation, IAM configuration, and Terraform provisioning, see Workspaces.
## Streaming Monitor
Real-time pipeline monitoring via WebSocket:
- Live metrics — Batches processed, rows processed, status
- Pipeline status — Running, stopped, error states
- Auto-refresh — Metrics update in real-time via WebSocket connection
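On the client side, each WebSocket frame is merged into the per-pipeline state that drives the dashboard. A sketch of that merge, assuming a hypothetical message shape:

```python
import json

def apply_metrics_message(state: dict, raw: str) -> dict:
    """Merge one WebSocket metrics frame into per-pipeline UI state.
    The message fields are assumptions for illustration."""
    msg = json.loads(raw)
    pipeline = state.setdefault(
        msg["pipeline_id"], {"batches": 0, "rows": 0, "status": "unknown"}
    )
    pipeline["batches"] = msg.get("batches_processed", pipeline["batches"])
    pipeline["rows"] = msg.get("rows_processed", pipeline["rows"])
    pipeline["status"] = msg.get("status", pipeline["status"])
    return state
```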
## Agent Chat
Interact with TRex through a chat interface:
- LLM selector — Choose provider and model
- Chat panel — Send natural language commands
- Tool call display — See which tools the agent invoked and their results
- Streaming responses — Real-time response streaming via server-sent events
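Server-sent events arrive as `data:` lines separated by blank lines; the chat panel accumulates them into the streamed response. A minimal parser sketch (real SSE handling lives in the frontend):

```python
def parse_sse(stream: str) -> list[str]:
    """Collect the data payloads from a server-sent-events stream."""
    events = []
    for block in stream.split("\n\n"):          # events are blank-line separated
        data_lines = [
            line[len("data: "):]
            for line in block.splitlines()
            if line.startswith("data: ")
        ]
        if data_lines:                           # multi-line data joins with \n
            events.append("\n".join(data_lines))
    return events
```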