This repository contains two marimo dashboards built on top of the SLAC
virtual_accelerator staged model for the CU injector / HXR line.
Path: `lume_visualizations/live_stream_monitor.py`
This app has two tabs:
- Live monitoring — continuously reads the staged model input PVs from EPICS, pushes them through the model, and plots scalar histories versus time. The time-series x-axis shows Pacific time.
- Interactive offline changes — lets you set model inputs via number boxes and evaluate the model on demand. The dashboard updates reactively whenever you change a value. An "Apply current machine values" button fetches the current EPICS PV values and loads them into the input boxes.
The live and interactive tabs use separate model instances so they never interfere with each other.
The EPICS input layer lives in `lume_visualizations/epics_controls.py` so it
can be swapped out without changing the marimo UI code.
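Because the UI depends only on a narrow read interface, the EPICS layer can be swapped for a fake one. A minimal sketch of that idea — the `PVReader` protocol and `FakeReader` names here are hypothetical, not the actual interface in `epics_controls.py`:

```python
from typing import Mapping, Protocol


class PVReader(Protocol):
    """The narrow interface the dashboard code would depend on (hypothetical)."""

    def read(self, pv_names: list[str]) -> Mapping[str, float]: ...


class FakeReader:
    """Drop-in stand-in for the EPICS-backed reader, useful in tests."""

    def __init__(self, values: Mapping[str, float]):
        self._values = dict(values)

    def read(self, pv_names: list[str]) -> Mapping[str, float]:
        # Return only the requested PVs, mirroring a caget round-trip.
        return {name: self._values[name] for name in pv_names}
```

Any object with a matching `read` method can then back the UI, which is what makes the fake IOC / real machine swap cheap.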
Path: `lume_visualizations/quad_scan_monitor.py`
This app runs a quadrupole scan on `QUAD:IN20:525:BCTRL` and lets you switch the
display between OTR2, OTR3, and OTR4.
OTR2 uses `input_beam` for the x-px phase-space plot and scalar PV outputs for `sigma_x`, `sigma_y`, `sigma_z`, `norm_emit_x`, and `norm_emit_y`. OTR3 uses `OTR3_beam` and `OTRS:IN20:621:Image:ArrayData`. OTR4 uses `OTR4_beam` and `OTRS:IN20:711:Image:ArrayData`.
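The screen switch amounts to a small lookup from display name to model outputs. A sketch of such a table — the dict layout is illustrative; only the beam and PV names come from the description above:

```python
# Display name -> which model outputs back it (layout illustrative).
OTR_DISPLAYS = {
    "OTR2": {
        "beam": "input_beam",  # drives the x-px phase-space plot
        "scalars": ["sigma_x", "sigma_y", "sigma_z",
                    "norm_emit_x", "norm_emit_y"],
    },
    "OTR3": {
        "beam": "OTR3_beam",
        "image_pv": "OTRS:IN20:621:Image:ArrayData",
    },
    "OTR4": {
        "beam": "OTR4_beam",
        "image_pv": "OTRS:IN20:711:Image:ArrayData",
    },
}
```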
Path: `lume_visualizations/fake_epics_ioc.py`
This is a standalone fake EPICS IOC for testing the live-streaming tab without access to the real machine. It serves the same staged-model input PV names used by the live monitor and updates them continuously with smooth synthetic motion.
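One simple way to get "smooth synthetic motion" is a slow sinusoidal drift around each PV's baseline. A sketch of that idea — the function name and parameters are illustrative, not the fake IOC's actual code:

```python
import math


def synthetic_value(base: float, t: float, amplitude: float = 0.02,
                    period: float = 30.0, phase: float = 0.0) -> float:
    """Drift smoothly around `base`: a slow sinusoid, with a distinct
    phase per PV so the channels do not move in lockstep."""
    return base * (1.0 + amplitude * math.sin(2.0 * math.pi * t / period + phase))
```

Calling this once per update period with `t = time.monotonic()` produces continuous, bounded motion that exercises the live plots without sudden jumps.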
Create a conda env, install this package as editable, then clone and install the virtual-accelerator
repo from the clone (a workaround until lume-torch models are installable):

```shell
conda activate va-dev
export LCLS_LATTICE=/path/to/lcls-lattice/
pip install ../virtual-accelerator
pip install -e .
```

caproto is included so the fake IOC can be run from the same environment.
You can run the apps directly with marimo:

```shell
marimo run lume_visualizations/live_stream_monitor.py
```

Or via the packaged entry points:

```shell
lume-fake-epics-ioc --list-pvs
lume-quad-scan --host 127.0.0.1 --port 2718
lume-live-monitor --host 127.0.0.1 --port 2719
```

For the fake IOC specifically, there is also a repo-local launcher script:

```shell
./start-fake-epics-ioc.sh --update-period 0.5
```

By default, the fake IOC binds to 127.0.0.1 and disables CA beacon
broadcasting. This avoids caproto's fallback beacon target
255.255.255.255, which can fail on macOS and some locked-down networks.
If you need LAN-visible discovery, pass `--interfaces 0.0.0.0 --broadcast-auto-beacons` or set the `EPICS_CAS_*` server environment variables
explicitly.
To test the live streaming app against a local fake IOC instead of the machine:

- Start the fake IOC in one terminal:

  ```shell
  conda activate va-dev
  ./start-fake-epics-ioc.sh --update-period 0.5
  ```

- In a second terminal, start the app:

  ```shell
  conda activate va-dev
  export LCLS_LATTICE=/path/to/lcls-lattice/
  marimo run lume_visualizations/live_stream_monitor.py
  ```

The app automatically defaults to `EPICS_CA_ADDR_LIST=127.0.0.1` when no EPICS
environment variables are set, so no manual export is needed for a purely local
fake-IOC run. If you are in an environment that already has EPICS env vars
pointing elsewhere, override them explicitly before launching:

```shell
export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=127.0.0.1
```

The included Dockerfile builds a runnable image that:

- installs this package,
- installs the `virtual_accelerator` dependency from GitHub,
- clones the `lcls-lattice` repository into `/opt/lcls-lattice` (at a pinned commit), and
- defaults to serving the live monitor on port 2719 via `marimo run`.
Build and run it locally:

```shell
docker build -t lume-visualizations .
docker run --rm -p 2718:2718 -p 2719:2719 lume-visualizations
```

To run the live monitor in containerized test mode with the fake IOC started in the same container:

```shell
docker run --rm -e LUME_START_FAKE_EPICS=1 -p 2719:2719 lume-visualizations
```

This automatically sets the EPICS client to use 127.0.0.1 inside the container,
launches the fake IOC, waits up to 15 s for it to become ready, then starts the
app. No additional `EPICS_CA_*` flags are needed.
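The "wait up to 15 s" step can be implemented by polling the CA server port until a TCP connect succeeds. A sketch of that pattern — the function name, default port, and intervals are illustrative; the real entrypoint may check readiness differently:

```python
import socket
import time


def wait_for_ioc(host: str = "127.0.0.1", port: int = 5064,
                 timeout: float = 15.0, poll: float = 0.5) -> bool:
    """Poll until a TCP connection to the CA server port succeeds,
    or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll):
                return True  # something is accepting connections
        except OSError:
            time.sleep(poll)  # not up yet; retry shortly
    return False
```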
To run the quad scan app instead:

```shell
docker run --rm -p 2718:2718 lume-visualizations \
  lume-quad-scan --host 0.0.0.0 --port 2718 --headless
```

Kubernetes manifests live under `deploy/kubernetes/`.
The live monitor runs as a 5-worker StatefulSet behind a lightweight
allocator that assigns each browser session to a dedicated worker. This is
required because marimo in run mode cannot handle multiple concurrent
sessions (torch double-load causes segfaults).
```
Browser → /live-monitor/    → nginx-ingress → Allocator (307 redirect)
Browser → /live-monitor/wN/ → nginx-ingress → Worker N (marimo directly)
```
Each worker runs `marimo run` with `--base-url /live-monitor/wN`. The allocator
tracks worker occupancy via heartbeats sent from client-side JavaScript.
Instructions for changing the number of workers are documented in `live-monitor.yaml`.
Some files are mounted via ConfigMap to allow rapid iteration without rebuilding the Docker image:
| File | In ConfigMap? | Why |
|---|---|---|
| `live_stream_monitor.py` | Yes | UI/dashboard code changes frequently |
| `live_stream_monitor.css` | Yes | Styling tweaks |
| `live_stream_monitor.head.html` | Yes | Session JS, heartbeat, error-recovery |
| `dashboard.py` | Yes | Plot layout, timezone, axis labels |
| `live_monitor_allocator.py` | Yes | Allocator routing logic |
| `beam_monitor.py` | No | Stable model interface, in Docker image |
| `config.py` | No | PV definitions, in Docker image |
| `epics_controls.py` | No | EPICS reader, in Docker image |
Trade-off: ConfigMap-mounted files can be updated with `kubectl apply -k .`
without a Docker rebuild (seconds vs. minutes). The downside is that the source must
be kept in sync between `lume_visualizations/` and `deploy/kubernetes/live-monitor-ui/`.
Files that rarely change stay in the Docker image for simplicity.
| File | Purpose |
|---|---|
| `namespace.yaml` | Creates the `lume-visualizations` namespace |
| `configmap.yaml` | Provides the `LCLS_LATTICE` path |
| `configmap-epics-fake.yaml` | EPICS config for the in-pod fake IOC |
| `configmap-epics-real.yaml` | EPICS config for the real CA gateway |
| `live-monitor.yaml` | Allocator Deployment, per-pod Services (w0–w4), StatefulSet |
| `ingress.yaml` | Per-worker paths + catch-all to allocator |
| `quad-scan.yaml` | Quad scan app Deployment |
| `quad-scan-ingress.yaml` | Ingress for the quad scan app |
| `kustomization.yaml` | Applies everything, ConfigMapGenerator, image tag override |
The container sees all host CPUs but is cgroup-limited to 2 cores.
Without thread pinning, torch and OpenMP spawn too many threads and contend
heavily. The StatefulSet sets OMP_NUM_THREADS=2, MKL_NUM_THREADS=2,
OPENBLAS_NUM_THREADS=2, and TORCH_NUM_THREADS=2 to match the CPU limit.
The app also calls torch.set_num_threads() at import time. Without this fix,
model evaluation takes ~5–10 s per shot; with it, ~180 ms.
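The same pinning can be reproduced for local runs by setting the variables before any numpy/torch import. A sketch — the `pin_thread_counts` helper is illustrative, not the app's actual code:

```python
import os


def pin_thread_counts(limit: int = 2) -> None:
    """Pin BLAS/OpenMP/torch thread pools to the cgroup CPU limit.
    Must run before numpy/torch are imported: the native runtimes
    read these variables once, at import time."""
    for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS",
                "OPENBLAS_NUM_THREADS", "TORCH_NUM_THREADS"):
        os.environ[var] = str(limit)


pin_thread_counts(2)
# After importing torch, the app can additionally call
# torch.set_num_threads(2) as a runtime backstop.
```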
Open `kustomization.yaml` and change the EPICS ConfigMap resource:

```yaml
resources:
  - configmap-epics-fake.yaml # in-pod fake IOC (local/staging)
  # - configmap-epics-real.yaml # LCLS CA gateway, not pushed to public repo
```

Then deploy:

```shell
cd deploy/kubernetes
kubectl apply -k .
kubectl rollout status statefulset/lume-live-monitor-worker -n lume-visualizations
```

Clean up stale ConfigMaps after deploy (if needed):

```shell
kubectl get configmaps -n lume-visualizations | grep live-monitor-ui
kubectl delete configmap -n lume-visualizations <old-hash>
```

To update after an image rebuild:

```shell
kubectl rollout status statefulset/lume-live-monitor-worker -n lume-visualizations
kubectl rollout restart deployment/lume-live-monitor-allocator -n lume-visualizations
```

To change the number of workers:
- In `live-monitor.yaml`: change `LUME_WORKER_COUNT`, the StatefulSet `replicas`, and add/remove per-pod Service blocks.
- In `ingress.yaml`: add/remove `/live-monitor/wN` path entries.
- Apply: `kubectl apply -k .`
`.github/workflows/build-container.yml` builds the Docker image and pushes it to
GitHub Container Registry as `ghcr.io/slaclab/lume-visualizations` on pushes to
`main`, tags matching `v*`, and manual workflow dispatches.
- The "Apply current machine values" button looks off when running locally but looks fine in the k8s deployment; the settings are optimized for k8s. The cause of the difference is not yet understood.
- May need to modularize further to support easier swaps between `LUMEModel` instances (dynamically generate from `LUMEModel`s, and/or make configs external and import them only at the consumer level)
- Double-check all units and scales
- Add info in the UI about which model is running
- Right now it only sets the injector model PVs. We should also pull in all `cu_hxr_bmad_model` writable PVs and set them in the model.
- Add Bmad writable PVs to the interactive sliders (can we dynamically generate these instead of hardcoding configs?)
- May need to get min/max of Bmad inputs from PV info
- Later:
  - Add support for swapping models: staged model, or `cu_hxr_bmad_model` only
  - Make tracking start and end a config that users can change? (need to think about this)