First of all, thanks a lot for the great l4acados project!
I’m using ResidualLearningMPC on a pendulum-on-cart model. The MLP residual model has 2 hidden layers with 32 neurons each. During closed-loop runs, I see logs like:
`Time total: 87.915 ms, feedback: 0.154 ms, preparation: 0.727 ms, to_tensor: 5.723 ms, residual: 0.374 ms, nominal: 0.114 ms`
The time_total (~87.9 ms) is far larger than the sum of the sub-timings (~7.1 ms). Could you clarify:
1. What’s included in time_total that isn’t in feedback/preparation/to_tensor/residual/nominal?
2. Are there additional phases (overhead, Python↔C crossings, memory copies) contributing to time_total?
3. Any tips to measure or reduce this gap?
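For context, here's roughly how I'm checking the gap on my side. This is only a minimal sketch with placeholder phases (`time.sleep` stands in for the actual preparation/residual/feedback work), not the real l4acados API: I wrap each sub-phase in its own timer, take a wall-clock measurement around the whole step, and attribute the difference to untimed overhead.

```python
import time

class PhaseTimer:
    """Accumulate per-phase wall-clock timings (in ms) around code sections."""
    def __init__(self):
        self.ms = {}

    def measure(self, name, fn):
        t0 = time.perf_counter()
        out = fn()
        self.ms[name] = self.ms.get(name, 0.0) + (time.perf_counter() - t0) * 1e3
        return out

timer = PhaseTimer()

def closed_loop_step():
    # Placeholder phases standing in for preparation / residual / feedback.
    timer.measure("preparation", lambda: time.sleep(0.001))
    timer.measure("residual", lambda: time.sleep(0.001))
    timer.measure("feedback", lambda: time.sleep(0.001))
    time.sleep(0.004)  # untimed overhead, e.g. copies and Python<->C crossings

t0 = time.perf_counter()
closed_loop_step()
total_ms = (time.perf_counter() - t0) * 1e3
gap_ms = total_ms - sum(timer.ms.values())
print(f"total: {total_ms:.1f} ms, timed: {sum(timer.ms.values()):.1f} ms, gap: {gap_ms:.1f} ms")
```

With this I can confirm the gap is real wall-clock time inside the step and not a logging artifact, but I can't tell *which* untimed phase it lands in.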