LPD: lessen overhead caused by changing number of frames from detector
Currently, the correction devices are written assuming that the number of frames per train we get from the detector will not change very often. This assumption may turn out to be false, either because the LPD will start getting very frequent veto pattern updates or because the DAQ (with any detector) applies frame masking in an online data reduction scenario (at least for now, the monitoring output is filtered the same way as the data written to file).
Some things that happen when the number of frames changes:
- The correction runner is replaced, releasing and re-acquiring buffers
  - Have not benchmarked this, but I would expect << 1 s of blocking; still, it could be an issue if it happens on a per-train basis
  - Potential solution: keep permanent buffers sized for the maximum number of frames and only use the first n entries for actual data (see the first sketch after this list)
- The numpy array view of the shmem memory allocation is replaced with the "new" `ndarray`, starting over at index 0
  - This will break terribly; if the frame count changes on every train, we would overwrite a lot of data that is still fairly fresh
  - Potential solution: make the handler keep track of the last written "byte position" in the underlying memory and start writing from there (rounded up to some alignment boundary) instead of always starting from 0 (see the second sketch after this list)
  - Alternative solution: same as for the correction buffers, just waste a bunch of RAM by leaving the `ndarray` shape at the maximum frames per train and only actually using the first n frames of each (probably add an extra field to the shmem handle string to specify this n); see the third sketch after this list
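
First sketch: the "permanent buffer" idea for the correction runner, in plain numpy. The class and method names (`MaxSizedFrameBuffer`, `view_for`) are made up for illustration and are not the existing correction runner API; the point is just that slicing hands out a view of a permanent allocation, so nothing is released or re-acquired when the frame count changes.

```python
import numpy as np


class MaxSizedFrameBuffer:
    """Correction buffer allocated once for the maximum expected frames
    per train; per train, only a view of the first n_frames entries is
    handed out, so no reallocation happens when the frame count changes."""

    def __init__(self, max_frames, frame_shape, dtype=np.float32):
        self._full = np.empty((max_frames,) + tuple(frame_shape), dtype=dtype)

    def view_for(self, n_frames):
        if n_frames > self._full.shape[0]:
            raise ValueError("train has more frames than the buffer was sized for")
        # Slicing returns a view into the permanent allocation
        return self._full[:n_frames]


# Frame count drops from 512 to 300 between trains without any reallocation
buf = MaxSizedFrameBuffer(max_frames=512, frame_shape=(256, 256))
train_a = buf.view_for(512)
train_b = buf.view_for(300)
assert np.shares_memory(train_a, train_b)  # both are views of the same memory
```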
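
Second sketch: the byte-position idea for the shmem allocation, assuming the underlying memory is a `multiprocessing.shared_memory.SharedMemory` block. The `CursorAllocator` name, the alignment value, and the wrap-around behaviour are illustrative assumptions, not the current handler; the point is that new ndarrays are mapped after the last written position instead of always at offset 0, so recent trains stay intact longer.

```python
import numpy as np
from multiprocessing import shared_memory


def round_up(offset, alignment):
    """Round offset up to the next multiple of alignment."""
    return -(-offset // alignment) * alignment


class CursorAllocator:
    """Maps ndarrays into a shared memory block starting from the last
    written byte position (rounded up to an alignment boundary), only
    wrapping back to offset 0 when the next allocation would not fit."""

    def __init__(self, shm, alignment=4096):
        self._shm = shm
        self._alignment = alignment
        self._cursor = 0

    def allocate(self, shape, dtype=np.float32):
        nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
        if nbytes > self._shm.size:
            raise ValueError("allocation larger than the shared memory block")
        offset = round_up(self._cursor, self._alignment)
        if offset + nbytes > self._shm.size:
            offset = 0  # not enough room after the cursor: wrap around
        self._cursor = offset + nbytes
        return np.ndarray(shape, dtype=dtype, buffer=self._shm.buf, offset=offset)


# Two trains with different frame counts land at different offsets
shm = shared_memory.SharedMemory(create=True, size=16 * 1024 * 1024)
alloc = CursorAllocator(shm)
train_a = alloc.allocate((352, 64, 64), dtype=np.float16)
train_b = alloc.allocate((100, 64, 64), dtype=np.float16)  # does not clobber train_a
assert not np.shares_memory(train_a, train_b)
del train_a, train_b  # release views before closing the shared memory
shm.close()
shm.unlink()
```

The alignment rounding costs a little space per train but keeps offsets predictable; a real implementation would also need to invalidate handles whose data the cursor is about to overwrite.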
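
Third sketch: the alternative of keeping the shmem `ndarray` at its maximum shape and adding a field for n to the handle string. The `$`-separated layout below is a guess for illustration, not the actual handle format; the only addition is the last field, which tells the consumer how many leading frames contain data.

```python
import numpy as np
from multiprocessing import shared_memory

# Assumed handle layout: "<shm name>$<dtype>$<full shape>$<frames used>"
# The shape field stays at the maximum frames per train; the extra last
# field says how many leading frames actually hold data for this train.


def write_handle(shm_name, dtype, full_shape, n_frames):
    shape_str = ",".join(str(d) for d in full_shape)
    return f"{shm_name}${np.dtype(dtype).name}${shape_str}${n_frames}"


def open_handle(handle):
    shm_name, dtype_str, shape_str, n_str = handle.split("$")
    full_shape = tuple(int(d) for d in shape_str.split(","))
    shm = shared_memory.SharedMemory(name=shm_name)
    full = np.ndarray(full_shape, dtype=np.dtype(dtype_str), buffer=shm.buf)
    return shm, full[: int(n_str)]  # only the first n frames are meaningful


# Round trip: allocation stays sized for 512 frames, this train uses 300
full_shape, dtype = (512, 64, 64), np.float32
shm = shared_memory.SharedMemory(create=True, size=int(np.prod(full_shape)) * 4)
handle = write_handle(shm.name, dtype, full_shape, n_frames=300)
reader_shm, data = open_handle(handle)
assert data.shape == (300, 64, 64)
del data  # release the view before closing
reader_shm.close()
shm.close()
shm.unlink()
```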