[JUNGFRAU][DARK][BURST] Convert to float only within each process to save memory for > 4800 trains
Fixes an error with memory allocation during dark processing of adaptive burst mode runs.
Description
@mramilli contacted me regarding a dark processing failure for SPB adaptive burst mode runs.
For normal operation in adaptive burst mode, the medium and low gain runs consist of 300 trains × 16 memory cells.
I am not sure why I did not hit this issue while developing this part, but I assume the runs used back then were not as large as the ones we normally operate with.
The FXE reference run had the same issue and failed in the automated tests, but I mistakenly overlooked it because all JUNGFRAUs were already failing the h5file validation as expected.
To avoid this issue, the images are now converted to float inside the per-memory-cell parallel processes instead of up front. This halves the memory used compared to the previous approach.
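For illustration, here is a minimal sketch of the idea, assuming a hypothetical `process_cell`/`process_dark` pair and plain `multiprocessing` rather than the actual pycalibration code paths: the raw `uint16` data stays in the parent process, and only each worker's per-cell slice is cast to `float32`.

```python
# Hypothetical sketch (simplified, not the actual pycalibration code):
# keep the raw images as uint16 in the parent process and convert to float32
# only inside each per-memory-cell worker, so a float copy of the full run
# is never held in memory at once.
#
# Rough scale for a 300-train x 16-cell run on one 512 x 1024 pixel JUNGFRAU
# module: 4800 frames ~ 4.7 GiB as uint16, but ~ 9.4 GiB once cast to float32.
import numpy as np
from multiprocessing import Pool


def process_cell(cell_images):
    """Compute offset and noise maps for one memory cell.

    The float conversion happens here, inside the worker, so each process
    only materialises the float copy of its own cell's frames.
    """
    data = cell_images.astype(np.float32)
    return data.mean(axis=0), data.std(axis=0)


def process_dark(raw, cell_ids, n_cells=16):
    """raw: uint16 array (frames, slow, fast); cell_ids: memory cell per frame."""
    # Previously the equivalent of `raw = raw.astype(np.float32)` ran here,
    # doubling the footprint of the whole run before any work was split up.
    blocks = [raw[cell_ids == c] for c in range(n_cells)]
    with Pool(n_cells) as pool:
        results = pool.map(process_cell, blocks)
    offset, noise = (np.stack(r) for r in zip(*results))
    return offset, noise
```

In this toy sketch each worker peaks at roughly one cell's worth of float data (about 0.6 GiB for the example above) instead of the whole run being cast at once.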
FXE_XAD_JF1M-DARK-BURST_220303_151642.pdf
How Has This Been Tested?
Tested with the automated tests against all reference runs at: https://git.xfel.eu/detectors/pycalibration/-/merge_requests/610
Relevant Documents (optional)
Types of changes
- Bug fix (non-breaking change which fixes an issue)