
[JUNGFRAU][DARK][BURST] Convert to float only within each process to save memory for > 4800 trains

Karim Ahmed requested to merge fix/Jungfrau_dark_process_4800_trains into master

Fixing a memory allocation error in dark processing of adaptive burst mode runs.

http://max-exfl016.desy.de:8008//gpfs/exfel/d/cal/caldb_store/xfel/reports/SPB/SPB_IRDA_JF4M/dark/dark_900258_r46_r47_r48_220303_134704.pdf

Description

@mramilli contacted me regarding a dark processing failure for SPB adaptive burst mode runs.

For normal operation in adaptive burst mode, the medium and low gain runs consist of 300 trains x 16 memory cells.
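
For a rough sense of scale, here is a back-of-the-envelope estimate (assumed numbers: one JUNGFRAU module is 1024 x 512 pixels and raw data is 16-bit); such a run is already several gigabytes per module before any float conversion:

```python
import numpy as np

frames = 300 * 16                 # trains x memory cells = 4800 frames
pixels = 1024 * 512               # assumed pixels per JUNGFRAU module
raw_gb = frames * pixels * np.dtype(np.uint16).itemsize / 1e9
f32_gb = frames * pixels * np.dtype(np.float32).itemsize / 1e9
print(f"raw ~{raw_gb:.1f} GB, float32 copy ~{f32_gb:.1f} GB per module")
# -> raw ~5.0 GB, float32 copy ~10.1 GB per module
```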

I am not sure why I did not face this issue while developing this part; I assume the runs I tested with were not as large as the ones we normally operate with.

The FXE reference run had the same issue and failed in the automated tests, but I mistakenly overlooked it because all JUNGFRAUs were expected to fail due to h5file validation.

To avoid this issue, the image data is now converted to float inside the per-memory-cell parallel processes rather than for the whole run at once. This uses half of the previously required memory.
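
For illustration, a minimal sketch of the idea (not the actual notebook code; function and variable names are hypothetical): each per-cell worker casts only its own slice of the raw data to float, so a full-size float copy of the run never has to be allocated up front.

```python
# Sketch only: assumes a raw array shaped (trains, cells, y, x) in uint16.
import numpy as np
from multiprocessing import Pool


def process_cell(args):
    """Compute dark constants for one memory cell, casting to float locally."""
    cell, raw_cell = args                 # raw_cell: (trains, y, x) uint16
    data = raw_cell.astype(np.float32)    # conversion happens per cell only
    return cell, data.mean(axis=0), data.std(axis=0)


def process_run(raw_images, n_cells=16):
    jobs = [(c, raw_images[:, c]) for c in range(n_cells)]
    with Pool(processes=n_cells) as pool:
        results = pool.map(process_cell, jobs)
    # offset/noise maps per memory cell
    return {c: (offset, noise) for c, offset, noise in results}
```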

FXE_XAD_JF1M-DARK-BURST_220303_151642.pdf

How Has This Been Tested?

Tested with the automated tests against all reference runs at: https://git.xfel.eu/detectors/pycalibration/-/merge_requests/610

Relevant Documents (optional)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

Reviewers

@calibration
