Fix stacking buffer shape issue
@@ -329,30 +329,94 @@ class ShmemTrainMatcher(TrainMatcher.TrainMatcher):
Zeros, or something configurable (we could even add a column to the stacking configuration table specifying what missing entries should be for each stacking group). In my own test environment, I at some point hotpatched the code to zero-initialize while testing LPD + CrystFEL, because the geometry expects data shaped for 16 modules; with only 15 modules present, a lot of arbitrarily initialized data would come through. But generally, yes, we should handle missing data per train.
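For illustration, a minimal sketch of what a configurable fill value for missing stacking entries could look like (hypothetical names, not calng's actual buffer-allocation code):

```python
import numpy as np

# Hypothetical helper: allocate the per-train stacking buffer with np.full
# instead of np.empty, so slots for missing modules hold a defined value.
def make_stacking_buffer(n_modules, module_shape, dtype, fill_value=0):
    return np.full((n_modules, *module_shape), fill_value, dtype=dtype)

# Example: geometry expects 16 LPD modules, but only 15 arrive this train.
buffer = make_stacking_buffer(16, (256, 256), np.float32, fill_value=0)
for index in range(15):  # stand-in for the modules that actually arrived
    buffer[index] = np.ones((256, 256), dtype=np.float32)
# buffer[15] keeps the configured fill value instead of arbitrary memory.
```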
@@ -364,76 +428,42 @@ class ShmemTrainMatcher(TrainMatcher.TrainMatcher):
@@ -477,10 +507,20 @@ class ShmemTrainMatcher(TrainMatcher.TrainMatcher):
@@ -492,6 +532,7 @@ class ShmemTrainMatcher(TrainMatcher.TrainMatcher):
Currently, `orig_size` is not used (I see a commented-out check; should it be an assertion?)

Ahhh... And I'm wondering why I commented out this line: https://git.xfel.eu/calibration/calng/-/blob/fix/stacking-buffer-shape/src/calng/ShmemTrainMatcher.py#L364

I don't know what to do, because it depends on the filtering part. Right now there is no check, and filtering should raise an exception if the selection mask doesn't match the train size.
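As a rough sketch (illustrative names only, not the actual filtering code), raising on a mismatch could look like this:

```python
import numpy as np

def apply_selection(data: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a boolean frame-selection mask, raising if the sizes disagree."""
    if data.shape[0] != mask.shape[0]:
        raise ValueError(
            f"selection mask has {mask.shape[0]} entries, "
            f"but this dataset carries {data.shape[0]} frames"
        )
    return data[mask]
```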
I've changed filtering to ignore datasets if they don't match the selection mask. Now I use `origin_size` to deduce the stacking buffer size. Let's try without exceptions.
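A minimal sketch of that behavior, assuming `origin_size` is the pre-filter frame count (illustrative names, not the actual ShmemTrainMatcher code):

```python
import numpy as np

def filter_and_stack(source_data, mask, origin_size, module_shape, dtype=np.float32):
    """source_data maps module index (0..n-1) -> array of shape (n_frames, *module_shape)."""
    # Deduce the buffer's frame dimension from the original (pre-filter) size;
    # the selection mask is only applied when it actually matches that size.
    use_mask = mask.shape[0] == origin_size
    n_frames = int(mask.sum()) if use_mask else origin_size
    buffer = np.zeros((n_frames, len(source_data), *module_shape), dtype=dtype)
    for index, data in source_data.items():
        if data.shape[0] != origin_size:
            continue  # dataset doesn't match the expected size: skip, don't raise
        buffer[:, index] = data[mask] if use_mask else data
    return buffer
```

Skipping rather than raising keeps the matcher running when one source sends oddly shaped data, at the cost of silently leaving that module's slot at the fill value.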