Currently our azimuthal integration is based on a plain sum of pixels, in which the binning does not account for the partial contribution of pixels that straddle bin boundaries. This could be improved with sub-pixel (pixel-splitting) techniques such as those in pyFAI. Jun has already done this in EXtra-foam (previously called KaraboFAI). It should improve the data analysis, especially for detectors with large pixels such as the DSSC.
While for the FastCCD detector this is probably straightforward to implement, for the DSSC there is the problem that the pixels are hexagonal, so the pixel splitting would have to take this into account to get a correct result. Maybe @zhujun could comment?
Hexagonal pixels are not considered in either EXtra-foam or EXtra-data. The geometry code simply treats each hexagonal pixel as a 0.236 mm by 0.204 mm rectangular pixel.
pyFAI can store the detector information in the form of a distortion (pixel-corner) array of shape (Ny, Nx, Nc, 3), where Ny and Nx are the dimensions of the detector, Nc is the number of corners of each pixel, and the last axis holds the coordinates of each corner. This array is considerably larger than the two-dimensional image array (Ny, Nx).
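To make the size argument concrete, here is a minimal numpy sketch that builds such a corner array for a hypothetical hexagonal-pixel detector. The (Ny, Nx, Nc, 3) layout follows the description above with Nc = 6; the pixel pitches are the 236 um x 204 um bounding box mentioned in this thread, but the corner geometry and lattice offset are illustrative assumptions, not the real DSSC layout.

```python
import numpy as np

NY, NX = 128, 512                    # toy detector dimensions (assumed)
PITCH_X, PITCH_Y = 236e-6, 204e-6    # pixel bounding box in metres

def hex_corner_array(ny=NY, nx=NX):
    # corners of a unit hexagon (y, x), scaled to the pixel pitch
    ang = np.pi / 3 * np.arange(6) + np.pi / 6
    cy = 0.5 * PITCH_Y * np.sin(ang)
    cx = 0.5 * PITCH_X * np.cos(ang)
    # pixel-centre grid; odd rows shifted by half a pitch (hexagonal packing)
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    xx = xx + 0.5 * (yy % 2)
    corners = np.zeros((ny, nx, 6, 3))           # last axis: (z, y, x)
    corners[..., 1] = yy[..., None] * PITCH_Y + cy   # y coordinate of corners
    corners[..., 2] = xx[..., None] * PITCH_X + cx   # x coordinate of corners
    return corners

corners = hex_corner_array()
print(corners.shape)                       # (128, 512, 6, 3)
print(corners.nbytes // (NY * NX * 8))     # 18 doubles per pixel vs 1 per image pixel
```

The 18x memory overhead per pixel is why feeding the full corner array to the integrator can be slow compared to the plain 2D case.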
More than a year ago, we tried to use the above distortion array to do the azimuthal integration for AGIPD/LPD, but we did not see any benefit; indeed, it was slow.
It would be interesting if any of you could check whether it is possible to do azimuthal integration for a hexagonal-pixel detector using pyFAI and let us know the difference. If there is a big enough gain, then we can start to integrate it into EXtra-foam/EXtra-data.
I was trying to implement sub-pixel integration using pyFAI. As a check, I am calculating the azimuthal integration both with the simple method and with pyFAI on the mean image (mean over pulseId and the scan variable). The simple integration works as expected, but pyFAI does not, and I am not sure where the problem is. Is it improper handling of NaN values, or the detector parameters? It does not give the correct result. I think the definition of q is a bit different in pyFAI, but that should not be a concern.
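For reference, a minimal sketch of the "simple" method: bin pixels by radius and average, explicitly skipping NaNs (masked/bad pixels). If NaNs are passed to an integrator without a mask, every bin touching one can come out wrong, which is one candidate explanation for the discrepancy. All names and sizes here are illustrative, not the actual toolbox code.

```python
import numpy as np

def simple_azimuthal_integration(img, cx, cy, n_bins=100):
    """Mean intensity per radial bin, ignoring NaN pixels."""
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - cx, y - cy)
    good = np.isfinite(img)                       # drop NaN pixels up front
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r[good], bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=img[good], minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    with np.errstate(invalid="ignore"):
        prof = sums / counts                      # NaN where a bin is empty
    centres = 0.5 * (bins[1:] + bins[:-1])
    return centres, prof

# sanity check: a constant image with a NaN patch should still give a flat profile
img = np.ones((64, 64))
img[10:20, 30:40] = np.nan
r, I = simple_azimuthal_integration(img, 32, 32)
```

Comparing this NaN-aware result against the pyFAI curve (with the NaNs converted to a proper mask for pyFAI) is one way to isolate whether the problem is NaN handling or the detector parameters.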
For the notebook: I am still using my toolbox for intra-train analysis (based on karabo-data) with the python3 kernel. For some reason the exfel kernel does not work for this, giving a netCDF OS error.
Toolbox:
/exp/SCS/201922/p002565/usr/Software/Naman-FeRh-DSSC/XAS_SAXS-decoupling/ToolBox
The current DSSC toolbox doesn't support pulse-resolved data (https://git.xfel.eu/gitlab/SCS/ToolBox/issues/7), so I average over pulse_id after loading. I also copied the .h5 data into .nc files in a 'test' subfolder.
With this I can do the azimuthal integration both with and without pyFAI; both look like some scattering signal, but they are somewhat different.
Thanks. It looks better than before, but it still does not solve the problem: all the azimuthal curves (pumped, unpumped, difference) look the same for the pyFAI integration, so I am missing something. Could you please verify that the definitions of the PONI and the centre are correct in my implementation? I have taken them as defined in EXtra-foam.
So it looks like I mixed up the x and y dimensions in the pyFAI integration definitions (poni1/poni2 and pixel1/pixel2).
Now the analytical, direct and pyFAI integrations all match.
There is a scaling factor of 204/236 (= the aspect ratio of the DSSC pixel) on the q range (x axis), which is probably because the q calculation in pyFAI is done differently.
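A scaling factor equal to the pixel aspect ratio is exactly what an axis mix-up produces: in pyFAI's convention, poni1/pixel1 refer to the slow (row) axis of the image array and poni2/pixel2 to the fast (column) axis, so assigning the two DSSC pitches to the wrong axes rescales the radial distance and hence q. A small numpy sketch (wavelength and distance are assumed values, not from this thread):

```python
import numpy as np

WL = 1e-10   # wavelength in metres (assumed)
L = 0.5      # sample-detector distance in metres (assumed)

def q_at(n_pixels, pitch):
    # momentum transfer q = 4*pi/lambda * sin(theta/2) for a pixel
    # n_pixels from the beam centre along one axis
    r = n_pixels * pitch
    return 4 * np.pi / WL * np.sin(0.5 * np.arctan2(r, L))

q_ok = q_at(100, 236e-6)     # correct pitch assigned to this axis
q_swap = q_at(100, 204e-6)   # pitches swapped between axes
print(q_swap / q_ok)         # close to 204/236 in the small-angle limit
```

So a q axis off by exactly 204/236 points at the pixel1/pixel2 assignment rather than at a genuinely different q definition.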
you are calculating an Airy pattern on a regular square lattice, not with hexagonal pixels on a hexagonal lattice organized in modules and ladders
you should compute the residual between the analytical model and the calculation result to see the difference better
Also, I was thinking that an Airy pattern might not be the best test pattern, since it varies smoothly with radius. Azimuthal integration of an annular ring should be better: errors in the azimuthal integration would show up as blurred edges.
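The annular-ring idea can be sketched in a few lines of numpy: a pattern that is 1 inside an annulus and 0 outside has a flat-top azimuthal profile with sharp edges, so any binning or pixel-splitting error shows up directly as blurring of those edges. Image size, centre and radii below are illustrative.

```python
import numpy as np

ny = nx = 256
cy = cx = 128.0
r_in, r_out = 40.0, 60.0     # annulus radii in pixels

y, x = np.indices((ny, nx))
r = np.hypot(x - cx, y - cy)
ring = ((r >= r_in) & (r < r_out)).astype(float)   # test image: 1 on the annulus

# crude azimuthal average in 1-pixel-wide radial bins
bins = np.arange(0, 121)
idx = np.digitize(r.ravel(), bins) - 1
mask = (idx >= 0) & (idx < 120)
prof = (np.bincount(idx[mask], weights=ring.ravel()[mask], minlength=120)
        / np.bincount(idx[mask], minlength=120))
# ideal profile: ~1 for r_in <= r < r_out, ~0 elsewhere; any smearing of the
# two step edges is a direct measure of the integration error
```

Plotting the residual between this profile and the ideal top-hat (as suggested above) localises the error at the edge bins.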
> you are calculating an Airy pattern on a regular square lattice, not with hexagonal pixels on a hexagonal lattice organized in modules and ladders
I think it would be better to convert the hexagonal pixels/lattice to rectangular coordinates and then calculate the Airy pattern or other shapes. Calculating the pattern directly on the hexagonal lattice might be computationally expensive.
Also, for plotting or normal integration we would need to convert to regular coordinates anyway.
In any case, do you see any particular difference (azimuthal-integration-wise) if patterns are calculated directly on (A) a square lattice, (B) hexagonal pixels on a hexagonal lattice, or (C) a square lattice obtained from the hexagonal lattice by conversion?
For the hexagonal-to-rectangular coordinate conversion, we can use the code by Andreas, which is in progress.
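For illustration only (this is not the in-progress code by Andreas), a toy nearest-neighbour regridding from a hexagonal lattice to a square grid could look like the following. It assumes odd rows are shifted by half a pixel in x, and uses the 236 um x 204 um pitches discussed above; a real conversion would likely use area-weighted resampling instead of nearest-neighbour.

```python
import numpy as np

PITCH_X, PITCH_Y = 236e-6, 204e-6   # assumed hexagonal-lattice pitches

def hex_to_square(img, sq_pitch=204e-6):
    """Resample a hex-lattice image onto a square grid (nearest neighbour)."""
    ny, nx = img.shape
    H, W = ny * PITCH_Y, nx * PITCH_X          # physical extent of the module
    nys = int(round(H / sq_pitch))
    nxs = int(round(W / sq_pitch))
    ys = np.arange(nys) * sq_pitch             # square-grid sample positions
    xs = np.arange(nxs) * sq_pitch
    YS, XS = np.meshgrid(ys, xs, indexing="ij")
    # nearest hexagonal row, then nearest column given that row's x offset
    ri = np.clip(np.round(YS / PITCH_Y).astype(int), 0, ny - 1)
    xoff = 0.5 * PITCH_X * (ri % 2)            # half-pitch shift on odd rows
    ci = np.clip(np.round((XS - xoff) / PITCH_X).astype(int), 0, nx - 1)
    return img[ri, ci]

img = np.arange(12.0).reshape(3, 4)            # tiny toy "hex" image
sq = hex_to_square(img)
```

This at least makes option (C) above cheap to test, even if the interpolation quality is crude.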
Calculated the azimuthal integration for the annular ring:
1: Normal and pyFAI integrations match well
2: Analytical and numerical integrations deviate a lot at sharp edges
3: Probably the analytical function itself is not so accurate?
I think the problem is in the pixel size you assume. The annular ring assumes square pixels, but your azimuthal integration does not (pix_size_x = 204e-6, pix_size_y = 236e-6), so the perfect ring shape is distorted.
When I correct for the pixel size, all integrations almost match, except that the pyFAI integration has a delta-function-type residual at the edges. This is within +/-1 pixel; I think it is because the analytical function and the direct integration don't assume pixel splitting, and the circle is not perfect to within +/-1 pixel. See 10-06-2020-Azimuthal-Integral-annular-ring-Corrected-Naman.ipynb