"It is a known behaviour that cells around cell Id higher than 300 show peculiar CS signal, hence fitting procedure fails, leading to almost 100% of bad pixels. These cells are later on filled with median values calculated over all memory cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This is needed to determine how many ASICs are not working properly to exclude them from BP counts\n",
"counter = 0\n",
"for row in range(0,128,64):\n",
" for col in range(0, 512, 64):\n",
" mins = np.nanmedian(median_m[0][1, row:row+64, col:col+64])\n",
" if mins == -1.:\n",
" counter += 1\n",
"print(f'Number of not working ASICs: {counter}')"
" fres = copy.deepcopy(fres_copy) # this is needed to have raw fits without sanitization"
]
},
{
...
...
%% Cell type:markdown id: tags:
# Characterize AGIPD Current Source Data
Author: Detector Group, Version 1.0
The following notebook characterises AGIPD data taken with the current source (CS) and produces constants for relative gain correction.
The current source allows scanning through all three gain stages of the AGIPD: high, medium, and low. The amount of injected charge increases during the scan by applying current to the pixel with increasing integration time, which is proportional to the amount of charge collected by a pixel.
The current source data can be taken either with a constant integration-time (clk) step throughout the whole scan, or with several so-called loops covering the dynamic range, each loop using a different integration-time step.
Due to differences between ASICs, characterising all pixels would require datasets taken with seven different settings of the ITEST current, which is not feasible. Hence, we use the looping strategy: each loop has a different step between integration times and a different number of increments within the loop, so that together the loops cover the whole dynamic range of the AGIPD pixels.
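As an illustration of this looping scheme, the scanned integration-time (clk) axis can be reconstructed from the `steps` and `increments` parameters defined below; the helper function here is a hypothetical sketch, not part of the notebook itself.

``` python
import numpy as np

def build_clk_axis(steps, increments, start=0):
    """Concatenate the integration-time points of all loops.

    Each loop i contributes increments[i] points spaced by
    steps[i] clk units, continuing where the previous loop ended.
    """
    points = []
    clk = start
    for step, n in zip(steps, increments):
        for _ in range(n):
            points.append(clk)
            clk += step
    return np.asarray(points)

# Parameters from this notebook: three loops covering the dynamic range.
clk_axis = build_clk_axis(steps=[1, 10, 75], increments=[300, 400, 200])
print(len(clk_axis))  # 900 scan points in total
print(clk_axis[:3])   # [0 1 2]
```

The first loop samples the low end of the dynamic range finely (step 1), while later loops use coarser steps (10 and 75) to reach the high end with fewer points.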
The signal is injected in a column-wise fashion, with signal in every 4th column; hence, 4 files have to be merged to create a full image. Merging is done in a separate notebook, [CS_parallelMerging_NBC.ipynb](./CS_parallelMerging_NBC.ipynb), and the result is saved as .h5 files. This notebook loads the merged files; raw data are only used to retrieve the operation conditions.
First, the algorithm labels the individual gain stages, which are then fitted with a linear function, leading to three sets of slopes (m) and intercepts (b). Prior to fitting, the fit range is selected to exclude non-linear regions of the injected signal. Dark data are collected in the same fashion as the current source data, i.e. for each integration time step there is one dark signal data point; no averaging is performed. Offset correction is applied only for the high gain, as for medium and low gain it led to worse fitting results.
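A minimal sketch of such a per-stage linear fit, assuming a 1-D signal trace for a single pixel and a boolean mask labelling one gain stage; `fit_gain_stage` is a hypothetical name, not the notebook's actual implementation:

``` python
import numpy as np

def fit_gain_stage(clk, signal, stage_mask):
    """Fit signal = m * clk + b over the samples labelled as one gain stage.

    Returns slope m, intercept b and a reduced-chi2-like quality measure,
    so that bad fits can be flagged later during sanitisation.
    """
    x = clk[stage_mask]
    y = signal[stage_mask]
    m, b = np.polyfit(x, y, deg=1)
    residuals = y - (m * x + b)
    chi2 = np.sum(residuals**2) / max(len(x) - 2, 1)
    return m, b, chi2

# Toy example: a perfectly linear "high gain" region.
clk = np.arange(100, dtype=float)
signal = 2.0 * clk + 5.0
mask = clk < 60  # pretend the first 60 samples belong to high gain
m, b, chi2 = fit_gain_stage(clk, signal, mask)
```

Repeating this for the high-, medium-, and low-gain masks yields the three sets of slopes and intercepts described above.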
The fitted slopes and intercepts are sanitised in the next steps: problematic fits and pixels are marked as bad, and their values are replaced with medians.
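The median/MAD outlier replacement can be sketched as follows; this is a simplified illustration using the `sigma_dev_cut` parameter defined below, and the function name is hypothetical:

``` python
import numpy as np

def sanitize(values, sigma_dev_cut=5):
    """Replace outliers with the median of the distribution.

    A value is an outlier if it lies outside
    median +/- sigma_dev_cut * MAD (median absolute deviation).
    """
    med = np.nanmedian(values)
    mad = np.nanmedian(np.abs(values - med))
    bad = np.abs(values - med) > sigma_dev_cut * mad
    out = values.copy()
    out[bad] = med
    return out, bad

# Toy example: one clearly deviating slope among otherwise similar ones.
slopes = np.array([1.0, 1.1, 0.9, 1.05, 25.0])
clean, bad = sanitize(slopes)
```

In the toy example only the last value exceeds the median +- 5*MAD band and is replaced with the median of the distribution.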
Constants are saved with the following notation:
- Slopes:
    - mH: high gain slope
    - mM: medium gain slope
    - mL: low gain slope
- Intercepts:
    - bH: high gain intercept
    - bM: medium gain intercept
    - bL: low gain intercept
- Ratios:
    - H-M: ratio of the high-gain and medium-gain slopes
    - M-L: ratio of the medium-gain and low-gain slopes
%% Cell type:code id: tags:
``` python
in_folder = '/gpfs/exfel/exp/SPB/202330/p900340/scratch/CSmergedFiles/19012023/'  # path to input data, required
out_folder = "/gpfs/exfel/exp/SPB/202330/p900340/scratch/CS_Processing/test"  # path to output to, required
raw_folder = '/gpfs/exfel/exp/SPB/202330/p900340/raw/'  # path to raw folder, required
metadata_folder = ""  # directory containing calibration_metadata.yml when run by xfel-calibrate
dark_run = 5  # run containing CS-specific darks, required
modules = [8]  # modules to work on, required, range allowed
karabo_da = ["all"]
karabo_id_control = "SPB_IRU_AGIPD1M1"  # karabo-id for the control device, e.g. "MID_EXP_AGIPD1M1" or "SPB_IRU_AGIPD1M1"
karabo_id = "SPB_DET_AGIPD1M-1"
ctrl_source_template = '{}/MDL/FPGA_COMP'  # path to control information
instrument_source_template = '{}/DET/{}:xtdf'  # path in the HDF5 file to images
creation_time = ""  # to overwrite the measured creation_time; required format: YYYY-MM-DD HR:MN:SC, e.g. "2022-06-28 13:00:00"
creation_date_offset = "00:00:00"  # add an offset to creation date, e.g. to get different constants
cal_db_interface = "tcp://max-exfl-cal002:8015#8045"  # the database interface to use
local_output = True  # output constants locally
db_output = False  # output constants to database
bias_voltage = -1  # detector bias voltage; use -1 to use the value stored in slow data
mem_cells = -1  # number of memory cells used; use -1 to use the value stored in slow data
acq_rate = -1.  # the detector acquisition rate; use -1. to use the value stored in slow data
gain_setting = -1  # the gain setting; use -1 to use the value stored in slow data
sigma_dev_cut = 5  # parameters outside the range median +- sigma_dev_cut*MAD are replaced with the median
chi2_lim = 7  # limit on the chi2 of the fit
relgain_lim = [0.7, 1.3]  # limits for the relative gain
steps = [1, 10, 75]  # spacing between integration time steps for each loop
increments = [300, 400, 200]  # number of steps within a loop
```

%% Cell type:markdown id: tags:
It is a known behaviour that memory cells with cell ID above ~300 show a peculiar CS signal, so the fitting procedure fails there, leading to almost 100% bad pixels. These cells are later filled with median values calculated over all memory cells.
%% Cell type:code id: tags:
``` python
# This is needed to determine how many ASICs are not working properly to exclude them from BP counts