" mem_cell_order = \",\".join([str(c) for c in cellid_pattern]) + \",\"\n",
" if inject_cell_order:\n",
" mem_cell_order = \",\".join([str(c) for c in cellid_pattern]) + \",\"\n",
" else:\n",
" mem_cell_order = None\n",
"\n",
" # Do not store empty constants\n",
" # In case of 0 trains data_g is initiated with nans and never refilled.\n",
...
...
%% Cell type:markdown id: tags:
# LPD Offset, Noise and Dead Pixels Characterization #
Authors: M. Karnevskiy, S. Hauf
This notebook re-characterizes dark images to derive offset, noise and bad-pixel maps. All three types of constants are evaluated per pixel and per memory cell.
The notebook handles veto settings correctly, but note that if you veto cells you will not be able to use these offsets for runs with different veto settings - vetoed cells will have zero offset.
The evaluated calibration constants are stored locally and injected into the calibration database.
%% Cell type:code id: tags:
``` python
in_folder = "/gpfs/exfel/exp/FXE/202030/p900121/raw"  # path to input data, required
out_folder = "/gpfs/exfel/data/scratch/ahmedk/test/LPD/"  # path to output to, required
metadata_folder = ""  # directory containing calibration_metadata.yml when run by xfel-calibrate
sequence = 0  # sequence files to evaluate
modules = [-1]  # list of modules to evaluate, RANGE ALLOWED
run_high = 120  # run number in which high gain data was recorded, required
run_med = 121  # run number in which medium gain data was recorded, required
run_low = 122  # run number in which low gain data was recorded, required
karabo_id = "FXE_DET_LPD1M-1"  # karabo ID of the detector
karabo_da = ['-1']  # list of data aggregator names, default [-1] selects all data aggregators
receiver_id = "{}CH0"  # inset for receiver devices
path_template = 'RAW-R{:04d}-{}-S{:05d}.h5'  # the template to use to access data
h5path = '/INSTRUMENT/{}/DET/{}:xtdf/image'  # path in the HDF5 file to images
h5path_idx = '/INDEX/{}/DET/{}:xtdf/image'  # path in the HDF5 file to image indices
use_dir_creation_date = True  # use the creation date of the directory for database time derivation
cal_db_interface = "tcp://max-exfl016:8015#8025"  # the database interface to use
cal_db_timeout = 300000  # timeout on calibration database requests
local_output = True  # output constants locally
db_output = False  # output constants to database
capacitor_setting = 5  # capacitor setting for which data was taken
mem_cells = 512  # number of memory cells used
bias_voltage = 250  # detector bias voltage
thresholds_offset_sigma = 3.  # bad pixel relative threshold in terms of n sigma offset
thresholds_offset_hard = [400, 1500]  # bad pixel hard threshold for offset
thresholds_noise_sigma = 7.  # bad pixel relative threshold in terms of n sigma noise
thresholds_noise_hard = [1, 35]  # bad pixel hard threshold for noise
skip_first_ntrains = 10  # number of first trains to skip
instrument = "FXE"  # instrument name
ntrains = 100  # number of trains to use
high_res_badpix_3d = False  # plot bad-pixel summary in high resolution
test_for_normality = False  # perform normality test
inject_cell_order = True  # include memory cell order as part of the detector condition
```
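%% Cell type:markdown id: tags:
The sigma-based and hard thresholds above control which pixels are flagged as bad. The sketch below is an illustration only, not the notebook's actual implementation: it assumes `offset` and `noise` arrays of shape (pixels_x, pixels_y, cells) filled with toy data, and uses plain bit values rather than the notebook's actual bad-pixel flag definitions.
%% Cell type:code id: tags:
``` python
import numpy as np

# Toy per-cell constants; in the notebook these are derived from the dark runs.
offset = np.random.normal(1000, 30, (64, 64, 512))
noise = np.random.normal(10, 2, (64, 64, 512))

thresholds_offset_sigma = 3.
thresholds_offset_hard = [400, 1500]
thresholds_noise_sigma = 7.
thresholds_noise_hard = [1, 35]

bad_pixels = np.zeros(offset.shape, dtype=np.uint32)

# Relative (n-sigma) cuts around the per-cell median.
offset_med = np.nanmedian(offset, axis=(0, 1))
offset_std = np.nanstd(offset, axis=(0, 1))
bad_pixels[np.abs(offset - offset_med) > thresholds_offset_sigma * offset_std] |= 1

noise_med = np.nanmedian(noise, axis=(0, 1))
noise_std = np.nanstd(noise, axis=(0, 1))
bad_pixels[np.abs(noise - noise_med) > thresholds_noise_sigma * noise_std] |= 2

# Hard (absolute) cuts.
bad_pixels[(offset < thresholds_offset_hard[0]) | (offset > thresholds_offset_hard[1])] |= 4
bad_pixels[(noise < thresholds_noise_hard[0]) | (noise > thresholds_noise_hard[1])] |= 8

print("Fraction of flagged entries:", np.mean(bad_pixels > 0))
```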
%% Cell type:markdown id: tags:
Distribution of the pedestal (ADU) over trains for pixel (12, 12), memory cell 12. The median of the distribution is shown in yellow and the standard deviation in red. The green line shows the average over all pixels for the given memory cell and gain stage.
The distributions of raw pedestal values are tested for normality: a normality test is performed for each pixel and each memory cell. The plots below show a histogram of p-values and a 2D distribution of p-values for memory cell 12.
%% Cell type:code id: tags:
``` python
# Loop over capacitor settings, modules, constants
for cap in capacitor_settings:
    if not test_for_normality:
        print('Normality test was not requested. Flag `test_for_normality` False')
```
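%% Cell type:markdown id: tags:
When `test_for_normality` is enabled, the raw pedestal values are tested for normality per pixel and per memory cell. A minimal sketch of such a test on toy data is shown below; `data_g` and its shape are assumptions for illustration, not the notebook's actual arrays.
%% Cell type:code id: tags:
``` python
import numpy as np
from scipy import stats

# Toy pedestal data: (pixels_x, pixels_y, cells, trains).
data_g = np.random.normal(1000, 30, (16, 16, 4, 100))

# D'Agostino-Pearson normality test along the train axis;
# normal_test holds one p-value per pixel and memory cell.
_, normal_test = stats.normaltest(data_g, axis=3)

print("Shape of p-value map:", normal_test.shape)
print("Fraction with p < 0.01:", np.mean(normal_test < 0.01))
```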
%% Cell type:markdown id: tags:
Single-cell overviews make it possible to identify effects that act on all memory cells, e.g. at the sensor level. They also serve as a first sanity check of expected behaviour, e.g. whether structuring at the ASIC level is visible in the offsets while no other immediate artifacts appear.
The plots give an overview of the calibration constants averaged across tiles. A bad-pixel mask is applied. The constants are compared with pre-existing constants retrieved from the calibration database; the difference $\Delta$ between the old and new constants is shown.
%% Cell type:code id: tags:
``` python
time_summary = []
for cap, cap_data in old_mdata.items():
    time_summary.append(f"The following pre-existing constants are used for comparison for capacitor setting **{cap}**:")
    for qm, qm_data in cap_data.items():
        time_summary.append(f"- Module {qm}")
        for const, const_data in qm_data.items():
            time_summary.append(f"  - {const} created at {const_data['timestamp']}")
display(Markdown("\n".join(time_summary)))
```
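%% Cell type:markdown id: tags:
The difference $\Delta$ mentioned above is an element-wise comparison of the newly evaluated constants with those retrieved from the database. A sketch of such a comparison is given below; the helper and the commented usage line are illustrative, and the variable name for the retrieved constants is an assumption.
%% Cell type:code id: tags:
``` python
import numpy as np

def constant_delta(new, old):
    """Return new - old where both values are finite, NaN elsewhere."""
    delta = np.full(new.shape, np.nan, dtype=np.float64)
    ok = np.isfinite(new) & np.isfinite(old)
    delta[ok] = new[ok] - old[ok]
    return delta

# Hypothetical usage with the notebook's result dictionaries:
# delta_offset = constant_delta(res[cap][qm]['Offset'], old_const[cap][qm]['Offset'])
```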
%% Cell type:code id: tags:
``` python
# Loop over capacitor settings, modules, constants
for cap in res:
    for qm in res[cap]:
        for gain in range(3):
            display(Markdown('### Summary across tiles - {} gain'.format(gain_names[gain])))
```
%% Cell type:markdown id: tags:
## Variation of offset and noise across Tiles and ASICs ##
The following plots show the standard deviation $\sigma$ of the calibration constants. The plot of the standard deviation across tiles shows the pixels of one tile ($128 \times 32$); the value of each pixel is the standard deviation across the 16 tiles. The standard deviation across ASICs is evaluated over all tiles: the plot shows the pixels of one ASIC ($16 \times 32$), where each value is the standard deviation across all ASICs of the module.
%% Cell type:code id: tags:
``` python
# Loop over capacitor settings, modules, constants
for cap in res:
    for qm in res[cap]:
        for gain in range(3):
            display(Markdown('### Variation of offset and noise across ASICs - {} gain'.format(gain_names[gain])))

            fig = plt.figure(figsize=(15, 6))
            for iconst, const in enumerate(['Offset', 'Noise']):
                data = np.copy(res[cap][qm][const][:, :, :, gain])
                data[badpix_g[cap][qm][:, :, :, gain] > 0] = np.nan
                label = '$\sigma$ {} [ADU]'.format(const)

                dataA = np.nanmean(data, axis=2)  # average over cells
                dataA = dataA.reshape(8, 32, 16, 16)  # group pixels into ASIC-sized blocks (32 x 16 pixels)
                dataA = np.nanstd(dataA, axis=(0, 2))  # standard deviation across ASICs
                ax = fig.add_subplot(121 + iconst)
                _ = heatmapPlot(dataA, add_panels=True,
                                y_label='rows', x_label='columns',
                                lut_label=label, use_axis=ax,
                                panel_y_label=label, panel_x_label=label,
                                cmap='viridis'
                                )
            plt.show()
```
%% Cell type:code id: tags:
``` python
# Loop over capacitor settings, modules, constants
for cap in res:
    for qm in res[cap]:
        for gain in range(3):
            display(Markdown('### Variation of offset and noise across tiles - {} gain'.format(gain_names[gain])))

            fig = plt.figure(figsize=(15, 6))
            for iconst, const in enumerate(['Offset', 'Noise']):
                data = np.copy(res[cap][qm][const][:, :, :, gain])
                data[badpix_g[cap][qm][:, :, :, gain] > 0] = np.nan
                label = '$\sigma$ {} [ADU]'.format(const)

                # Group pixels into tile-sized blocks (32 x 128 pixels),
                # take the standard deviation across tiles, then average over cells.
                dataT = data.reshape(
                    int(data.shape[0] / 32),
                    32,
                    int(data.shape[1] / 128),
                    128,
                    data.shape[2])
                dataT = np.nanstd(dataT, axis=(0, 2))
                dataT = np.nanmean(dataT, axis=2)

                ax = fig.add_subplot(121 + iconst)
                _ = heatmapPlot(dataT, add_panels=True,
                                y_label='rows', x_label='columns',
                                lut_label=label, use_axis=ax,
                                panel_y_label=label, panel_x_label=label,
                                cmap='viridis')
            plt.show()
```
%% Cell type:raw id: tags:
.. raw:: latex
\newpage
%% Cell type:markdown id: tags:
## Aggregate values and per cell behaviour ##
The following tables and plots give an overview of statistical aggregates for each constant, as well as per-cell behaviour averaged across pixels.
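%% Cell type:markdown id: tags:
A simplified sketch of the per-cell aggregation behind these plots is shown below, assuming a constant of shape (pixels_x, pixels_y, cells, gains) with bad pixels already set to NaN; the toy array stands in for the notebook's actual constants.
%% Cell type:code id: tags:
``` python
import numpy as np

# Toy constant: (pixels_x, pixels_y, cells, gains), bad pixels as NaN.
const_data = np.random.normal(1000, 30, (64, 64, 512, 3))

# Aggregates over pixels, per memory cell and gain stage.
cell_mean = np.nanmean(const_data, axis=(0, 1))    # shape: (cells, gains)
cell_median = np.nanmedian(const_data, axis=(0, 1))
cell_std = np.nanstd(const_data, axis=(0, 1))

print(cell_mean.shape, cell_median.shape, cell_std.shape)
```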
%% Cell type:code id: tags:
``` python
# Loop over capacitor settings, modules, constants
for cap in res:
    for qm in res[cap]:
        for gain in range(3):
            display(Markdown('### Mean over pixels - {} gain'.format(gain_names[gain])))
```
%% Cell type:markdown id: tags:
The following tables show summary information for the evaluated module. Values for the currently evaluated constants are compared with the values of pre-existing constants retrieved from the calibration database.
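%% Cell type:markdown id: tags:
As an illustration of how such a comparison table could be assembled, the sketch below computes a few aggregates for a new constant and a database counterpart on toy arrays; the names and layout are assumptions, not the notebook's actual code.
%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd

# Toy arrays standing in for a newly evaluated constant and its
# pre-existing database counterpart (same shape, bad pixels as NaN).
new_const = np.random.normal(1000, 30, (64, 64, 512))
old_const = np.random.normal(1005, 30, (64, 64, 512))

rows = []
for name, arr in [("Offset (new)", new_const),
                  ("Offset (old)", old_const),
                  ("Difference", new_const - old_const)]:
    rows.append({
        "Constant": name,
        "Mean [ADU]": np.nanmean(arr),
        "Median [ADU]": np.nanmedian(arr),
        "Std [ADU]": np.nanstd(arr),
    })

display(pd.DataFrame(rows).round(2))
```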