Author: European XFEL Detector Group, Version: 2.0
The following notebook provides dark image analysis and derives calibration constants for the ePix100 detector.
Dark characterization evaluates the offset and noise of the detector and gives information about bad pixels.
Noise and bad pixel maps are calculated independently for each of the four ASICs of ePix100, since their noise behaviour can be significantly different.
Common mode correction can be applied to increase sensitivity to noise-related bad pixels. It is achieved by subtracting from each pixel the median of all pixels that are read out at the same time along a row/column. This is done in an iterative process, in which a new bad pixel map is calculated and used to mask the data while the common mode values across the rows/columns are updated, as sketched below.
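For illustration, a single common mode pass could look like the following minimal sketch, assuming `data` holds the dark frames with shape `(n_frames, rows, cols)` and `bad` is the current boolean bad pixel map (both names are illustrative, not the notebook's own):

``` python
import numpy as np

def subtract_common_mode(data, bad):
    """Subtract per-frame row and column medians, ignoring flagged pixels."""
    for axis in (2, 1):  # axis=2: median along each row; axis=1: along each column
        masked = np.ma.masked_array(data, mask=np.broadcast_to(bad, data.shape))
        cm = np.ma.median(masked, axis=axis, keepdims=True)  # common mode estimate
        data = data - cm.filled(0)  # rows/columns without good pixels stay unchanged
    return data
```

Masked arrays keep already-flagged pixels out of the median estimate, so each pass of the iteration sees a cleaner common mode.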
The resulting maps are saved as .h5 files for later use and injected into the calibration database.
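Saving could look like the following minimal sketch, assuming a flat file layout with one dataset per constant map (the file name and layout are assumptions, not taken from this notebook):

``` python
import h5py

# Hypothetical layout: one dataset per constant map; the real file
# structure and naming are not shown in this section.
with h5py.File('epix100_dark_constants.h5', 'w') as f:
    for name, const in constant_maps.items():
        f.create_dataset(name, data=const)
```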
The bad pixel map is deduced by comparing offset and noise of each pixel ($v_i$) against the median value of the respective maps for each ASIC ($v_k$):
$$
v_i > \mathrm{median}(v_{k}) + n \sigma_{v_{k}}
$$
or
$$
v_i < \mathrm{median}(v_{k}) - n \sigma_{v_{k}}
$$
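In code, this cut could be sketched as follows for a single constant map, assuming the four ASICs tile the sensor as 2×2 quadrants (all function and variable names here are illustrative, not the notebook's own):

``` python
import numpy as np

def asic_outliers(values, n_sigma):
    """Flag pixels deviating more than n_sigma standard deviations from the ASIC median."""
    med, sigma = np.nanmedian(values), np.nanstd(values)
    return (values > med + n_sigma * sigma) | (values < med - n_sigma * sigma)

def bad_pixel_cut(constant_map, n_sigma):
    """Apply the median +/- n*sigma cut independently on each ASIC quadrant."""
    bad = np.zeros(constant_map.shape, dtype=bool)
    half_row, half_col = constant_map.shape[0] // 2, constant_map.shape[1] // 2
    for rows in (slice(None, half_row), slice(half_row, None)):
        for cols in (slice(None, half_col), slice(half_col, None)):
            bad[rows, cols] = asic_outliers(constant_map[rows, cols], n_sigma)
    return bad
```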
Values are encoded in a 32-bit mask, where for the bad pixels deduced from dark images the following non-zero entries are relevant:
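For illustration, the first entries could be encoded as bit flags in the style of the `BadPixels` enum used across the European XFEL calibration tools; the names and values below are an assumption, not taken from this notebook:

``` python
from enum import IntFlag

class BadPixels(IntFlag):
    # Assumed flag values; each condition occupies one bit of the 32-bit mask.
    OFFSET_OUT_OF_THRESHOLD = 1 << 0
    NOISE_OUT_OF_THRESHOLD = 1 << 1
    OFFSET_NOISE_EVAL_ERROR = 1 << 2

# A pixel failing both the offset and the noise cut carries the OR of both flags:
flags = BadPixels.OFFSET_OUT_OF_THRESHOLD | BadPixels.NOISE_OUT_OF_THRESHOLD
print(int(flags))  # -> 3
```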
%% Cell type:code id: tags:

``` python
fig = xana.heatmapPlot(
    np.exp(np.nan_to_num(np.log(constant_maps['BadPixelsDark']), neginf=np.nan)).squeeze(),  # convert zeros to NaN
    lut_label='Bad pixel value [ADU]',  # for plotting purposes
    x_label='Column',
    y_label='Row',
    x_range=(0, sensor_size[0]),
    y_range=(0, sensor_size[1])
)
fig.suptitle('Initial Bad Pixels Map', x=.5, y=.9, fontsize=16)
fig.set_size_inches(h=15, w=15)

step_timer.done_step('Initial Bad Pixels Map created. Elapsed Time')
print(bp_analysis_table(constant_maps['BadPixelsDark'], title='Initial Bad Pixel Analysis'))
```
%% Cell type:markdown id: tags:
## Common Mode Correction
Common mode correction is applied here to increase sensitivity to bad pixels. This is done in an iterative process; each iteration consists of the following steps (a sketch of one iteration follows the list):
1. Common mode noise is calculated and subtracted from the data.
2. The noise map is recalculated.
3. Bad pixels are recalculated based on the new noise map.
4. Data is masked based on the new bad pixel map.
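A minimal, self-contained sketch of one such iteration (per-ASIC handling omitted for brevity; `data` is assumed to be a float array of dark frames with shape `(n_frames, rows, cols)` and `bad` the current boolean bad pixel map; all names are illustrative):

``` python
import numpy as np

def cm_iteration(data, bad, n_sigma):
    """One pass of common mode subtraction followed by a bad pixel update."""
    # 1. Subtract row/column common mode, excluding known bad pixels from the medians.
    for axis in (2, 1):
        masked = np.ma.masked_array(data, mask=np.broadcast_to(bad, data.shape))
        data = data - np.ma.median(masked, axis=axis, keepdims=True).filled(0)
    # 2. Recalculate the noise map as the per-pixel standard deviation over frames.
    noise = np.nanstd(data, axis=0)
    # 3. Recalculate bad pixels from the new noise map, keeping earlier flags.
    med, sigma = np.nanmedian(noise), np.nanstd(noise)
    bad = bad | (noise > med + n_sigma * sigma) | (noise < med - n_sigma * sigma)
    # 4. Mask flagged pixels so the next iteration's common mode excludes them.
    data[:, bad] = np.nan
    return data, bad
```

Repeating this, e.g. `for _ in range(3): data, bad = cm_iteration(data, bad, 5.0)`, refines the noise map as progressively fewer noisy pixels contaminate the common mode estimate.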
%% Cell type:code id: tags:
``` python
step_timer.start()

data = data.astype(float)  # This conversion is needed for offset subtraction (integer data cannot hold negative results)