
Available Notebooks

The following notebooks are currently integrated into the European XFEL Offline Calibration toolchain.

AGIPD

Combine Constants

This notebook combines constants from various evaluations into a single set of calibration constants:

  • Dark image analysis, yielding offset and noise
  • Flat field analysis, yielding X-ray gain
  • Pulse capacitor analysis, yielding medium gain stage slopes and thresholds
  • Charge injection analysis, yielding low gain stage slopes and thresholds

These constants do not include offset and noise as they need to be reevaluated more frequently.

Additionally, a bad pixel mask for all gain stages is deduced from the input. The mask contains dedicated entries for all pixels and memory cells as well as all three gain stages.
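As an illustration of the combination step, a minimal sketch (assuming the per-gain-stage slopes and bad-pixel masks are already available as NumPy arrays of matching shape; the function and variable names are hypothetical) could look like this:

```python
import numpy as np

def combine_constants(slope_high, slope_medium, slope_low,
                      bp_high, bp_medium, bp_low):
    """Stack per-gain-stage slopes and bad-pixel masks into single constants.

    All inputs are assumed to share the shape (pixels_x, pixels_y, memory_cells).
    """
    # One slope entry per pixel, memory cell and gain stage.
    slopes = np.stack([slope_high, slope_medium, slope_low], axis=-1)
    # The bad pixel mask keeps dedicated entries for all three gain stages.
    bad_pixels = np.stack([bp_high, bp_medium, bp_low], axis=-1)
    return slopes, bad_pixels
```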

AGIPD Correction

Offline Correction for AGIPD Detector

AGIPD Characterize Dark Images

This notebook analyzes a set of dark images taken with the AGIPD detector to deduce detector offsets (pedestal), noise, bad-pixel maps and thresholds. All four types of constants are evaluated per-pixel and per-memory cell. Data for the detector's three gain stages needs to be present, split into separate runs.
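As a rough illustration of the per-pixel, per-memory-cell evaluation for one gain stage, a minimal sketch (assuming the dark frames are available as a NumPy array; the bad-pixel cut shown here is illustrative, not the notebook's actual criteria) might be:

```python
import numpy as np

def characterize_darks(dark_frames):
    """dark_frames: array of shape (n_images, n_memory_cells, x, y) for one gain stage."""
    offset = dark_frames.mean(axis=0)   # pedestal per memory cell and pixel
    noise = dark_frames.std(axis=0)     # rms spread around the pedestal

    # Illustrative bad-pixel cut: flag pixels whose offset deviates strongly
    # from the module median (the notebook's real criteria are more detailed).
    med = np.median(offset)
    mad = np.median(np.abs(offset - med))
    bad_pixels = np.abs(offset - med) > 5 * mad

    return offset, noise, bad_pixels
```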

The evaluated calibration constants are stored locally and injected into the calibration database.

Gain Characterization

This notebook is used to produce AGIPD Flat-Field constants.

Histogramming of AGIPD FF data

!!! warning

TODO: Add description for this notebook

Characterize AGIPD Pulse Capacitor Data

The following code characterizes AGIPD gain using data taken with the pulse capacitor source (PCS). The PCS allows scanning through the high and medium gains of AGIPD by successively increasing the number of charge pulses from an on-ASIC capacitor, thus increasing the charge a pixel sees in a given integration time.

Because induced charge does not originate from X-rays on the sensor, the gains evaluated here will later need to be rescaled with gains deduced from X-ray data.

PCS data is organized into multiple runs, as the on-ASIC current source cannot supply all pixels of a given module with charge at the same time. Hence, only certain pixel rows will have seen charge in a given image. These rows first need to be combined into single module images again. This script uses the new style of merging, which does not support the old data format.

We then use a K-means clustering algorithm to identify components in the resulting per-pixel data series, matching to three general regions:

  • High gain slope
  • Transition region, where gain switching occurs
  • Medium gain slope.

The same regions are present in the gain-bit data and are used to deduce the switching threshold.
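A minimal sketch of the clustering step for a single pixel's scan, using scikit-learn's KMeans (the choice of signal value and local slope as features is an assumption made for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_pc_scan(signal):
    """Split one pixel's PC scan (signal vs. number of charge pulses) into
    three clusters: high-gain slope, gain-switching transition, medium-gain slope."""
    pulses = np.arange(signal.size)
    local_slope = np.gradient(signal, pulses)
    features = np.column_stack([signal, local_slope])
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
    return labels  # cluster index per scan point
```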

The resulting slopes are then fitted with a linear function and a combination of a linear and exponential decay function to determine the relative gains of the pixels with respect to the module. Additionally, we deduce masks for bad pixels from the data.
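The fit itself could be sketched as follows with scipy.optimize.curve_fit; the exact functional form of the linear-plus-exponential-decay component used in the notebook may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, m, b):
    return m * x + b

def linear_plus_decay(x, m, b, a, tau):
    # Straight line plus an exponential deviation near the gain-switching point.
    return m * x + b + a * np.exp(-x / tau)

def fit_gain_regions(x_high, y_high, x_med, y_med):
    """x_*/y_* are the scan points assigned to the high- and medium-gain regions."""
    p_high, _ = curve_fit(linear, x_high, y_high)
    p_med, _ = curve_fit(linear_plus_decay, x_med, y_med,
                         p0=(1.0, float(y_med[0]), 1.0, max(x_med.ptp() / 5, 1.0)))
    return p_high, p_med
```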

DSSC

DSSC Offline Correction

Offline Correction for DSSC Detector

DSSC Characterize Dark Images

The following code analyzes a set of dark images taken with the DSSC detector to deduce detector offsets and noise. Data for the detector is taken in a single run; the DSSC does not acquire multiple gain stages.

The notebook explicitly performs what pyDetLib provides in its offset calculation method for streaming data.

EPIX100

ePix100 Data Correction

The following notebook provides data correction of images acquired with the ePix100 detector.

The sequence of corrections applied is: Offset -> Common Mode Noise -> Relative Gain -> Charge Sharing -> Absolute Gain.

Offset, common mode and gain corrected data is saved to /data/image/pixels in the CORR files.

If pattern classification is applied (charge sharing correction), this data will be saved to /data/image/pixels_classified, while the corresponding patterns will be saved to /data/image/patterns in the CORR files.
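As a usage example, the corrected datasets named above could be read back with h5py roughly as follows (the file name is a placeholder, and in practice the datasets may sit below an instrument-source group inside the CORR file):

```python
import h5py

# Placeholder file name; substitute an actual corrected sequence file.
with h5py.File("CORR-R0001-EPIX01-S00000.h5", "r") as f:
    pixels = f["/data/image/pixels"][:]  # offset / common mode / gain corrected
    if "/data/image/pixels_classified" in f:
        # Present only when charge-sharing (pattern classification) was applied.
        pixels_classified = f["/data/image/pixels_classified"][:]
        patterns = f["/data/image/patterns"][:]
```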

ePix100 Dark Characterization

The following notebook provides dark image analysis and calibration constants of the ePix100 detector.

Dark characterization evaluates offset and noise of the detector and gives information about bad pixels.

Noise and bad pixels maps are calculated independently for each of the 4 ASICs of ePix100, since their noise behavior can be significantly different.

Common mode correction can be applied to increase sensitivity to noise-related bad pixels. Common mode correction is achieved by subtracting the median of all pixels that are read out at the same time along a row/column. This is done in an iterative process, in which a new bad pixel map is calculated and used to mask the data as the common mode values across the rows/columns are updated.
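A minimal sketch of one pass of this correction, assuming a 2D image and a boolean bad-pixel mask (the iteration count and the way new bad pixels are derived are left out here):

```python
import numpy as np

def common_mode_correct(image, bad_pixels):
    """Subtract, along rows and then columns, the median of the good pixels
    that are read out at the same time (one iteration of the CM correction)."""
    data = np.where(bad_pixels, np.nan, image.astype(float))
    row_cm = np.nanmedian(data, axis=1, keepdims=True)
    data -= row_cm
    col_cm = np.nanmedian(data, axis=0, keepdims=True)
    return image - row_cm - col_cm

# In the iterative scheme described above, a new bad-pixel map would be
# derived from the corrected noise after each pass and fed back in.
```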

Resulting maps are saved as .h5 files for later use and injected into the calibration database.


EPIX10K

ePix10K Data Correction

The following notebook provides offset correction of images acquired with the ePix10K detector.

ePix10K Dark Characterization

The following notebook provides dark image analysis of the ePix10K detector.

Dark characterization evaluates offset and noise of the detector and gives information about bad pixels. Resulting maps are saved as .h5 files for later use and injected into the calibration database.

FASTCCD

FastCCD Data Correction

The following notebook provides correction of images acquired with the FastCCD.

FastCCD Dark Characterization

The following notebook provides dark image analysis of the FastCCD detector.

Dark characterization evaluates offset and noise of the FastCCD detector, corrects the noise for Common Mode (CM), and defines bad pixels relative to offset and CM corrected noise. Bad pixels are then excluded and CM corrected noise is recalculated excluding the bad pixels. Resulting offset and CM corrected noise maps, as well as the bad pixel map are sent to the calibration database.

GENERIC

Constants from DB to HDF5

Currently available instances are LPD1M1 and AGIPD1M1.

Statistical analysis of calibration factors

Plots calibration constants retrieved from the calibration database.

For visualization, calibration constants are averaged per group of pixels. The plots show each calibration constant over time. The values shown in the plots are saved to h5 files.

GOTTHARD2

Gotthard2 Offline Correction

Offline Correction for Gotthard2 Detector.

This notebook is able to correct 25um and 50um GH2 detectors using the same correction steps:

  • Convert 12-bit raw data into 10-bit, subtract the offset, then multiply with the gain constant.

| Correction     | Constants                                    | Boolean to enable/disable |
|----------------|----------------------------------------------|---------------------------|
| 12bit to 10bit | LUTGotthard2                                 |                           |
| Offset         | OffsetGotthard2                              | offset_correction         |
| Relative gain  | RelativeGainGotthard2 + BadPixelsFFGotthard2 | gain_correction           |
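A minimal sketch of these three steps for one raw data block, with constant names mirroring the table above (array shapes and the exact LUT layout are assumptions made for illustration):

```python
import numpy as np

def correct_gotthard2(raw, lut, offset, relative_gain,
                      offset_correction=True, gain_correction=True):
    """raw: uint16 12-bit ADC codes; lut: 4096-entry 12-bit -> 10-bit table
    (LUTGotthard2, layout simplified here to a flat table)."""
    data = lut[raw].astype(np.float32)      # 12 bit -> 10 bit conversion
    if offset_correction:
        data -= offset                      # OffsetGotthard2
    if gain_correction:
        data *= relative_gain               # RelativeGainGotthard2
    return data
```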

Besides the corrected data, a mask is stored using the bad pixel constants for the same parameter conditions and time:

  • BadPixelsDarkGotthard2
  • BadPixelsFFGotthard2, if relative gain correction is requested.

The correction is done per sequence file. If none of the selected sequence files have images to correct, the notebook will fail. The same happens if the needed dark calibration constants could not be retrieved for all modules while offset_correction is True. If one of the gain constants could not be retrieved, gain_correction is switched to False and gain correction is disabled.

Gotthard2 Dark Image Characterization

The following processes dark images taken with the Gotthard2 detector (GH2 50um or 25um) to produce the dark constant maps (Offset, Noise, and BadPixelsDark). All constants are evaluated per strip, per pulse, and per memory cell. The maps are calculated for each of the three gain stages, which are acquired in separate runs.

The three maps can be injected into the database and/or stored locally.

JUNGFRAU

Jungfrau Offline Correction

Offline Correction for Jungfrau Detector

Jungfrau Dark Image Characterization

Analyzes Jungfrau dark image data to deduce offset, noise and resulting bad pixel maps


LPD

LPD Offline Correction

!!! warning

TODO: Add description for this notebook

LPD Offset, Noise and Dead Pixels Characterization

This notebook re-characterizes dark images to derive offset, noise and bad-pixel maps. All three types of constants are evaluated per-pixel and per-memory cell.

The notebook will correctly handle veto settings, but note that if you veto cells you will not be able to use these offsets for runs with different veto settings - vetoed cells will have zero offset.

The evaluated calibration constants are stored locally and injected into the calibration database.

LPD Radial X-ray Gain Evaluation

Taking a proper flat field for LPD can be difficult, as air scattering will always be present. Additionally, the large detector mandates a large distance to the source in order to reduce 1/r effects.

Because of this, a radial evaluation method is used, which assumes that pixels at the same radial distance r should on average have the same signal S(r).
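A minimal sketch of that radial averaging, assuming a 2D flat-field image and a known beam centre (both the centre and the bin width are inputs the real notebook determines differently):

```python
import numpy as np

def radial_relative_gain(signal, centre, bin_width=10.0):
    """Relative gain estimate signal / <S(r)>, where <S(r)> is the mean signal
    of all pixels in the same radial bin around `centre` (in pixel units)."""
    yy, xx = np.indices(signal.shape)
    r = np.hypot(xx - centre[0], yy - centre[1])
    bins = (r / bin_width).astype(int)
    mean_s_of_r = (np.bincount(bins.ravel(), weights=signal.ravel())
                   / np.bincount(bins.ravel()))
    return signal / mean_s_of_r[bins]
```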

Injecting calibration constant data to the database

Reads HDF5 files of calibration constants and injects them into the database. Used for LPD.

LPD Gain Characterization (Charge Injection)

The following code characterizes the gain of the LPD detector from charge injection data, i.e. data with charge injected into the amplifiers, bypassing the sensor. The data needs to fulfil the following requirements:

  • Each file should represent one scan point for one module, defined by detector gain setting and charge injection setting
  • Settings of two neighboring gain ranges need to overlap in at least one point
  • 100 samples or more per pixel and memory cell should be present for each setting.

The data is then analyzed by calculating the per-pixel, per memory cell mean of the samples for each setting. These means are then normalized to the median peak position of all means of the first module. Overlapping settings in neighboring gain ranges are used to deduce the slopes of the different gains with respect to the high gain setting.
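The slope deduction from an overlapping setting can be sketched as a simple ratio of the per-pixel means recorded in the two neighboring gain ranges (offsets and the normalization to the first module are omitted here for brevity):

```python
import numpy as np

def relative_slope(mean_lower_gain, mean_higher_gain):
    """Ratio of the per-pixel, per-memory-cell means of the *same* charge
    injection setting seen in two neighboring gain ranges; this gives the
    slope of the lower range relative to the higher one (simplified sketch)."""
    return np.asarray(mean_lower_gain, float) / np.asarray(mean_higher_gain, float)
```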

Mine XGM Data and LPD Profile

!!! warning

TODO: Add description for this notebook

PNCCD

pnCCD Data Correction

The following notebook provides offset, common mode, relative gain, split events and pattern classification corrections of images acquired with the pnCCD. This notebook does not yet correct for charge transfer inefficiency.

pnCCD Dark Characterization

The following notebook provides dark image analysis of the pnCCD detector. Dark characterization evaluates offset and noise of the detector and gives information about bad pixels.

On the first iteration, the offset and noise maps are generated. An initial bad pixel map is obtained based on the offset and initial noise maps. Edge pixels are also added to the bad pixel map.

On the second iteration, the noise map is corrected for common mode. A second bad pixel map is generated based on the offset map and offset-and-common-mode-corrected noise map. Then, the hole in the center of the CCD is added to the second bad pixel map.

On the third and final iteration, the pixels that show up in the above-mentioned bad pixel map are masked. Possible events due to cosmic rays are found and masked. The data are then again offset and common mode corrected, and final noise and bad pixel maps are generated.

The resulting maps, together with the offset map, are saved as .h5 files to a local path for later use. These dark constants are not automatically sent to the database.

pnCCD Gain Characterization

The following notebook provides gain characterization for the pnCCD. It relies on data previously corrected using the Metadata Catalog web service interface or by running the Correct_pnCCD_NBC.ipynb notebook. Prior to running this notebook, the following corrections should have been applied by the web service or the aforementioned notebook:

  • Offset correction
  • Common mode correction
  • Split pattern classification


REMI

Transformation parameters

!!! warning

TODO: Add description for this notebook

TUTORIAL

Tutorial Calculation

A small example of how to adapt a notebook to run with the offline calibration package "pycalibration".

The first cell contains all parameters that should be exposed to the command line.
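As an assumption of what such a first cell could look like (the variable names here are illustrative, not necessarily the tutorial's actual parameters), each parameter is a plain assignment whose trailing comment serves as its command-line help text:

```python
# First notebook cell: every variable here becomes an xfel-calibrate option.
out_folder = "./tutorial_output"  # folder to place the output plots in
random_seed = 1                   # seed for the random number generator;
                                  # a list of seeds fans out into several jobs
runs = 1000                       # number of random samples to draw
```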

To run this notebook with different input parameters in parallel by submitting multiple SLURM jobs, for example for various random seeds, we can do the following:

xfel-calibrate TUTORIAL TEST --random-seed 1,2,3,4

or

xfel-calibrate TUTORIAL TEST --random-seed 1-5

Either command will produce 4 processing jobs.