Commit 6687e001 authored by Karim Ahmed

rename corr_data_source to corr_source_template

parent cd57d863
Merge request !949: [GOTTHARD2][Correct][Dark] Break assumptions on receiver names
%% Cell type:markdown id:bed7bd15-21d9-4735-82c1-c27c1a5e3346 tags:

# Gotthard2 Offline Correction

Author: European XFEL Detector Group, Version: 1.0

Offline correction for the Gotthard2 detector.

This notebook is able to correct 25um and 50um GH2 detectors using the same correction steps (a minimal sketch of these steps is given at the end of this cell):

- Convert 12-bit raw data into 10-bit, subtract the offset, then multiply with the gain constant.

| Correction | Constants | Boolean to enable/disable |
|------------|-----------|---------------------------|
| 12-bit to 10-bit | `LUTGotthard2` | |
| Offset | `OffsetGotthard2` | `offset_correction` |
| Relative gain | `RelativeGainGotthard2` + `BadPixelsFFGotthard2` | `gain_correction` |

Besides the corrected data, a mask is stored using the bad pixels constants of the same parameter conditions and time:

- `BadPixelsDarkGotthard2`
- `BadPixelsFFGotthard2`, if relative gain correction is requested.

The correction is done per sequence file. If none of the selected sequence files has any images to correct, the notebook fails. The same happens if the needed dark calibration constants could not be retrieved for any of the modules while `offset_correction` is True. If one of the gain constants could not be retrieved, `gain_correction` is switched to False and gain correction is disabled.

The `data` datasets stored in the RECEIVER source along with the corrected image (`adc`) and `mask` are:

- `gain`
- `bunchId`
- `memoryCell`
- `frameNumber`
- `timestamp`
- `trainId`
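
The per-strip arithmetic behind these steps is, conceptually: look the 12-bit raw value up in the LUT to get a 10-bit value, subtract the offset for that strip and gain stage, then multiply with the relative gain. Below is a minimal numpy sketch with toy, randomly generated constants; it is not the real `gotthard2algs` kernels, and the real constants additionally carry a memory-cell axis (shape `(1280, 2, 3)`) that is dropped here for brevity.

``` python
import numpy as np

# Toy dimensions: 1280 strips, 3 gain stages (memory-cell axis omitted).
n_strips, n_gains = 1280, 3
rng = np.random.default_rng(0)

lut = rng.integers(0, 1024, size=4096).astype(np.uint16)     # toy 12-bit -> 10-bit mapping
offset = rng.normal(10.0, 1.0, size=(n_strips, n_gains)).astype(np.float32)
rel_gain = rng.normal(1.0, 0.05, size=(n_strips, n_gains)).astype(np.float32)

raw = rng.integers(0, 4096, size=n_strips).astype(np.uint16)  # one frame of 12-bit raw data
gain_stage = rng.integers(0, n_gains, size=n_strips)          # per-strip gain stage (from data.gain)

strips = np.arange(n_strips)
corrected = lut[raw].astype(np.float32)                       # 12 bit -> 10 bit via LUT
corrected -= offset[strips, gain_stage]                       # offset subtraction
corrected *= rel_gain[strips, gain_stage]                     # relative gain correction
```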
%% Cell type:code id:570322ed-f611-4fd1-b2ec-c12c13d55843 tags:
``` python
in_folder = "/gpfs/exfel/exp/DETLAB/202330/p900326/raw" # the folder to read data from, required
out_folder = "/gpfs/exfel/data/scratch/ahmedk/test/gotthard2" # the folder to output to, required
metadata_folder = "" # Directory containing calibration_metadata.yml when run by xfel-calibrate
run = 20 # run to process, required
sequences = [-1] # sequences to correct, set to [-1] for all, range allowed
sequences_per_node = 1 # number of sequence files per node if notebook executed through xfel-calibrate, set to 0 to not run SLURM parallel

# Parameters used to access raw data.
karabo_id = "DETLAB_25UM_GH2" # karabo prefix of Gotthard-II devices
karabo_da = [""] # data aggregators
receiver_template = "RECEIVER{}" # receiver template used to read INSTRUMENT keys.
receiver_affixes = [""] # affixes to format into the receiver template to load the correct receiver names from the data.
control_template = "CONTROL" # control template used to read CONTROL keys.
ctrl_source_template = "{}/DET/{}" # template for control source name (filled with karabo_id_control)
karabo_id_control = "" # Control karabo ID. Set to empty string to use the karabo-id
corr_source_template = "{}/CORR/{}:daqOutput" # Correction data source template. Filled with karabo_id and correction receiver.
corr_receiver = "" # The receiver name of the corrected data. Leave empty to use the same receiver name for the 50um GH2, or the first (Master) receiver for the 25um GH2.

# Parameters for calibration database.
cal_db_interface = "tcp://max-exfl-cal001:8016#8025" # the database interface to use.
cal_db_timeout = 180000 # timeout on caldb requests.
creation_time = "" # To overwrite the measured creation_time. Required format: YYYY-MM-DD HH:MM:SS, e.g. "2022-06-28 13:00:00"

# Parameters affecting corrected data.
constants_file = "" # Use constants in the given constant file path, e.g. /gpfs/exfel/data/scratch/ahmedk/dont_remove/gotthard2/constants/calibration_constants_GH2.h5
offset_correction = True # apply offset correction. This can be disabled to only apply LUT, or to apply LUT and gain correction for non-linear differential results.
gain_correction = True # apply gain correction.
chunks_data = 1 # HDF chunk size for pixel data in number of frames.

# Parameter conditions.
bias_voltage = -1 # Detector bias voltage, set to -1 to use value in raw file.
exposure_time = -1. # Detector exposure time, set to -1 to use value in raw file.
exposure_period = -1. # Detector exposure period, set to -1 to use value in raw file.
acquisition_rate = -1. # Detector acquisition rate (1.1/4.5), set to -1 to use value in raw file.
single_photon = -1 # Detector single photon mode (High/Low CDS), set to -1 to use value in raw file.

# Parameters for plotting
skip_plots = False # exit after writing corrected files
pulse_idx_preview = 3 # pulse index to preview. The following even/odd pulse index is used for preview. # TODO: update to pulseId preview.


def balance_sequences(in_folder, run, sequences, sequences_per_node, karabo_da):
    from xfel_calibrate.calibrate import balance_sequences as bs
    return bs(in_folder, run, sequences, sequences_per_node, karabo_da)
```
%% Cell type:code id:6e9730d8-3908-41d7-abe2-d78e046d5de2 tags:
``` python
import warnings
from logging import warning

import h5py
import pasha as psh
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
from extra_data import RunDirectory, H5File
from pathlib import Path

import cal_tools.restful_config as rest_cfg
from cal_tools.calcat_interface import CalCatError, GOTTHARD2_CalibrationData
from cal_tools.files import DataFile
from cal_tools.gotthard2 import gotthard2algs, gotthard2lib
from cal_tools.step_timing import StepTimer
from cal_tools.tools import (
    calcat_creation_time,
    write_constants_fragment,
    map_seq_files,
)
from XFELDetAna.plotting.heatmap import heatmapPlot

warnings.filterwarnings('ignore')

%matplotlib inline
```
%% Cell type:code id:d7c02c48-4429-42ea-a42e-de45366d7fa3 tags:
``` python
in_folder = Path(in_folder)
run_folder = in_folder / f"r{run:04d}"
out_folder = Path(out_folder)
out_folder.mkdir(parents=True, exist_ok=True)

if not karabo_id_control:
    karabo_id_control = karabo_id

ctrl_src = ctrl_source_template.format(karabo_id_control, control_template)

# Run's creation time:
creation_time = calcat_creation_time(in_folder, run, creation_time)
print(f"Creation time: {creation_time}")
```
%% Cell type:code id:f9a8d1eb-ce6a-4ed0-abf4-4a6029734672 tags:
``` python
step_timer = StepTimer()
```
%% Cell type:code id:892172d8 tags:
``` python
run_dc = RunDirectory(run_folder)

# Read slow data
g2ctrl = gotthard2lib.Gotthard2Ctrl(run_dc=run_dc, ctrl_src=ctrl_src)
if bias_voltage == -1:
    bias_voltage = g2ctrl.get_bias_voltage()
if exposure_time == -1:
    exposure_time = g2ctrl.get_exposure_time()
if exposure_period == -1:
    exposure_period = g2ctrl.get_exposure_period()
if acquisition_rate == -1:
    acquisition_rate = g2ctrl.get_acquisition_rate()
if single_photon == -1:
    single_photon = g2ctrl.get_single_photon()

gh2_detector = g2ctrl.get_det_type()

print("Bias Voltage:", bias_voltage)
print("Exposure Time:", exposure_time)
print("Exposure Period:", exposure_period)
print("Acquisition Rate:", acquisition_rate)
print("Single Photon:", single_photon)
print(f"Processing {gh2_detector} Gotthard2.")
```
%% Cell type:code id:21a8953a-8c76-475e-8f4f-b201cc25c159 tags:
``` python
# GH2 calibration data object.
g2_cal = GOTTHARD2_CalibrationData(
    detector_name=karabo_id,
    sensor_bias_voltage=bias_voltage,
    exposure_time=exposure_time,
    exposure_period=exposure_period,
    acquisition_rate=acquisition_rate,
    single_photon=single_photon,
    event_at=creation_time,
    client=rest_cfg.calibration_client(),
)

da_to_pdu = None
# Keep as long as it is essential to correct
# RAW data (FXE p003225) before the data mapping was added to CALCAT.
try:  # in case local constants are used with old RAW data. This can be removed in the future.
    da_to_pdu = g2_cal.mod_to_pdu
except CalCatError as e:
    print(e)
    db_modules = [None] * len(karabo_da)

if da_to_pdu:
    if karabo_da == [""]:
        karabo_da = sorted(da_to_pdu.keys())
    else:
        # Exclude non selected DA from processing.
        karabo_da = [da for da in karabo_da if da in da_to_pdu]
    db_modules = [da_to_pdu[da] for da in karabo_da]

print(f"Process modules: {db_modules} for run {run}")

# Create the correction receiver name.
receiver_names = [f"*{receiver_template.format(x)}*" for x in receiver_affixes]
data_sources = list(run_dc.select(receiver_names).all_sources)

if not corr_receiver:
    # This part assumes this data_source structure: '{karabo_id}/DET/{receiver_name}:{output_channel}'
    if gh2_detector == "25um":  # For 25um use virtual karabo_das for CALCAT data mapping.
        corr_receiver = data_sources[0].split("/")[-1].split(":")[0][:-2]
    else:
        corr_receiver = data_sources[0].split("/")[-1].split(":")[0]

print(f"Using {corr_receiver} as a receiver name for the corrected data.")
```
%% Cell type:markdown id:8c852392-bb19-4c40-b2ce-3b787538a92d tags:

### Retrieving calibration constants

%% Cell type:code id:5717d722 tags:
``` python
# Used for old FXE (p003225) runs before adding Gotthard2 to CALCAT
const_data = dict()

if constants_file:
    for mod in karabo_da:
        const_data[mod] = dict()
        # load constants temporarily using defined local paths.
        with h5py.File(constants_file, "r") as cfile:
            const_data[mod]["LUTGotthard2"] = cfile["LUT"][()]
            const_data[mod]["OffsetGotthard2"] = cfile["offset_map"][()].astype(np.float32)
            const_data[mod]["RelativeGainGotthard2"] = cfile["gain_map"][()].astype(np.float32)
            const_data[mod]["Mask"] = cfile["bpix_ff"][()].astype(np.uint32)
else:
    constant_names = ["LUTGotthard2", "OffsetGotthard2", "BadPixelsDarkGotthard2"]
    if gain_correction:
        constant_names += ["RelativeGainGotthard2", "BadPixelsFFGotthard2"]

    g2_metadata = g2_cal.metadata(calibrations=constant_names)
    # Display retrieved calibration constants timestamps
    g2_cal.display_markdown_retrieved_constants(metadata=g2_metadata)

    # Validate the constants availability and raise/warn correspondingly.
    for mod, calibrations in g2_metadata.items():
        dark_constants = {"LUTGotthard2"}
        if offset_correction:
            dark_constants |= {"OffsetGotthard2", "BadPixelsDarkGotthard2"}

        missing_dark_constants = dark_constants - set(calibrations)
        if missing_dark_constants:
            karabo_da.remove(mod)
            warning(f"Dark constants {missing_dark_constants} are not available to correct {mod}.")  # noqa

        missing_gain_constants = {
            "BadPixelsFFGotthard2", "RelativeGainGotthard2"} - set(calibrations)
        if gain_correction and missing_gain_constants:
            warning(f"Gain constants {missing_gain_constants} are not retrieved for mod {mod}.")

    if not karabo_da:
        raise ValueError("Dark constants are not available for all modules.")
```
%% Cell type:code id:ac1cdec5 tags:
``` python
# Record constant details in YAML metadata.
write_constants_fragment(
    out_folder=(metadata_folder or out_folder),
    det_metadata=g2_metadata,
    caldb_root=g2_cal.caldb_root)

# Load constants data for all constants.
const_data = g2_cal.ndarray_map(metadata=g2_metadata)

# Prepare constant arrays.
if not constants_file:
    for mod in karabo_da:
        # Create the mask array.
        bpix = const_data[mod].get("BadPixelsDarkGotthard2")
        if bpix is None:
            bpix = np.zeros((1280, 2, 3), dtype=np.uint32)
        if const_data[mod].get("BadPixelsFFGotthard2") is not None:
            bpix |= const_data[mod]["BadPixelsFFGotthard2"]
        const_data[mod]["Mask"] = bpix

        # Prepare empty arrays for missing constants.
        if const_data[mod].get("OffsetGotthard2") is None:
            const_data[mod]["OffsetGotthard2"] = np.zeros(
                (1280, 2, 3), dtype=np.float32)
        if const_data[mod].get("RelativeGainGotthard2") is None:
            const_data[mod]["RelativeGainGotthard2"] = np.ones(
                (1280, 2, 3), dtype=np.float32)
        const_data[mod]["RelativeGainGotthard2"] = const_data[mod]["RelativeGainGotthard2"].astype(  # noqa
            np.float32, copy=False)  # Old gain constants are not float32.
```
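
The `Mask` array prepared above combines the dark and flat-field bad-pixel constants with a bitwise OR, so a pixel flagged by either constant stays flagged in the stored `data.mask`. A minimal sketch with toy flag arrays and hypothetical flag names (the real maps have shape `(1280, 2, 3)`):

``` python
import numpy as np

# Hypothetical bit flags for two bad-pixel sources.
OFFSET_OUT_OF_THRESHOLD = 1 << 0
GAIN_DEVIATION = 1 << 1

bpix_dark = np.array([OFFSET_OUT_OF_THRESHOLD, 0, 0], dtype=np.uint32)
bpix_ff = np.array([0, GAIN_DEVIATION, 0], dtype=np.uint32)

mask = bpix_dark.copy()
mask |= bpix_ff  # bitwise OR keeps the flags from both constants
print(mask)      # -> [1 2 0]
```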
%% Cell type:code id:2c7dd0bb tags:
``` python
file_da = list({kda.split('/')[0] for kda in karabo_da})
mapped_files, total_files = map_seq_files(
    run_folder,
    file_da,
    sequences,
)

# This notebook doesn't account for processing more
# than one file data aggregator.
seq_files = mapped_files[file_da[0]]

if not len(seq_files):
    raise IndexError(
        "No sequence files available to correct for the selected sequences and karabo_da.")

print(f"Processing a total of {total_files} sequence files")
```
%% Cell type:code id:23fcf7f4-351a-4df7-8829-d8497d94fecc tags:
``` python
context = psh.ProcessContext(num_workers=23)
```
%% Cell type:code id:daecd662-26d2-4cb8-aa70-383a579cf9f9 tags:
``` python
def correct_train(wid, index, d):
    g = gain[index]
    gotthard2algs.convert_to_10bit(d, const_data[mod]["LUTGotthard2"], data_corr[index, ...])
    gotthard2algs.correct_train(
        data_corr[index, ...],
        mask[index, ...],
        g,
        const_data[mod]["OffsetGotthard2"].astype(np.float32),  # PSI map is in f8
        const_data[mod]["RelativeGainGotthard2"],
        const_data[mod]["Mask"],
        apply_offset=offset_correction,
        apply_gain=gain_correction,
    )
```
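
`correct_train` is used as a pasha kernel: in the next cell, `context.map(correct_train, data)` calls it once per train with the worker id, the index along the first axis and that train's data, while the outputs go into shared arrays created with `context.alloc`. A minimal, self-contained sketch of the same pattern with a toy kernel (doubling values instead of correcting them):

``` python
import numpy as np
import pasha as psh

context = psh.ProcessContext(num_workers=4)

data = np.arange(12, dtype=np.float32).reshape(6, 2)         # toy "trains x strips" input
result = context.alloc(shape=data.shape, dtype=np.float32)   # shared output array

def kernel(wid, index, d):
    # wid: worker id, index: position along the first axis, d: data[index]
    result[index] = d * 2.0

context.map(kernel, data)
print(result[:2])  # -> [[0. 2.] [4. 6.]]
```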
%% Cell type:code id:f88c1aa6-a735-4b72-adce-b30162f5daea tags:
``` python
corr_data_source = corr_source_template.format(karabo_id, corr_receiver)

for raw_file in seq_files:
    out_file = out_folder / raw_file.name.replace("RAW", "CORR")
    # Select module INSTRUMENT sources and deselect empty trains.
    dc = H5File(raw_file).select(data_sources, require_all=True)
    n_trains = len(dc.train_ids)

    # Initialize GH2 data and gain arrays to store in corrected files.
    if gh2_detector == "25um":
        data_stored = np.zeros((dc[data_sources[0], "data.adc"].shape[:2] + (1280 * 2,)), dtype=np.float32)
        gain_stored = np.zeros((dc[data_sources[0], "data.adc"].shape[:2] + (1280 * 2,)), dtype=np.uint8)
    else:
        data_stored = None
        gain_stored = None

    for i, (src, mod) in enumerate(zip(data_sources, karabo_da)):
        step_timer.start()
        print(f"Correcting {src} for {raw_file}")

        data = dc[src, "data.adc"].ndarray()
        gain = dc[src, "data.gain"].ndarray()
        step_timer.done_step("Preparing raw data")
        dshape = data.shape

        step_timer.start()
        # Allocate shared arrays.
        data_corr = context.alloc(shape=dshape, dtype=np.float32)
        mask = context.alloc(shape=dshape, dtype=np.uint32)
        context.map(correct_train, data)
        step_timer.done_step("Correcting one receiver in one sequence file")

        step_timer.start()
        # Provided PSI gain map has 0 values. Set inf values to nan.
        # TODO: This can maybe be removed after creating XFEL gain maps.
        data_corr[np.isinf(data_corr)] = np.nan

        # Create CORR files and add corrected data sections.
        image_counts = dc[src, "data.adc"].data_counts(labelled=False)

        if gh2_detector == "25um":
            data_stored[..., i::2] = data_corr.copy()
            gain_stored[..., i::2] = gain.copy()
        else:  # "50um"
            data_stored = data_corr
            gain_stored = gain

    with DataFile(out_file, "w") as ofile:
        # Create INDEX datasets.
        ofile.create_index(dc.train_ids, from_file=dc.files[0])
        ofile.create_metadata(
            like=dc,
            sequence=dc.run_metadata()["sequenceNumber"],
            instrument_channels=(f"{corr_data_source}/data",)
        )
        # Create Instrument section to later add corrected datasets.
        outp_source = ofile.create_instrument_source(corr_data_source)

        # Create count/first datasets at INDEX source.
        outp_source.create_index(data=image_counts)

        # Store uncorrected trainId in the corrected file.
        outp_source.create_key(
            f"data.trainId", data=dc.train_ids,
            chunks=min(50, len(dc.train_ids))
        )

        # Create datasets with the available corrected data
        for field_name, field_data in {
            "adc": data_stored,
            "gain": gain_stored,
        }.items():
            outp_source.create_key(
                f"data.{field_name}", data=field_data,
                chunks=((chunks_data,) + data_corr.shape[1:])
            )

        # For GH2 25um, the data of the second receiver is
        # stored in the corrected file.
        for field in ["bunchId", "memoryCell", "frameNumber", "timestamp"]:
            outp_source.create_key(
                f"data.{field}", data=dc[src, f"data.{field}"].ndarray(),
                chunks=(chunks_data, data_corr.shape[1])
            )

        outp_source.create_compressed_key(f"data.mask", data=mask)
    step_timer.done_step("Storing data")
```
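
For the 25um detector, the two receivers each contribute 1280 strips that are interleaved into the 2560-strip output, which is what the `data_stored[..., i::2] = ...` slicing above does. A minimal sketch with toy arrays (the real data carries an additional train axis):

``` python
import numpy as np

n_pulses, n_strips = 4, 1280
rx0 = np.full((n_pulses, n_strips), 0.0, dtype=np.float32)  # corrected data of receiver 0
rx1 = np.full((n_pulses, n_strips), 1.0, dtype=np.float32)  # corrected data of receiver 1

combined = np.zeros((n_pulses, n_strips * 2), dtype=np.float32)
for i, rx in enumerate([rx0, rx1]):
    combined[..., i::2] = rx  # receiver i fills every second strip, starting at strip i

print(combined[0, :6])  # -> [0. 1. 0. 1. 0. 1.]
```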
%% Cell type:code id:94b8e4d2-9f8c-4c23-a509-39238dd8435c tags:
``` python
print(f"Total processing time {step_timer.timespan():.01f} s")
step_timer.print_summary()
```
%% Cell type:code id:0ccc7f7e-2a3f-4ac0-b854-7d505410d2fd tags:
``` python
if skip_plots:
    print("Skipping plots")
    import sys
    sys.exit(0)
```
%% Cell type:code id:ff203f77-3811-46f3-bf7d-226d2dcab13f tags:
``` python
mod_dcs = {}
first_seq_raw = seq_files[0]
first_seq_corr = out_folder / first_seq_raw.name.replace("RAW", "CORR")
mod_dcs[corr_data_source] = {}
with H5File(first_seq_corr) as out_dc:
    tid, mod_dcs[corr_data_source]["train_corr_data"] = next(
        out_dc[corr_data_source, "data.adc"].trains()
    )

if gh2_detector == "25um":
    mod_dcs[corr_data_source]["train_raw_data"] = np.zeros((data_corr.shape[1], 1280 * 2), dtype=np.float32)
    mod_dcs[corr_data_source]["train_raw_gain"] = np.zeros((data_corr.shape[1], 1280 * 2), dtype=np.uint8)

for i, src in enumerate(data_sources):
    with H5File(first_seq_raw) as in_dc:
        train_dict = in_dc.train_from_id(tid)[1][src]
        if gh2_detector == "25um":
            mod_dcs[corr_data_source]["train_raw_data"][..., i::2] = train_dict["data.adc"]
            mod_dcs[corr_data_source]["train_raw_gain"][..., i::2] = train_dict["data.gain"]
        else:
            mod_dcs[corr_data_source]["train_raw_data"] = train_dict["data.adc"]
            mod_dcs[corr_data_source]["train_raw_gain"] = train_dict["data.gain"]
```
%% Cell type:code id:1b379438-eb1d-42b2-ac83-eb8cf88c46db tags:
``` python
display(Markdown("### Mean RAW and CORRECTED across pulses for one train:"))
display(Markdown(f"Train: {tid}"))

if gh2_detector == "50um":
    title = f"{{}} data for {karabo_da} ({db_modules})"
else:
    title = f"Interleaved {{}} data for {karabo_da} ({db_modules})"

step_timer.start()

fig, ax = plt.subplots(figsize=(15, 15))
raw_data = mod_dcs[corr_data_source]["train_raw_data"]
im = ax.plot(np.mean(raw_data, axis=0))
ax.set_title(title.format("RAW"), fontsize=20)
ax.set_xlabel("Strip #", size=20)
ax.set_ylabel("12-bit ADC output", size=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
pass

fig, ax = plt.subplots(figsize=(15, 15))
corr_data = mod_dcs[corr_data_source]["train_corr_data"]
im = ax.plot(np.mean(corr_data, axis=0))
ax.set_title(title.format("CORRECTED"), fontsize=20)
ax.set_xlabel("Strip #", size=20)
ax.set_ylabel("10-bit keV output", size=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
pass
step_timer.done_step("Plotting mean data")
```
%% Cell type:code id:58a6a276 tags:
``` python
display(Markdown(f"### RAW and CORRECTED strips across pulses for train {tid}"))

step_timer.start()
for plt_data, dname in zip(
    ["train_raw_data", "train_corr_data"], ["RAW", "CORRECTED"]
):
    fig, ax = plt.subplots(figsize=(15, 15))
    plt.rcParams.update({"font.size": 20})

    heatmapPlot(
        mod_dcs[corr_data_source][plt_data],
        y_label="Pulses",
        x_label="Strips",
        title=title.format(dname),
        use_axis=ax,
        cb_pad=0.8,
    )
    pass
step_timer.done_step("Plotting RAW and CORRECTED data for one train")
```
%% Cell type:code id:cd8f5e08-fcee-4bff-ba63-6452b3d892a2 tags:
``` python
# Validate given "pulse_idx_preview"
if pulse_idx_preview + 1 > data.shape[1]:
    print(
        f"WARNING: selected pulse_idx_preview {pulse_idx_preview} is not available in data."
        " Previewing 1st pulse."
    )
    pulse_idx_preview = 1

if data.shape[1] == 1:
    odd_pulse = 1
    even_pulse = None
else:
    odd_pulse = pulse_idx_preview if pulse_idx_preview % 2 else pulse_idx_preview + 1
    even_pulse = (
        pulse_idx_preview if not (pulse_idx_preview % 2) else pulse_idx_preview + 1
    )

if pulse_idx_preview + 1 > data.shape[1]:
    pulse_idx_preview = 1
    if data.shape[1] > 1:
        pulse_idx_preview = 2
```
%% Cell type:code id:e5f0d4d8-e32c-4f2c-8469-4ebbfd3f644c tags:
``` python
display(Markdown("### RAW and CORRECTED even/odd pulses for one train:"))
display(Markdown(f"Train: {tid}"))

fig, ax = plt.subplots(figsize=(15, 15))
raw_data = mod_dcs[corr_data_source]["train_raw_data"]
corr_data = mod_dcs[corr_data_source]["train_corr_data"]

ax.plot(raw_data[odd_pulse], label=f"Odd Pulse {odd_pulse}")
if even_pulse:
    ax.plot(raw_data[even_pulse], label=f"Even Pulse {even_pulse}")
ax.set_title(title.format("RAW"), fontsize=20)
ax.set_xlabel("Strip #", size=20)
ax.set_ylabel("12-bit ADC RAW", size=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax.legend()
pass

fig, ax = plt.subplots(figsize=(15, 15))
ax.plot(corr_data[odd_pulse], label=f"Odd Pulse {odd_pulse}")
if even_pulse:
    ax.plot(corr_data[even_pulse], label=f"Even Pulse {even_pulse}")
ax.set_title(title.format("CORRECTED"), fontsize=20)
ax.set_xlabel("Strip #", size=20)
ax.set_ylabel("10-bit keV CORRECTED", size=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
ax.legend()
pass
step_timer.done_step("Plotting RAW and CORRECTED odd/even pulses.")
```