Compare revisions

Target project: calibration/pycalibration

Showing 335 additions and 333 deletions
......@@ -35,4 +35,3 @@ done
rm *.bak
#cd .. rm api/* sphinx-apidoc -o ./api/ -E ../../iCalibrationDB/
......@@ -3,4 +3,4 @@
.. role:: header-failed
.. role:: passed
.. role:: skipped
.. role:: failed
......@@ -23,7 +23,7 @@ run can be assigned to that commit::
To run all tests, navigate to the test directory and execute::
python -m unittest discover
This will usually entail executing a notebook under test via SLURM
first, then checking its output against the last committed artefacts
of that test type.
......@@ -32,7 +32,7 @@ If individual tests are run, e.g. for debugging, additional options
exist to skip tests, or notebook execution::
python test_XXX.py --help
where `test_XXX.py` is the test name, will give you a list of options
available for that test.
......@@ -44,7 +44,7 @@ generate new artefacts.
Running tests will generate entries for test reports in the
artefacts directory under the most recent commit.
Reviewers should check that such updates are present in the
list of changed files.
......@@ -64,7 +64,7 @@ Contrary to running tests alone, new artefacts need to be generated
for each affected test individually::
python test_XXX.py --generate
replacing `test_XXX.py` with the test you'd like to run. This
will execute the notebook, create artefact entries in the artefact
dir, and after that will check for consistency by executing the test against
......@@ -76,15 +76,15 @@ commit the new artefacts and create a merge request for your branch::
git add tests/artefacts/
git commit -m "Added new artefacts for changes related to baseline shifts"
Please also add comments in the MR description on why artefacts have
changed.
.. note::
Reviewers should always evaluate if the changes in test artefacts are
appropriate, intended and acceptable.
Test Reports
++++++++++++
......@@ -114,4 +114,3 @@ Repositories of calibration constants used in testing can be found at::
/gpfs/exfel/exp/XMPL/201750/p700001/usr
.. include:: test_results.rst
......@@ -17,7 +17,7 @@ The Tutorial consist of this documentation and two very simple notebooks:
calibration tool-chain.
To have a look at those notebooks start from a shell with the karabo environment::
jupyter-notebook
This will open a Jupyter session in your browser, where you can then open the notebooks in the folder notebooks/Tutorial. If, in addition, you also start the ipcluster in another shell, as instructed in the calversion.ipynb notebook::
......@@ -50,14 +50,14 @@ to install the necessary packages and setup the environment:
./karabo-2.2.4-Release-CentOS-7-x86_64.sh
source karabo/activate
3. Get the package pycalibration which contains the offline calibration tool-chain::
git clone https://git.xfel.eu/gitlab/detectors/pycalibration.git
4. Install the necessary requirements and the package itself::
cd pycalibration
pip install -r requirements.txt .
......
......@@ -9,31 +9,31 @@ when developing new offline calibration tools.
Fresh Start
-----------
If you are starting a blank notebook from scratch you should first
think about a few preliminary considerations:
* Will the notebook perform a headless task, or will it also be
an important interface for evaluating the results in the form of a
report?
* Do you need to run concurrently? Is concurrency handled internally,
e.g. by use of ipcluster, or also on a host level, using cluster
computing via SLURM?
In case you plan on using the notebook as a report tool, you should make
sure to provide sufficient guidance and textual details using e.g. markdown
cells in the notebook. You should also structure it into appropriate
subsections.
If you plan on running concurrently on the cluster, identify which variable
should be mapped to concurrent runs. For autofilling it, an integer list is
needed.
Once you've clarified the above points, you should create a new notebook,
either in an existing detector folder or, for a detector that is not yet
integrated, into a new folder with the detector's name. Give it a suffix
`_NBC` to denote that it is enabled for the tool chain.
You should then start writing your code following the guidelines
below.
......@@ -41,10 +41,10 @@ From Existing Notebook
----------------------
Copy your existing notebook into the appropriate detector directory,
or create a new one if the detector does not exist yet. Give the copy
a suffix `_NBC` to denote that it is enabled for the tool chain.
You should then start restructuring your code following the guidelines
below.
Title and Author Information
......@@ -55,11 +55,11 @@ author and version. These should be given in a leading markdown cell in
the form::
# My Fancy Calculation #
Author: Jane Doe, Version 0.1
A description of the notebook.
Information in this format allows automatic parsing of the author and version.
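As a sketch of the kind of automatic parsing this enables (illustrative only; the toolkit's actual parser may differ), the author and version can be extracted from that leading cell with a simple regular expression::

    import re

    first_cell = "# My Fancy Calculation #\n\nAuthor: Jane Doe, Version 0.1\n\nA description of the notebook.\n"
    match = re.search(r"Author:\s*(?P<author>.+?),\s*Version\s*(?P<version>\S+)", first_cell)
    if match:
        print(match.group("author"), match.group("version"))  # -> Jane Doe 0.1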
......@@ -91,7 +91,7 @@ required::
sequences = [0,1,2,3,4] # sequences files to use, range allowed
cluster_profile = "noDB" # The ipcluster profile to use
local_output = False # output constants locally
Here, `in_folder` and `out_folder` are required string values. Values for required parameters have to be given when executing from the command line. This means that any defaults given in the first cell of the code are ignored (they are only used to derive the type of the parameter). `modules` is a list, which from the command line could also be assigned using a range expression, e.g. `5-10,12,13,18-21`, which would translate to `5,6,7,8,9,12,13,18,19,20`. It is also a required parameter. The parameter `local_output` is a Boolean. The corresponding argument given in the command line will change this parameter from `False` to `True`. There is no way to change this parameter from `True` to `False` from the command line.
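The range expansion described above can be illustrated with a small sketch (illustrative only, not the toolkit's actual parser; the upper bound of a range is treated as exclusive, as in the example)::

    def expand_ranges(expr):
        """Expand e.g. "5-10,12,13,18-21" into an integer list."""
        values = []
        for part in expr.split(","):
            if "-" in part:
                start, stop = (int(x) for x in part.split("-"))
                values.extend(range(start, stop))  # "5-10" -> 5, 6, 7, 8, 9
            else:
                values.append(int(part))
        return values

    print(expand_ranges("5-10,12,13,18-21"))  # [5, 6, 7, 8, 9, 12, 13, 18, 19, 20]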
The `cluster_profile` parameter is a bit special, in that the tool kit expects exactly this
......@@ -124,10 +124,10 @@ to the following parameters being exposed via the command line::
--no-cluster-job Do not run as a cluster job
--report-to str Filename (and optionally path) for output report
--modules str [str ...]
modules to work on, required, range allowed.
Default: [0]
--sequences str [str ...]
sequences files to use, range allowed.
Default: [0, 1, 2, 3, 4]
--cluster-profile str
The ipcluster profile to use. Default: noDB2
......@@ -135,7 +135,7 @@ to the following parameters being exposed via the command line::
--local-output output constants locally. Default: False
...
.. note::
......@@ -184,9 +184,9 @@ wanting to run the tool will need to install these requirements as well. Thus,
* keep runtimes and library requirements in mind. A library doing its own parallelism either
needs to programmatically be able to set this up, or automatically do so. If you need to
start something from the command line first, things might be tricky as you will likely
need to run this via `Popen` commands with appropriate environment variables.
Writing out data
~~~~~~~~~~~~~~~~
......@@ -197,7 +197,7 @@ possibly done later on in a notebook.
Also consider using HDF5 via h5py_ as your output format. If you correct or calibrate
input data which adheres to the XFEL naming convention, you should maintain the convention
in your output data. You should not touch any data that you do not actively work on and
should assure that the `INDEX` and identifier entries are synchronized with respect to
your output data. E.g. if you remove pulses from a train, the `INDEX/.../count` section
should reflect this.
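As a hedged sketch of what keeping `INDEX` synchronized means in practice (all names and shapes below are illustrative only), dropping pulses from trains implies recomputing the per-train `count` and `first` entries for the output file::

    import numpy as np

    counts = np.array([30, 30, 30])  # pulses per train in the input
    kept = np.array([28, 30, 25])    # pulses surviving your selection, per train

    new_counts = np.minimum(counts, kept)
    new_first = np.zeros_like(new_counts)
    new_first[1:] = np.cumsum(new_counts)[:-1]  # first output index of each train
    print(new_counts, new_first)                # [28 30 25] [ 0 28 58]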
......@@ -205,8 +205,8 @@ Finally, XFEL RAW data can contain filler data from the DAQ. One possible way of
this data is the following::
datapath = "/INSTRUMENT/FXE_DET_LPD1M-1/DET/{}CH0:xtdf/image/cellId".format(channel)
count = np.squeeze(infile[datapath])
first = np.squeeze(infile[datapath])
if np.count_nonzero(count != 0) == 0: # filler data has counts of 0
print("File {} has no valid counts".format(infile))
......@@ -215,14 +215,14 @@ this data is the following::
idxtrains = np.squeeze(infile["/INDEX/trainId"])
medianTrain = np.nanmedian(idxtrains) # protect against freak train ids
valid &= (idxtrains > medianTrain - 1e4) & (idxtrains < medianTrain + 1e4)
# index ranges in which non-filler data exists
last_index = int(first[valid][-1]+count[valid][-1])
first_index = int(first[valid][0])
# access these indices
cellIds = np.squeeze(np.array(infile[datapath][first_index:last_index, ...]))
Plotting
~~~~~~~~
......@@ -243,10 +243,10 @@ Calibration Database Interaction
--------------------------------
Tasks which require calibration constants or produce such should do this by interacting with
the European XFEL calibration database.
In terms of development workflow it is usually easier to work with file-based I/O first and
only switch over to the database after the algorithmic part of the notebook has matured.
Reasons for this include:
* for developing against the database new constants will have to be integrated therein first
......@@ -263,7 +263,7 @@ Testing
The most important test is that your notebook completes flawlessly outside any special
tool chain feature. After all, the tool chain will only replace parameters, and then
launch a concurrent job and generate a report out of the notebook. If it fails to run in the
normal Jupyter notebook environment, it will certainly fail in the tool chain environment.
Once you are satisfied with your current state of initial development, you can add it
......@@ -273,7 +273,7 @@ Any changes you now make in the notebook will be automatically propagated to the
Specifically, you should verify that all arguments are parsed correctly, e.g. by calling::
xfel-calibrate DETECTOR NOTEBOOK_TYPE --help
From then on, check if parallel SLURM jobs are executed correctly and if a report
is generated at the end.
......@@ -298,4 +298,4 @@ documentation.
.. _matplotlib: https://matplotlib.org/
.. _numpy: http://www.numpy.org/
.. _h5py: https://www.h5py.org/
.. _iCalibrationDB: https://in.xfel.eu/readthedocs/docs/icalibrationdb/en/latest/
......@@ -860,4 +860,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
......@@ -1592,4 +1592,4 @@
},
"nbformat": 4,
"nbformat_minor": 1
}
......@@ -607,4 +607,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
%% Cell type:markdown id: tags:
# DSSC Characterize Dark Images #
Author: S. Hauf, Version: 0.1
The following code analyzes a set of dark images taken with the DSSC detector to deduce detector offsets and noise. Data for the detector are presented in a single run, since the DSSC does not acquire multiple gain stages.
The notebook explicitly does what pyDetLib provides in its offset calculation method for streaming data.
%% Cell type:code id: tags:
``` python
cluster_profile = "noDB" # The ipcluster profile to use
in_folder = "/gpfs/exfel/exp/SCS/202031/p900170/raw" # path to input data, required
out_folder = "/gpfs/exfel/data/scratch/samartse/data/DSSC" # path to output to, required
sequences = [0] # sequence files to evaluate.
modules = [-1] # modules to run for
run = 223 #run number in which data was recorded, required
karabo_id = "SCS_DET_DSSC1M-1" # karabo karabo_id
karabo_da = ['-1'] # a list of data aggregators names, Default [-1] for selecting all data aggregators
receiver_id = "{}CH0" # inset for receiver devices
path_template = 'RAW-R{:04d}-{}-S{:05d}.h5' # the template to use to access data
h5path = '/INSTRUMENT/{}/DET/{}:xtdf/image' # path in the HDF5 file to images
h5path_idx = '/INDEX/{}/DET/{}:xtdf/image' # path in the HDF5 file to images
slow_data_pattern = 'RAW-R{}-DA{}-S00000.h5'
use_dir_creation_date = True # use the dir creation date for determining the creation time
cal_db_interface = "tcp://max-exfl016:8020" # the database interface to use
cal_db_timeout = 3000000 # timeout on caldb requests
local_output = True # output constants locally
db_output = False # output constants to database
mem_cells = 0 # number of memory cells used, set to 0 to automatically infer
bias_voltage = 100 # detector bias voltage
rawversion = 2 # RAW file format version
thresholds_offset_sigma = 3. # thresholds in terms of n sigma noise for offset deduced bad pixels
thresholds_offset_hard = [4, 125] # thresholds in absolute ADU terms for offset deduced bad pixels,
# minimal threshold at 4 is set at hardware level, DSSC full range 0-511
thresholds_noise_sigma = 3. # thresholds in terms of n sigma noise for noise deduced bad pixels
thresholds_noise_hard = [0.001, 3] # thresholds in absolute ADU terms for noise deduced bad pixels
offset_numpy_algorithm = "mean"
instrument = "SCS" # the instrument
high_res_badpix_3d = False # set this to True if you need high-resolution 3d bad pixel plots. Runtime: ~ 1h
slow_data_aggregators = [1,2,3,4] # quadrant/aggregator
operation_mode = '' # Detector operation mode, optional
```
%% Cell type:code id: tags:
``` python
# imports and things that do not usually need to be changed
from datetime import datetime
import os
import warnings
warnings.filterwarnings('ignore')
from collections import OrderedDict
import h5py
from ipyparallel import Client
from IPython.display import display, Markdown, Latex
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import tabulate
from cal_tools.enums import BadPixels
from cal_tools.plotting import (create_constant_overview,
plot_badpix_3d, show_overview,
show_processed_modules)
from cal_tools.tools import (get_dir_creation_date, get_from_db,
get_pdu_from_db, get_notebook_name,
get_random_db_interface, get_report,
map_gain_stages, parse_runs,
run_prop_seq_from_path,
save_const_to_h5, send_to_db)
from cal_tools.dssclib import (get_dssc_ctrl_data,
get_pulseid_checksum)
from iCalibrationDB import Constants, Conditions, Detectors, Versions
view = Client(profile=cluster_profile)[:]
view.use_dill()
# make sure a cluster is running with ipcluster start --n=32, give it a while to start
h5path = h5path.format(karabo_id, receiver_id)
h5path_idx = h5path_idx.format(karabo_id, receiver_id)
gain_names = ['High', 'Medium', 'Low']
if karabo_da[0] == '-1':
if modules[0] == -1:
modules = list(range(16))
karabo_da = ["DSSC{:02d}".format(i) for i in modules]
else:
modules = [int(x[-2:]) for x in karabo_da]
max_cells = mem_cells
offset_runs = OrderedDict()
offset_runs["high"] = run
creation_time=None
if use_dir_creation_date:
creation_time = get_dir_creation_date(in_folder, run)
print(f"Using {creation_time} as creation time of constant.")
run, prop, seq = run_prop_seq_from_path(in_folder)
dinstance = "DSSC1M1"
print(f"Detector in use is {karabo_id}")
cal_db_interface = get_random_db_interface(cal_db_interface)
```
%% Cell type:code id: tags:
``` python
print("Parameters are:")
print(f"Proposal: {prop}")
print(f"Memory cells: {mem_cells}/{max_cells}")
print("Runs: {}".format([ v for v in offset_runs.values()]))
print(f"Sequences: {sequences}")
print(f"Using DB: {db_output}")
print(f"Input: {in_folder}")
print(f"Output: {out_folder}")
print(f"Bias voltage: {bias_voltage}V")
file_loc = f'proposal:{prop} runs:{[ v for v in offset_runs.values()][0]}'
report = get_report(out_folder)
```
%% Cell type:markdown id: tags:
The following lines will create a queue of files which will then be processed module-parallel, distinguishing between different gains.
%% Cell type:code id: tags:
``` python
# set everything up filewise
os.makedirs(out_folder, exist_ok=True)
gmf = map_gain_stages(in_folder, offset_runs, path_template, karabo_da, sequences)
gain_mapped_files, total_sequences, total_file_size = gmf
print(f"Will process a total of {total_sequences} file.")
```
%% Cell type:markdown id: tags:
## Calculate Offsets, Noise and Thresholds ##
The calculation is performed per-pixel and per-memory-cell. Offsets are simply the median value for a set of dark data taken at a given gain, noise the standard deviation, and gain-bit values the medians of the gain array.
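As a toy illustration of the statistics described above (the notebook's actual, per-memory-cell implementation follows in the next cell):

``` python
# Illustrative only: per-pixel offset and noise estimates from a stack of dark frames.
import numpy as np

dark = np.random.normal(loc=40.0, scale=2.0, size=(64, 64, 500))  # x, y, frames
offset = np.median(dark, axis=2)  # per-pixel offset estimate
noise = np.std(dark, axis=2)      # per-pixel noise estimate
print(offset.shape, noise.shape)
```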
%% Cell type:code id: tags:
``` python
import copy
from functools import partial
def characterize_module(cells, bp_thresh, rawversion, karabo_id, h5path, h5path_idx, inp):
import numpy as np
import copy
import h5py
from cal_tools.enums import BadPixels
def get_num_cells(fname, h5path):
with h5py.File(fname, "r") as f:
cells = f[f"{h5path}/cellId"][()]
if cells.size == 0:
return
maxcell = np.max(cells)
options = [100, 200, 400, 500, 600, 700, 800]
dists = np.array([(o-maxcell) for o in options])
dists[dists<0] = 10000 # assure to always go higher
return options[np.argmin(dists)]
filename, channel = inp
h5path = h5path.format(channel)
h5path_idx = h5path_idx.format(channel)
if cells == 0:
cells = get_num_cells(filename, h5path)
if cells is None:
raise ValueError(f"ERROR! Empty image data file for channel {channel}")
print(f"Using {cells} memory cells")
pulseid_checksum = None
thresholds_offset_hard, thresholds_offset_sigma, thresholds_noise_hard, thresholds_noise_sigma = bp_thresh
infile = h5py.File(filename, "r", driver="core")
if rawversion == 2:
count = np.squeeze(infile[f"{h5path_idx}/count"])
first = np.squeeze(infile[f"{h5path_idx}/first"])
last_index = int(first[count != 0][-1]+count[count != 0][-1])
first_index = int(first[count != 0][0])
else:
status = np.squeeze(infile[f"{h5path_idx}/status"])
if np.count_nonzero(status != 0) == 0:
return
last = np.squeeze(infile[f"{h5path_idx}/last"])
first = np.squeeze(infile[f"{h5path_idx}/first"])
last_index = int(last[status != 0][-1]) + 1
first_index = int(first[status != 0][0])
im = np.array(infile[f"{h5path}/data"][first_index:last_index,...])
cellIds = np.squeeze(infile[f"{h5path}/cellId"][first_index:last_index,...])
infile.close()
pulseid_checksum = get_pulseid_checksum(filename, h5path, h5path_idx)
im = im[:, 0, ...].astype(np.float32)
im = np.rollaxis(im, 2)
im = np.rollaxis(im, 2, 1)
mcells = cells
offset = np.zeros((im.shape[0], im.shape[1], mcells), dtype = np.float64)
noise = np.zeros((im.shape[0], im.shape[1], mcells), dtype = np.float64)
for cc in np.unique(cellIds[cellIds < mcells]):
cellidx = cellIds == cc
if offset_numpy_algorithm == "mean":
offset[...,cc] = np.mean(im[..., cellidx], axis=2)
else:
offset[...,cc] = np.median(im[..., cellidx], axis=2)
noise[...,cc] = np.std(im[..., cellidx], axis=2)
# bad pixels
bp = np.zeros(offset.shape, np.uint32)
# offset related bad pixels
offset_mn = np.nanmedian(offset, axis=(0,1))
offset_std = np.nanstd(offset, axis=(0,1))
bp[(offset < offset_mn-thresholds_offset_sigma*offset_std) |
(offset > offset_mn+thresholds_offset_sigma*offset_std)] |= BadPixels.OFFSET_OUT_OF_THRESHOLD.value
bp[(offset < thresholds_offset_hard[0]) | (offset > thresholds_offset_hard[1])] |= BadPixels.OFFSET_OUT_OF_THRESHOLD.value
bp[~np.isfinite(offset)] |= BadPixels.OFFSET_NOISE_EVAL_ERROR.value
# noise related bad pixels
noise_mn = np.nanmedian(noise, axis=(0,1))
noise_std = np.nanstd(noise, axis=(0,1))
bp[(noise < noise_mn-thresholds_noise_sigma*noise_std) |
(noise > noise_mn+thresholds_noise_sigma*noise_std)] |= BadPixels.NOISE_OUT_OF_THRESHOLD.value
bp[(noise < thresholds_noise_hard[0]) | (noise > thresholds_noise_hard[1])] |= BadPixels.NOISE_OUT_OF_THRESHOLD.value
bp[~np.isfinite(noise)] |= BadPixels.OFFSET_NOISE_EVAL_ERROR.value
return offset, noise, bp, cells, pulseid_checksum
offset_g = OrderedDict()
noise_g = OrderedDict()
gain_g = OrderedDict()
badpix_g = OrderedDict()
gg = 0
start = datetime.now()
all_cells = []
checksums = {}
try:
tGain, encodedGain, operatingFreq = get_dssc_ctrl_data(in_folder + "/r{:04d}/".format(offset_runs["high"]),
slow_data_pattern,
slow_data_aggregators,
offset_runs["high"])
except IOError:
print("ERROR: Couldn't access slow data to read tGain, encodedGain, and operatingFreq \n")
for gain, mapped_files in gain_mapped_files.items():
inp = []
dones = []
for i in modules:
qm = "Q{}M{}".format(i//4 +1, i % 4 + 1)
if qm in mapped_files and not mapped_files[qm].empty():
fname_in = mapped_files[qm].get()
print("Process file: ", fname_in)
dones.append(mapped_files[qm].empty())
else:
continue
inp.append((fname_in, i))
p = partial(characterize_module, max_cells,
(thresholds_offset_hard, thresholds_offset_sigma,
thresholds_noise_hard, thresholds_noise_sigma), rawversion, karabo_id, h5path, h5path_idx)
results = list(map(p, inp))
for ii, r in enumerate(results):
i = modules[ii]
offset, noise, bp, thiscell, pulseid_checksum = r
all_cells.append(thiscell)
qm = "Q{}M{}".format(i//4 +1, i % 4 + 1)
if qm not in offset_g:
offset_g[qm] = np.zeros((offset.shape[0], offset.shape[1], offset.shape[2]))
noise_g[qm] = np.zeros_like(offset_g[qm])
badpix_g[qm] = np.zeros_like(offset_g[qm], np.uint32)
checksums[qm] = pulseid_checksum
offset_g[qm][...] = offset
noise_g[qm][...] = noise
badpix_g[qm][...] = bp
gg +=1
if len(all_cells) > 0:
max_cells = np.max(all_cells)
print(f"Using {max_cells} memory cells")
else:
raise ValueError("0 processed memory cells. No raw data available.")
```
%% Cell type:code id: tags:
``` python
# TODO: add db_module when received from myMDC
# Create the modules dict of karabo_das and PDUs
qm_dict = OrderedDict()
for i, k_da in zip(modules, karabo_da):
qm = f"Q{i//4+1}M{i%4+1}"
qm_dict[qm] = {"karabo_da": k_da,
"db_module": ""}
```
%% Cell type:code id: tags:
``` python
# Retrieve existing constants for comparison
clist = ["Offset", "Noise"]
old_const = {}
old_mdata = {}
print('Retrieve pre-existing constants for comparison.')
detinst = getattr(Detectors, dinstance)
for qm in offset_g.keys():
device = getattr(detinst, qm)
qm_db = qm_dict[qm]
karabo_da = qm_db["karabo_da"]
for const in clist:
dconst = getattr(Constants.DSSC, const)()
condition = Conditions.Dark.DSSC(memory_cells=max_cells,
bias_voltage=bias_voltage,
pulseid_checksum=checksums[qm],
acquisition_rate=operatingFreq[qm],
target_gain=tGain[qm],
encoded_gain=encodedGain[qm])
# This should be used in case of running notebook
# by a different method other than myMDC which already
# sends CalCat info.
# TODO: Set db_module to "" by default in the first cell
if not qm_db["db_module"]:
qm_db["db_module"] = get_pdu_from_db(karabo_id, karabo_da, dconst,
condition, cal_db_interface,
snapshot_at=creation_time)[0]
data, mdata = get_from_db(karabo_id, karabo_da,
dconst,
condition,
None,
cal_db_interface, creation_time=creation_time,
verbosity=2, timeout=cal_db_timeout)
old_const[const] = data
if mdata is not None and data is not None:
time = mdata.calibration_constant_version.begin_at
old_mdata[const] = time.isoformat()
os.makedirs(f'{out_folder}/old/', exist_ok=True)
save_const_to_h5(qm_db["db_module"], karabo_id,
getattr(Constants.DSSC, const)(),
condition, data, file_loc, report,
creation_time,
f'{out_folder}/old/')
else:
old_mdata[const] = "Not found"
```
%% Cell type:code id: tags:
``` python
res = OrderedDict()
for i in modules:
qm = f"Q{i//4+1}M{i%4+1}"
try:
res[qm] = {'Offset': offset_g[qm],
'Noise': noise_g[qm],
}
except Exception as e:
print(f"Error: No constants for {qm}: {e}")
```
%% Cell type:code id: tags:
``` python
# Push the same constant two different times.
# One with the generated pulseID check sum setting for the offline calibration.
# And another for the online calibration as it doesn't have this pulseID checksum, yet.
md = None
for dont_use_pulseIds in [True, False]:
for qm in res.keys():
detinst = getattr(Detectors, dinstance)
device = getattr(detinst, qm)
karabo_da = qm_dict[qm]["karabo_da"]
db_module = qm_dict[qm]["db_module"]
for const in res[qm].keys():
dconst = getattr(Constants.DSSC, const)()
dconst.data = res[qm][const]
opfreq = None if dont_use_pulseIds else operatingFreq[qm]
targetgain = None if dont_use_pulseIds else tGain[qm]
encodedgain = None if dont_use_pulseIds else encodedGain[qm]
pidsum = None if dont_use_pulseIds else checksums[qm]
# set the operating condition
condition = Conditions.Dark.DSSC(memory_cells=max_cells,
bias_voltage=bias_voltage,
pulseid_checksum=pidsum,
acquisition_rate=opfreq,
target_gain=targetgain,
encoded_gain=encodedgain)
if db_output:
md = send_to_db(db_module, karabo_id, dconst, condition, file_loc, report,
cal_db_interface, creation_time=creation_time, timeout=cal_db_timeout)
if local_output and dont_use_pulseIds: # Don't save the constant locally twice.
md = save_const_to_h5(db_module, karabo_id, dconst, condition,
dconst.data, file_loc, report,
creation_time, out_folder)
print(f"Calibration constant {const} is stored locally.\n")
if not dont_use_pulseIds:
print("Constants parameter conditions are:\n")
print(f"• memory_cells: {max_cells}\n• bias_voltage: {bias_voltage}\n"
f"• pulseid_checksum: {pidsum}\n• acquisition_rate: {opfreq}\n"
f"• target_gain: {targetgain}\n• encoded_gain: {encodedgain}\n"
f"• creation_time: {creation_time}\n")
```
%% Cell type:code id: tags:
``` python
mnames = []
for i in modules:
qm = f"Q{i//4+1}M{i % 4+1}"
display(Markdown(f'## Position of the module {qm} and its ASICs ##'))
mnames.append(qm)
show_processed_modules(dinstance=dinstance, constants=None, mnames=mnames, mode="position")
```
%% Cell type:markdown id: tags:
## Single-Cell Overviews ##
Single-cell overviews allow identifying potential effects on all memory cells, e.g. on the sensor level. Additionally, they should serve as a first sanity check on expected behaviour, e.g. if structuring on the ASIC level is visible in the offsets, but otherwise no immediate artifacts are visible.
%% Cell type:code id: tags:
``` python
cell = 9
gain = 0
out_folder = None
show_overview(res, cell, gain, out_folder=out_folder, infix="_{}".format(run))
```
%% Cell type:code id: tags:
``` python
cols = {BadPixels.NOISE_OUT_OF_THRESHOLD.value: (BadPixels.NOISE_OUT_OF_THRESHOLD.name, '#FF000080'),
BadPixels.OFFSET_NOISE_EVAL_ERROR.value: (BadPixels.OFFSET_NOISE_EVAL_ERROR.name, '#0000FF80'),
BadPixels.OFFSET_OUT_OF_THRESHOLD.value: (BadPixels.OFFSET_OUT_OF_THRESHOLD.name, '#00FF0080'),
BadPixels.OFFSET_OUT_OF_THRESHOLD.value | BadPixels.NOISE_OUT_OF_THRESHOLD.value: ('MIXED', '#DD00DD80')}
if high_res_badpix_3d:
display(Markdown("""
## Global Bad Pixel Behaviour ##
The following plots show the results of bad pixel evaluation for all evaluated memory cells.
Cells are stacked in the Z-dimension, while pixel values in x/y are rebinned with a factor of 2.
This excludes single bad pixels present only in disconnected pixels.
Hence, any bad pixels spanning at least 4 pixels in the x/y-plane, or across at least two memory cells are indicated.
Colors encode the bad pixel type, or mixed type.
"""))
# set rebin_fac to 1 for avoiding rebining and
# losing real values of badpixels(High resolution).
gain = 0
for mod, data in badpix_g.items():
plot_badpix_3d(data, cols, title=mod, rebin_fac=2)
plt.show()
```
%% Cell type:markdown id: tags:
## Aggregate values, and per Cell behaviour ##
The following tables and plots give an overview of statistical aggregates for each constant, as well as per cell behavior.
%% Cell type:code id: tags:
``` python
create_constant_overview(offset_g, "Offset (ADU)", max_cells, entries=1)
```
%% Cell type:code id: tags:
``` python
create_constant_overview(noise_g, "Noise (ADU)", max_cells, 0, 100, entries=1)
```
%% Cell type:code id: tags:
``` python
bad_pixel_aggregate_g = OrderedDict()
for m, d in badpix_g.items():
bad_pixel_aggregate_g[m] = d.astype(np.bool).astype(np.float)
create_constant_overview(bad_pixel_aggregate_g, "Bad pixel fraction", max_cells, entries=1)
```
%% Cell type:markdown id: tags:
## Summary tables ##
The following tables show summary information for the evaluated module. Values for currently evaluated constants are compared with values for pre-existing constants retrieved from the calibration database.
%% Cell type:code id: tags:
``` python
header = ['Parameter',
"New constant", "Old constant ",
"New constant", "Old constant ",
"New constant", "Old constant "]
for const in ['Offset', 'Noise']:
table = [['','High gain', 'High gain']]
for qm in res.keys():
data = np.copy(res[qm][const])
if old_const[const] is not None:
dataold = np.copy(old_const[const])
f_list = [np.nanmedian, np.nanmean, np.nanstd, np.nanmin, np.nanmax]
n_list = ['Median', 'Mean', 'Std', 'Min', 'Max']
for i, f in enumerate(f_list):
line = [n_list[i]]
line.append('{:6.1f}'.format(f(data[...,gain])))
if old_const[const] is not None:
line.append('{:6.1f}'.format(f(dataold[...,gain])))
else:
line.append('-')
table.append(line)
display(Markdown('### {} [ADU], good and bad pixels ###'.format(const)))
md = display(Latex(tabulate.tabulate(table, tablefmt='latex', headers=header)))
```
......
%% Cell type:markdown id: tags:
# FastCCD Dark Characterization
Author: I. Klačková, S. Hauf, K. Setoodehnia and M. Cascella
The following notebook provides dark image analysis of the FastCCD detector.
Dark characterization evaluates offset and noise of the FastCCD detector, corrects the noise for Common Mode (CM), and defines bad pixels relative to offset and CM corrected noise. Bad pixels are then excluded and CM corrected noise is recalculated excluding the bad pixels. Resulting offset and CM corrected noise maps, as well as the bad pixel map are sent to the calibration database.
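As a toy sketch of the sigma-based bad pixel definition described above (the actual maps and thresholds, e.g. bad_pixel_offset_sigma, are computed later in this notebook):

``` python
# Illustrative only: flag pixels whose offset deviates from the median offset
# by more than a given number of standard deviations.
import numpy as np

offset_map = np.random.normal(3800.0, 20.0, size=(128, 128))
sigma = 5.0
med, std = np.nanmedian(offset_map), np.nanstd(offset_map)
bad = (offset_map < med - sigma * std) | (offset_map > med + sigma * std)
print(bad.sum(), "pixels flagged")
```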
%% Cell type:code id: tags:
``` python
cluster_profile = "noDB" # ipcluster profile to use
in_folder = "/gpfs/exfel/exp/SCS/201930/p900074/raw" # input folder, required
out_folder = '/gpfs/exfel/data/scratch/ahmedk/test/fastccd' # output folder, required
sequence = 0 # sequence file to use
run = 351 # which run to read data from, required
karabo_da = ['DA05'] # data aggregators
karabo_id = "SCS_CDIDET_FCCD2M" # karabo prefix of PNCCD devices
receiver_id = "FCCD" # inset for receiver devices
path_template = 'RAW-R{:04d}-{}-S{{:05d}}.h5' # the template to use to access data
h5path = '/INSTRUMENT/{}/DAQ/{}:daqOutput/data/image/pixels' # path to the data in the HDF5 file
h5path_t = '/CONTROL/{}/CTRL/LSLAN/inputA/crdg/value' # path to find temperature
h5path_cntrl = '/RUN/{}/DET/FCCD' # path to find control data
use_dir_creation_date = True # To be used to retrieve calibration constants later on (for database time derivation)
cal_db_interface = "tcp://max-exfl016:8020" # the calibration database interface to use
cal_db_timeout = 300000 # timeout on calibration database requests
db_output = False # Output constants to the calibration database
local_output = True # output constants locally
number_dark_frames = 0 # number of images to be used, if set to 0 all available images are used
# The two operation modes for FastCCD have fixed names which cannot be changed:
operation_mode = "FF" # FS stands for frame-store and FF for full-frame opeartion.
operation_mode = "FF" # FS stands for frame-store and FF for full-frame operation.
temp_limits = 5 # to find calibration constants later on, the sensor temperature is allowed to vary by 5 units
bad_pixel_offset_sigma = 5. # Any pixel whose offset is beyond 5 standard deviations, is a bad pixel
bad_pixel_noise_sigma = 5. # Any pixel whose noise is beyond 5 standard deviations, is a bad pixel
sigmaNoise = 5. # Any pixel whose signal exceeds 'sigmaNoise'*noiseCM (common mode corrected noise) will be masked
fix_temperature = 0. # Fixed operation temperature in Kelvins. If set to 0, mean value of the data file's temperature is
# used.
chunkSize = 100 # Number of images to read per chunk
cpuCores = 40 # Specifies the number of running cpu cores
commonModeAxis = 1 # Axis along which common mode will be calculated (0: along rows, 1: along columns)
run_parallel = True # For parallel computation
# According to our gain calibration using 55Fe source, we have the following conversion gains (e.g., 1 ADU = 6.3 e-):
ADU_to_electron_upper_hg = 6.3 # for upper hemisphere and high gain
ADU_to_electron_lower_hg = 6.4 # for lower hemisphere and high gain
ADU_to_electron_upper_mg = 23.4 # for upper hemisphere and medium gain
ADU_to_electron_lower_mg = 23.4 # for lower hemisphere and medium gain
ADU_to_electron_upper_lg = 49.3 # for upper hemisphere and low gain
ADU_to_electron_lower_lg = 47.3 # for lower hemisphere and low gain
operation_mode = '' # Detector operation mode, optional
```
%% Cell type:code id: tags:
``` python
# Required Packages:
import copy
import datetime
import time
import os
import warnings
warnings.filterwarnings('ignore')
import h5py
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from prettytable import PrettyTable
from cal_tools.tools import (get_dir_creation_date, get_random_db_interface,
get_pdu_from_db, get_report,
save_const_to_h5, send_to_db)
from cal_tools.enums import BadPixels
from iCalibrationDB import Constants, Conditions, Detectors, Versions
from iCalibrationDB.detectors import DetectorTypes
from XFELDetAna import xfelpyanatools as xana
from XFELDetAna import xfelpycaltools as xcal
from XFELDetAna.detectors.fastccd import readerh5 as fastccdreaderh5
from XFELDetAna.util import env
env.iprofile = cluster_profile
import XFELDetAna.xfelprofiler as xprof
profiler = xprof.Profiler()
profiler.disable()
from XFELDetAna.xfelreaders import ChunkReader
```
%% Cell type:code id: tags:
``` python
# Output Folder Creation:
if not os.path.exists(out_folder):
os.makedirs(out_folder)
```
%% Cell type:code id: tags:
``` python
# Number of Images:
def nImagesOrLimit(nImages, limit):
if limit == 0:
return nImages
else:
return min(nImages, limit)
```
%% Cell type:code id: tags:
``` python
proposal = list(filter(None, in_folder.strip('/').split('/')))[-2]
file_loc = 'proposal:{} runs:{}'.format(proposal, run)
report = get_report(out_folder)
```
%% Cell type:code id: tags:
``` python
# Detector Operation Mode, Calibration Database Settings, and Some Initial Run Parameters & Paths:
display(Markdown('### Initial Settings'))
if operation_mode == "FS":
x = 960 # rows of the FastCCD to analyze in FS mode
y = 960 # columns of the FastCCD to analyze in FS mode
print('\nYou are analyzing data in FS mode.')
else:
x = 1934 # rows of the FastCCD to analyze in FF mode
y = 960 # columns of the FastCCD to analyze in FF mode
print('\nYou are analyzing data in FF mode.')
ped_dir = "{}/r{:04d}".format(in_folder, run)
fp_name = path_template.format(run, karabo_da[0])
fp_path = '{}/{}'.format(ped_dir, fp_name)
filename = fp_path.format(sequence)
h5path = h5path.format(karabo_id, receiver_id)
h5path_t = h5path_t.format(karabo_id)
h5path_cntrl = h5path_cntrl.format(karabo_id)
creation_time = None
if use_dir_creation_date:
creation_time = get_dir_creation_date(in_folder, run)
cal_db_interface = get_random_db_interface(cal_db_interface)
print('Calibration database interface: {}'.format(cal_db_interface))
print("Sending constants to the calibration database: {}".format(db_output))
print("HDF5 path to data: {}".format(h5path))
print("Run number: {}".format(run))
print("Reading data from: {}".format(filename))
if creation_time:
print("Using {} as creation time".format(creation_time.isoformat()))
```
%% Cell type:code id: tags:
``` python
# Reading Parameters such as Detector Bias, Gain, etc. from the Data:
memoryCells = 1 # FastCCD has 1 memory cell
sensorSize = [x, y]
blockSize = [sensorSize[0]//2, sensorSize[1]] # Sensor area will be analysed according to blocksize
xcal.defaultBlockSize = blockSize
nImages = fastccdreaderh5.getDataSize(filename, h5path)[0] # Specifies total number of images to process
nImages = nImagesOrLimit(nImages, number_dark_frames)
profile = False
with h5py.File(filename, 'r') as f:
bias_voltage = int(f['{}/biasclock/bias/value'.format(h5path_cntrl)][0])
det_gain = int(f['{}/exposure/gain/value'.format(h5path_cntrl)][0])
integration_time = int(f['{}/exposure/exposure_time/value'.format(h5path_cntrl)][0])
temperature = np.mean(f[h5path_t])
temperature = round(temperature, 2)
```
%% Cell type:code id: tags:
``` python
# Printing the Parameters Read from the Data File:
display(Markdown('### Evaluated Parameters'))
print("Number of dark images to analyze:", nImages)
gain_dict = {
"high gain" : 8,
"medium gain" : 2,
"low gain" : 1,
"auto gain" : 0
}
for gain, value in gain_dict.items():
if det_gain == value:
gain_setting = gain
print("Bias voltage is {} V.".format(bias_voltage))
print("Detector gain is set to x{} ({}).".format(det_gain, gain_setting))
print("Detector integration time is set to {}".format(integration_time), 'ms.')
if fix_temperature != 0.:
print("Using a fixed temperature of {} K".format(fix_temperature))
else:
# This is needed while sending the
# calibration constant to the DB later
fix_temperature = temperature + 273.15
print("Temperature is not fixed.")
print("Mean temperature was {:0.2f} °C / {:0.2f} K".format(temperature, fix_temperature))
```
%% Cell type:code id: tags:
``` python
# Reading Files in Chunks:
# Chunk reader returns an iterator to access the data in the file within the ranges:
reader = ChunkReader(filename, fastccdreaderh5.readData, nImages, chunkSize, path = h5path, pixels_x = sensorSize[0],
pixels_y = sensorSize[1],)
```
%% Cell type:code id: tags:
``` python
# Calculator:
# noiseCal is a noise map calculator, which internally also produces a per-pixel mean map, i.e. an offset map:
noiseCal = xcal.NoiseCalculator(sensorSize, memoryCells, cores=cpuCores, blockSize=blockSize, runParallel=run_parallel)
```
%% Cell type:markdown id: tags:
### First Iteration
Characterization of dark images, with the purpose of creating dark maps (offset, noise and bad pixel maps), is an iterative process. First, initial offset and noise maps are produced from the raw dark data.
%% Cell type:code id: tags:
``` python
counter1 = 1 # To count how many "if data.shape[2] >= chunkSize" instances are there.
counter2 = 0 # To count how many "if data.shape[2] < chunkSize" instances are there.
chunkSize_new = 0 # See below
for data in reader.readChunks():
data = np.bitwise_and(data.astype(np.uint16), 0b0011111111111111).astype(np.float32)
dx = np.count_nonzero(data, axis=(0, 1))
data = data[:,:,dx != 0]
# Some sequences may have less than 500 frames in them. To find out how many images there are, we will temporarily
# change chunkSize to be the same as whatever number of frames the last chunk of data has:
if data.shape[2] < chunkSize:
chunkSize_new = data.shape[2]
print("Number of images are less than chunkSize. chunkSize is temporarily changed to {}."
.format(chunkSize_new))
images = images + chunkSize_new
counter2 += 1
else:
images = counter1*chunkSize + counter2*chunkSize_new
counter1 += 1
noiseCal.fill(data) # Filling calculators with data
chunkSize = 100 # resetting the chunkSize to its default value for the next sequence or data-chunk
print('A total number of {} images are processed.'.format(images))
```
%% Cell type:code id: tags:
``` python
offsetMap = noiseCal.getOffset() # Producing offset map
noiseMap = noiseCal.get() # Producing noise map
noiseCal.reset() # Resetting noise calculator
print("Initial maps are created.")
```
%% Cell type:markdown id: tags:
### Offset and Noise Maps prior to Common Mode Correction
In the following, the histogram of the FastCCD offset, FastCCD offset map, as well as the initial uncorrected noise map are plotted:
%% Cell type:code id: tags:
``` python
#************** OFFSET MAP HISTOGRAM ***********#
ho, co = np.histogram(offsetMap.flatten(), bins=700) # ho = offset histogram; co = offset bin edges
do = {'x': co[:-1],
'y': ho,
'y_err': np.sqrt(ho[:]),
'drawstyle': 'bars',
'color': 'cornflowerblue',
'label': 'Raw Signal (ADU)'
}
fig = xana.simplePlot(do, figsize='1col', aspect=1, x_label = 'Raw Signal (ADU)', y_label="Counts",
x_range = (3400, 4400), title = 'Offset Histogram')
#fig.savefig('Offset_Hist.svg', format='svg', dpi=1200, bbox_inches='tight')
t0 = PrettyTable()
t0.title = "Raw Signal"
t0.field_names = ["Mean", "Median", "Standard Deviation"]
t0.add_row(["{:0.3f} (ADU)".format(np.mean(data)), "{:0.3f} (ADU)".format(np.median(data)), "{:0.3f} (ADU)"
.format(np.std(data))])
print(t0,'\n')
#************** OffsetMAP *******************#
fig = xana.heatmapPlot(offsetMap[:,:,0], x_label='Column Number', y_label='Row Number', aspect=1,
x_range=(0, y), y_range=(0, x), vmin=3000, vmax=4300, lut_label='Offset (ADU)',
panel_x_label='Columns Stat (ADU)', panel_y_label='Rows Stat (ADU)',
panel_top_low_lim = 3000, panel_top_high_lim = 4500, panel_side_low_lim = 3000,
panel_side_high_lim = 5000, title = 'OffsetMap')
#fig.savefig('RawOffsetMap.pdf', format='pdf', dpi=400, bbox_inches='tight')
#************** Raw NoiseMAP *******************#
fig = xana.heatmapPlot(noiseMap[:,:,0], x_label='Column Number', y_label='Row Number', aspect=1,
lut_label='Uncorrected Noise (ADU)', x_range=(0, y),
y_range=(0, x), vmax=2*np.mean(noiseMap), panel_x_label='Columns Stat (ADU)',
panel_y_label='Rows Stat (ADU)', panel_top_low_lim = 0, panel_top_high_lim = 20,
panel_side_low_lim = 0, panel_side_high_lim = 50, title = 'Uncorrected NoiseMap')
#fig.savefig('RawNoiseMap.pdf', format='pdf', dpi=400, bbox_inches='tight')
```
%% Cell type:code id: tags:
``` python
# Offset Correction:
offsetCorrection = xcal.OffsetCorrection(sensorSize, offsetMap, nCells = memoryCells, cores=cpuCores, gains=None,
runParallel=run_parallel, blockSize=blockSize)
offsetCorrection.debug()
# Common Mode Correction:
# This is the new method subtracting the median of all pixels that are read out at the same time along a row:
cmCorrection = xcal.CommonModeCorrection([data.shape[0], data.shape[1]], [data.shape[0]//2, data.shape[1]],
commonModeAxis, parallel=run_parallel, dType=np.float32, stride=10,
noiseMap=noiseMap.astype(np.float32), minFrac=0)
cmCorrection.debug()
```
%% Cell type:code id: tags:
``` python
# Histogram Calculators:
# For offset corrected data:
histCalCorrected = xcal.HistogramCalculator(sensorSize, bins=400, range=[-200, 200], memoryCells=memoryCells,
cores=cpuCores, gains=None, blockSize=blockSize)
# For common mode corrected data:
histCalCMCorrected = xcal.HistogramCalculator(sensorSize, bins=400, range=[-200, 200], memoryCells=memoryCells,
cores=cpuCores, gains=None, blockSize=blockSize)
```
%% Cell type:markdown id: tags:
### Second Iteration
During the second iteration, the data are offset corrected and then common mode corrected to produce a common mode corrected noise map. The common mode correction is calculated by subtracting out the median of all pixels that are read out at the same time along a row.
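A toy sketch of this common mode idea (the notebook itself uses xcal.CommonModeCorrection, constructed in an earlier cell):

``` python
# Illustrative only: subtract, per frame, the median of the pixels read out
# together along a row.
import numpy as np

frames = np.random.normal(0.0, 3.0, size=(16, 32, 10))  # rows, columns, frames
common_mode = np.median(frames, axis=1, keepdims=True)   # one value per row and frame
cm_corrected = frames - common_mode
print(cm_corrected.shape)
```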
%% Cell type:code id: tags:
``` python
counter1 = 1 # To count how many "if data.shape[2] >= chunkSize" instances are there.
counter2 = 0 # To count how many "if data.shape[2] < chunkSize" instances are there.
chunkSize_new = 0 # See below
for data in reader.readChunks():
data = np.bitwise_and(data.astype(np.uint16), 0b0011111111111111).astype(np.float32)
dx = np.count_nonzero(data, axis=(0, 1))
data = data[:,:,dx != 0]
# Some sequences may have less than 500 frames in them. To find out how many images there are, we will temporarily
# change chunkSize to be the same as whatever number of frames the last chunk of data has:
if data.shape[2] < chunkSize:
chunkSize_new = data.shape[2]
print("Number of images are less than chunkSize. chunkSize is temporarily changed to {}."
.format(chunkSize_new))
images = images + chunkSize_new
counter2 += 1
else:
images = counter1*chunkSize + counter2*chunkSize_new
counter1 += 1
data = offsetCorrection.correct(data) # Offset correction
offset_corr_data = copy.copy(data) # I am copying this so that I can have access to it in the table below
histCalCorrected.fill(data)
cellTable=np.zeros(data.shape[2], np.int32) # Common mode correction
data = cmCorrection.correct(data.astype(np.float32), cellTable=cellTable) # Common mode correction
histCalCMCorrected.fill(data)
noiseCal.fill(data) # Filling noise calculator with common mode (CM) corrected data
chunkSize = 100 # resetting the chunkSize to its default value for the next sequence or data-chunk
print('A total number of {} images are processed.'.format(images))
print("Offset and common mode corrections are applied.")
```
%% Cell type:code id: tags:
``` python
noiseMapCM = noiseCal.get() # Produces CM corrected noise map
ho, eo, co, so = histCalCorrected.get()
hCM, eCM, cCM, sCM = histCalCMCorrected.get()
```
%% Cell type:code id: tags:
``` python
# We are copying these so that we can replot them later after the calculators are reset:
ho_second_trial = copy.copy(ho)
co_second_trial = copy.copy(co)
hCM_second_trial = copy.copy(hCM)
cCM_second_trial = copy.copy(cCM)
```
%% Cell type:markdown id: tags:
### Signal after Offset and Common Mode Corrections
Here, the offset corrected signal is compared to the common-mode corrected signal (in the form of binned histograms):
%% Cell type:code id: tags:
``` python
do = [{'x': co,
'y': ho,
'y_err': np.sqrt(ho[:]),
'drawstyle': 'steps-mid',
'color': 'cornflowerblue',
'label': 'Offset Corrected Signal'
},
{'x': cCM,
'y': hCM,
'y_err': np.sqrt(hCM[:]),
'drawstyle': 'steps-mid',
'color': 'red',
'label': 'Common Mode Corrected Signal'
}]
fig = xana.simplePlot(do, figsize='2col', aspect=1, x_label = 'Corrected Signal (ADU)', y_label="Counts",
x_range = (-20, 20), legend='top-right-frame-1col', title = 'Corrected Signal - 2nd Iteration')
#fig.savefig('Corrected_Signal_Hist_1.svg', format='svg', dpi=1200, bbox_inches='tight')
t0 = PrettyTable()
t0.title = "Comparison of the First Round of Corrections - Bad Pixels Not Excluded"
t0.field_names = ["After Offset Correction", "After Common Mode Correction"]
t0.add_row(["Mean: {:0.3f} (ADU)".format(np.mean(offset_corr_data)), "Mean: {:0.3f} (ADU)".format(np.mean(data))])
t0.add_row(["Median: {:0.3f} (ADU)".format(np.median(offset_corr_data)), "Median: {:0.3f} (ADU)"
.format(np.median(data))])
t0.add_row(["Standard Deviation: {:0.3f} (ADU)".format(np.std(offset_corr_data)), "Standard Deviation: {:0.3f} (ADU)"
.format(np.std(data))])
print(t0,'\n')
```
%% Cell type:markdown id: tags:
### Noise Map after Common Mode Correction
In the following, the effect of the common mode correction on the noise is shown. Finally, the common mode corrected noise map (noiseMapCM) is displayed and compared to the initial uncorrected noise map:
%% Cell type:code id: tags:
``` python
#*****NOISE MAP HISTOGRAM FROM THE COMMON MODE CORRECTED DATA*******#
hn, cn = np.histogram(noiseMap.flatten(), bins=200, range=(0, 40)) # hn: histogram of noise, cn: bin edges for noise
hn_CM, cn_CM = np.histogram(noiseMapCM.flatten(), bins=200, range=(0, 40))
dn = [{'x': cn[:-1],
'y': hn,
#'y_err': np.sqrt(hn[:]),
'drawstyle': 'steps-mid',#'bars',
'color': 'blue',#'cornflowerblue',
'label': 'Uncorrected Noise'
},
{'x': cn_CM[:-1],
'y': hn_CM,
#'y_err': np.sqrt(hn_CM[:]),
'drawstyle': 'steps-mid',#'bars',
'color': 'crimson',#'red',#'cornflowerblue',
#'ecolor': 'crimson',
'label': 'Common Mode Corrected Noise'
}]
fig = xana.simplePlot(dn, figsize='2col', aspect=1, x_label = 'Noise (ADU)', y_label="Counts",
x_range=(0, 40), y_range=(0, 1e6), y_log=True, legend='top-center-frame-1col',
title = 'Noise Comparison')
#fig.savefig('Noise_CM_1_Hist.svg', format='svg', dpi=1200, bbox_inches='tight')
fig = xana.heatmapPlot(noiseMapCM[:,:,0], aspect=1, x_label='Column Number', y_label='Row Number',
lut_label='Common Mode Corrected Noise (ADU)', x_range=(0,y), y_range=(0,x),
vmax=2*np.mean(noiseMapCM), panel_top_low_lim = 0, panel_top_high_lim = 20,
panel_side_low_lim = 0, panel_side_high_lim = 50, title = 'Common Mode Corrected Noise',
panel_x_label='Columns Stat (ADU)', panel_y_label='Rows Stat (ADU)')
#fig.savefig('NoiseMapCM.pdf', format='pdf', dpi=400, bbox_inches='tight')
```
%% Cell type:code id: tags:
``` python
# Resetting the calculators so we can do a third iteration later:
noiseCal.reset()
histCalCorrected.reset()
histCalCMCorrected.reset()
cmCorrection.reset()
```
%% Cell type:markdown id: tags:
### Initial BadPixelMap
This is generated based on the offset and CM corrected noise maps:
%% Cell type:code id: tags:
``` python
bad_pixels = np.zeros(offsetMap.shape, np.uint32)
mnoffset = np.nanmedian(offsetMap)
stdoffset = np.nanstd(offsetMap)
bad_pixels[(offsetMap < mnoffset-bad_pixel_offset_sigma*stdoffset) |
(offsetMap > mnoffset+bad_pixel_offset_sigma*stdoffset)] = BadPixels.OFFSET_OUT_OF_THRESHOLD.value
mnnoise = np.nanmedian(noiseMapCM)
stdnoise = np.nanstd(noiseMapCM)
bad_pixels[(noiseMapCM < mnnoise-bad_pixel_noise_sigma*stdnoise) |
(noiseMapCM > mnnoise+bad_pixel_noise_sigma*stdnoise)] = BadPixels.NOISE_OUT_OF_THRESHOLD.value
fig = xana.heatmapPlot(np.log2(bad_pixels[:,:,0]),aspect=1, x_label='Column Number', y_label='Row Number',
lut_label='2^(Assigned Value to Bad Pixels)', x_range=(0, y), y_range=(0, x),
title = 'Bad Pixels Map Excluding Non-Sensitive\n Areas in Middle of CCD',
panel_x_label= 'Columns Stat', panel_y_label='Rows Stat')
```
%% Cell type:markdown id: tags:
Here, we are adding the pixels in the hole (center of the FastCCD), as well as 4 rows in the center of the detector, which we call the overscan region:
%% Cell type:code id: tags:
``` python
def create_circular_mask(h, w, center=None, radius=None):
import numpy as np
import math
if center is None: # use the middle of the image
center = [int(w/2), int(h/2)]
if radius is None: # use the smallest distance between the center and image walls
radius = min(center[0], center[1], w-center[0], h-center[1])
Y, X = np.ogrid[:h, :w]
dist_from_center = np.sqrt((X - center[0])**2 + (Y-center[1])**2)
mask = dist_from_center < radius
return mask
```
%% Cell type:code id: tags:
``` python
mask = np.zeros(offsetMap.shape, np.uint32)
# Defining a circular mask + a rectangular mask (overscan) for the hole in the middle of the CCD:
h, w = (x, y)
hole_mask_bool = create_circular_mask(h-4, w, radius=61.5, center=(w//2, (h-4)//2))
hole_mask = np.zeros(hole_mask_bool.shape, np.uint32)
hole_mask[hole_mask_bool] = BadPixels.NON_SENSITIVE.value
overscan_mask = np.full((4, w), BadPixels.OVERSCAN.value)
mask[:,:,0] = np.insert(hole_mask, (h-4)//2, overscan_mask, 0)
# Assigning this masked area as bad pixels:
bad_pixels = np.bitwise_or(bad_pixels, mask)
fig = xana.heatmapPlot(np.log2(bad_pixels[:,:,0]),aspect=1, x_label='Column Number', y_label='Row Number',
lut_label='2^(Assigned Value to Bad Pixels)', x_range=(0, y), y_range=(0, x),
panel_top_low_lim = 0, panel_top_high_lim = 20, panel_side_low_lim = 0, panel_side_high_lim = 20,
title = 'Bad Pixels Map Including Non-Sensitive\n Areas in Middle of CCD',
panel_x_label='Columns Stat', panel_y_label='Rows Stat', vmax=20)
#fig.savefig('BadPixelMap_1.svg', format='svg', dpi=1200, bbox_inches='tight')
```
%% Cell type:markdown id: tags:
### Third Iteration
During the third iteration, the bad pixel map is applied to the data. Bad pixels are masked. Offset and common mode corrections are applied once again to the data, which now have bad pixels excluded, to produce a common mode corrected noise map:
%% Cell type:code id: tags:
``` python
# bad_pixels is an array of (1934, 960, 1) filled with zeros except at indices where we have the actual bad pixels, whose
# values are set to be: 2 (2^1: BadPixels.OFFSET_OUT_OF_THRESHOLD.value), or
# 262144 (2^18: BadPixels.OVERSCAN.value), or 524288 (2^19: BadPixels.NON_SENSITIVE.value). These indices can be found
# using np.argwhere(bad_pixels != 0)
event_threshold = sigmaNoise*np.median(noiseMapCM) # for exclusion of possible cosmic ray events
noiseCal.setBadPixelMask(bad_pixels != 0)
```
%% Cell type:code id: tags:
``` python
counter1 = 1 # To count how many "if data.shape[2] >= chunkSize" instances are there.
counter2 = 0 # To count how many "if data.shape[2] < chunkSize" instances are there.
chunkSize_new = 0 # See below
for data in reader.readChunks():
#data = data.astype(np.float32)
data = np.bitwise_and(data.astype(np.uint16), 0b0011111111111111).astype(np.float32)
dx = np.count_nonzero(data, axis=(0, 1))
data = data[:,:,dx != 0]
# Some sequences may have less than 500 frames in them. To find out how many images there are, we will temporarily
# change chunkSize to be the same as whatever number of frames the last chunk of data has:
if data.shape[2] < chunkSize:
chunkSize_new = data.shape[2]
print("Number of images are less than chunkSize. chunkSize is temporarily changed to {}."
.format(chunkSize_new))
images = images + chunkSize_new
counter2 += 1
else:
images = counter1*chunkSize + counter2*chunkSize_new
counter1 += 1
data_copy = offsetCorrection.correct(copy.copy(data))
cellTable=np.zeros(data_copy.shape[2], np.int32)
data_copy = cmCorrection.correct(data_copy.astype(np.float32), cellTable=cellTable)
data[data_copy > event_threshold] = np.nan # cosmic rays
data = np.ma.MaskedArray(data, np.isnan(data), fill_value=0) # masking cosmics, the default fill_value is 1e+20
data = offsetCorrection.correct(data)
offset_corr_data2 = copy.copy(data) # I am copying this so that I can have access to it in the table below
histCalCorrected.fill(data)
cellTable=np.zeros(data.shape[2], np.int32)
data = cmCorrection.correct(data.astype(np.float32), cellTable=cellTable)
histCalCMCorrected.fill(data)
noiseCal.fill(data)
chunkSize = 100 # resetting the chunkSize to its default value for the next sequence or data-chunk
print('A total number of {} images are processed.'.format(images))
print("Final iteration is Performed.")
```
%% Cell type:code id: tags:
``` python
noiseMapCM_2nd = noiseCal.get().filled(0) # the masked pixels are filled with zero
ho2, eo2, co2, so2 = histCalCorrected.get()
hCM2, eCM2, cCM2, sCM2 = histCalCMCorrected.get()
```
%% Cell type:markdown id: tags:
### Plots of the Final Results
The following plot and table compare the offset and common mode corrected signal with and without the bad pixels:
%% Cell type:code id: tags:
``` python
do_Final = [{'x': co_second_trial,
'y': ho_second_trial,
'y_err': np.sqrt(ho_second_trial[:]),
'drawstyle': 'steps-mid',
'color': 'blue',#'cornflowerblue',
'errorstyle': 'bars',
'label': 'Offset Corrected Signal, Bad Pixels Included - 2nd Trial'
},
{'x': cCM_second_trial,
'y': hCM_second_trial,
'y_err': np.sqrt(hCM_second_trial[:]),
'drawstyle': 'steps-mid',
'color': 'red',
'errorstyle': 'bars',
'ecolor': 'crimson',
'label': 'Common Mode Corrected Signal, Bad Pixels Included - 2nd Trial'
},
{'x': co2,
'y': ho2,
'y_err': np.sqrt(ho2[:]),
'drawstyle': 'steps-mid',
'color': 'black', #'cornflowerblue',
'errorstyle': 'bars',
'label': 'Offset Corrected Signal, Bad Pixels Excluded - 3rd Trial'
},
{'x': cCM2,
'y': hCM2,
'y_err': np.sqrt(hCM2[:]),
'drawstyle': 'steps-mid',
'color': 'orange', #'cornflowerblue',
'errorstyle': 'bars',
'label': 'Common Mode Corrected Signal, Bad Pixels Excluded - 3rd Trial'
}]
fig = xana.simplePlot(do_Final, figsize='2col', aspect=1, x_label = 'Corrected Signal (ADU)',
y_label="Counts (Logarithmic Scale)", y_log=True, x_range=(-40, 40),
legend='bottom-left-frame-1col', title = 'Comparison of Corrected Signal')
#fig.savefig('Corrected_Signal_Hist_2.svg', format='svg', dpi=1200, bbox_inches='tight')
# offset_corr_data2 and data most likely have some nan's => I am going to use nanmean, nanmedian and nanstd functions:
t0 = PrettyTable()
t0.title = "Comparison of the Second Round of Corrections - Bad Pixels Excluded"
t0.field_names = ["After Offset Correction", "After Common Mode Correction"]
t0.add_row(["Mean: {:0.3f} (ADU)".format(np.nanmean(offset_corr_data2)), "Mean: {:0.3f} (ADU)".format(np.nanmean(data))])
t0.add_row(["Median: {:0.3f} (ADU)".format(np.nanmedian(offset_corr_data2)), "Median: {:0.3f} (ADU)"
.format(np.nanmedian(data))])
t0.add_row(["Standard Deviation: {:0.3f} (ADU)".format(np.nanstd(offset_corr_data2)),
"Standard Deviation: {:0.3f} (ADU)".format(np.nanstd(data))])
print(t0,'\n')
```
%% Cell type:markdown id: tags:
### Final NoiseMap
The effect of excluding bad pixels on the common mode corrected noise is shown below. Finally, the common mode corrected noise map with bad pixels excluded (noiseMapCM_2nd) is displayed:
%% Cell type:code id: tags:
``` python
#*****NOISE MAP HISTOGRAM FROM THE COMMON MODE CORRECTED DATA*******#
hn_CM2, cn_CM2 = np.histogram(noiseMapCM_2nd.flatten(), bins=200, range=(0, 40))
dn2 = [{'x': cn[:-1],
'y': hn,
#'y_err': np.sqrt(hn[:]),
'drawstyle': 'steps-mid',#'bars',
'color': 'blue', #'cornflowerblue',
'label': 'Uncorrected Noise'
},
{'x': cn_CM[:-1],
'y': hn_CM,
#'y_err': np.sqrt(hn_CM[:]),
'drawstyle': 'steps-mid',
'color': 'red',
#'ecolor': 'crimson',
'label': 'Common Mode Corrected Noise prior to Bad Pixels Exclusion'
},
{'x': cn_CM2[:-1],
'y': hn_CM2,
#'y_err': np.sqrt(hn_CM2[:]),
'drawstyle': 'steps-mid',
'color': 'black', #'cornflowerblue',
'label': 'Common Mode Corrected Noise after Bad Pixels Exclusion'
}]
fig = xana.simplePlot(dn2, figsize='2col', aspect = 1, x_label = 'Noise (ADU)', y_label="Counts", y_log=True,
x_range=(0, 40), y_range=(0, 1e6), legend='top-right-frame-1col', title = 'Final Noise Comparison')
#fig.savefig('Noise_Hist_2.svg', format='svg', dpi=1200, bbox_inches='tight')
fig = xana.heatmapPlot(np.log2(noiseMapCM_2nd[:,:,0]), aspect=1, x_label='Column Number', y_label='Row Number',
lut_label='Noise (ADU)', x_range=(0, y), y_range=(0, x), vmax=2*np.mean(noiseMapCM_2nd),
title = 'Final Common Mode Corrected Noise\n (Bad Pixels Excluded)',
panel_x_label='Columns Stat (ADU)', panel_y_label='Rows Stat (ADU)')
#fig.savefig('NoiseMapCM_2nd.pdf', format='pdf', dpi=400, bbox_inches='tight')
```
%% Cell type:markdown id: tags:
### Final Bad Pixel Map
Lastly, the final bad pixel map is generated based on the OffsetMap and the noiseMapCM_2nd (common mode corrected noise after exclusion of the initial bad pixels):
%% Cell type:code id: tags:
``` python
bad_pixels = np.zeros(offsetMap.shape, np.uint32)
mnoffset = np.nanmedian(offsetMap)
stdoffset = np.nanstd(offsetMap)
bad_pixels[(offsetMap < mnoffset-bad_pixel_offset_sigma*stdoffset) |
(offsetMap > mnoffset+bad_pixel_offset_sigma*stdoffset)] = BadPixels.OFFSET_OUT_OF_THRESHOLD.value
mnnoise = np.nanmedian(noiseMapCM_2nd)
stdnoise = np.nanstd(noiseMapCM_2nd)
bad_pixels[(noiseMapCM_2nd < mnnoise-bad_pixel_noise_sigma*stdnoise) |
(noiseMapCM_2nd > mnnoise+bad_pixel_noise_sigma*stdnoise)] = BadPixels.NOISE_OUT_OF_THRESHOLD.value
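# Combine with the previously determined bad pixel mask ("mask" from the earlier exclusion step).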
bad_pixels = np.bitwise_or(bad_pixels, mask)
fig = xana.heatmapPlot(np.log2(bad_pixels[:,:,0]),aspect=1, x_label='Column Number', y_label='Row Number',
lut_label='2^(Assigned Value to Bad Pixels)', x_range=(0, y), y_range=(0, x),
panel_top_low_lim = 0, panel_top_high_lim = 20, panel_side_low_lim = 0, panel_side_high_lim = 20,
title = 'Final Bad Pixels Map', panel_x_label='Columns Stat',
panel_y_label='Rows Stat', vmax=20)
#fig.savefig('BadPixelMap_2.svg', format='svg', dpi=1200, bbox_inches='tight')
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Statistics on the Bad Pixels'))
num_bad_pixels = np.count_nonzero(bad_pixels)
num_all_pixels = x*y
percentage_bad_pixels = num_bad_pixels*100/num_all_pixels
print("Number of bad pixels: {:0.0f}, i.e. {:0.2f}% of all pixels".format(num_bad_pixels, percentage_bad_pixels))
```
%% Cell type:markdown id: tags:
### Electronic Noise
According to Table 6.1 (page 80) of Ivana Klačková's master's thesis: "Conversion gain for the FastCCD high gain is: lower hemisphere = 6.2e-/ADU and upper hemisphere = 6.1e-/ADU." Also, we know that the high gain/medium gain and high gain/low gain ratios are 4 and 8, respectively, since high gain = x8, medium gain = x2 and low gain = x1. We do not currently (October 2019) know the conversion gains for the FastCCD medium and low gains in electrons. Therefore, we use those of the high gain (in both hemispheres) together with the gain ratios to convert the noise from ADU to electrons.
The following tables present the noise along the lower hemisphere, the upper hemisphere, and the entire FastCCD detector at different stages. The values in the first table (in ADU and e-) are the mean of the noise per pixel, where noise refers to the initial uncorrected noise, the CM corrected noise after the second trial (including bad pixels) and the CM corrected noise after the third trial (excluding bad pixels).
The values of the second table (in electrons) are the standard deviation of the noise per pixel.
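For reference, a minimal sketch of this conversion arithmetic is given below. It assumes that the hemisphere-specific `ADU_to_electron_*` factors used in the next cells are derived in an earlier (not shown) cell of this notebook from the high-gain values and the gain ratios; the sketch is illustrative only, not the notebook's actual cell.
``` python
# Sketch of the assumed conversion-gain arithmetic (illustrative, not the notebook's actual cell):
ADU_to_electron_upper_hg = 6.1   # e-/ADU, upper hemisphere, high gain (thesis value quoted above)
ADU_to_electron_lower_hg = 6.2   # e-/ADU, lower hemisphere, high gain (thesis value quoted above)
# One ADU corresponds to 4x (8x) more electrons at medium (low) gain,
# because high gain = x8, medium gain = x2 and low gain = x1:
ADU_to_electron_upper_mg = ADU_to_electron_upper_hg * 4   # medium gain
ADU_to_electron_lower_mg = ADU_to_electron_lower_hg * 4
ADU_to_electron_upper_lg = ADU_to_electron_upper_hg * 8   # low gain
ADU_to_electron_lower_lg = ADU_to_electron_lower_hg * 8
# A noise value in ADU then converts to electrons as noise_e = noise_ADU * ADU_to_electron.
```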
%% Cell type:code id: tags:
``` python
# noiseMap refers to the initial uncorrected noise, noiseMapCM refers to common mode corrected noise with inclusion of
# bad pixels, and noiseMapCM_2nd refers to common mode corrected noise without inclusion of bad pixels:
ADU_to_electron_hg = (ADU_to_electron_upper_hg + ADU_to_electron_lower_hg)/2 # Average of ADU_to_electron for entire CCD
# for high gain
ADU_to_electron_mg = (ADU_to_electron_upper_mg + ADU_to_electron_lower_mg)/2 # Average of ADU_to_electron for entire CCD
# for medium gain
ADU_to_electron_lg = (ADU_to_electron_upper_lg + ADU_to_electron_lower_lg)/2 # Average of ADU_to_electron for entire CCD
# for low gain
```
%% Cell type:code id: tags:
``` python
for gain, value in gain_dict.items():
if det_gain == gain_dict["low gain"]:
ADU_to_electron = ADU_to_electron_lg
ADU_to_electron_upper = ADU_to_electron_upper_lg
ADU_to_electron_lower = ADU_to_electron_lower_lg
elif det_gain == gain_dict["medium gain"]:
ADU_to_electron = ADU_to_electron_mg
ADU_to_electron_upper = ADU_to_electron_upper_mg
ADU_to_electron_lower = ADU_to_electron_lower_mg
else: # Here, we assume the auto gain and high gain conversions from ADU to electrons are the same.
ADU_to_electron = ADU_to_electron_hg
ADU_to_electron_upper = ADU_to_electron_upper_hg
ADU_to_electron_lower = ADU_to_electron_lower_hg
print("Abbreviations:")
print(" - ED = Entire Detector;\n - LH: Lower Hemisphere;\n - UH: Upper Hemisphere;")
print(" - CM Noise: Common Mode Corrected Noise;")
print(" - BP: Bad Pixels\n")
t0 = PrettyTable()
t0.title = "Averages of Noise per Pixel"
t0.field_names = ["Uncorrected Noise", "CM Noise, BP Incl.", "CM Noise, BP Excl."]
t0.add_row(["ED: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMap), np.mean(noiseMap)*ADU_to_electron),
"ED: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM), np.mean(noiseMapCM)*ADU_to_electron),
"ED: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM_2nd), np.mean(noiseMapCM_2nd)*ADU_to_electron)])
t0.add_row(["LH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMap[:x//2,:]),
np.mean(noiseMap[:x//2,:])*ADU_to_electron_lower),
"LH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM[:x//2,:]),
np.mean(noiseMapCM[:x//2,:])*ADU_to_electron_lower),
"LH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM_2nd[:x//2,:]),
np.mean(noiseMapCM_2nd[:x//2,:])*ADU_to_electron_lower)])
t0.add_row(["UH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMap[x//2:,:]),
np.mean(noiseMap[x//2:,:])*ADU_to_electron_upper),
"UH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM[x//2:,:]),
np.mean(noiseMapCM[x//2:,:])*ADU_to_electron_upper),
"UH: {:0.2f} ADU = {:0.2f} e-".format(np.mean(noiseMapCM_2nd[x//2:,:]),
np.mean(noiseMapCM_2nd[x//2:,:])*ADU_to_electron_upper)])
print(t0,'\n')
t1 = PrettyTable()
t1.title = "Standard Deviations of Noise per Pixel"
t1.field_names = ["Uncorrected Noise", "CM Noise, BP Incl.", "CM Noise, BP Excl."]
t1.add_row(["ED: {:0.2f} e-".format(np.std(noiseMap)*ADU_to_electron),
"ED: {:0.2f} e-".format(np.std(noiseMapCM)*ADU_to_electron),
"ED: {:0.2f} e-".format(np.std(noiseMapCM_2nd)*ADU_to_electron)])
t1.add_row(["LH: {:0.2f} e-".format(np.std(noiseMap[:x//2,:])*ADU_to_electron_lower),
"LH: {:0.2f} e-".format(np.std(noiseMapCM[:x//2,:])*ADU_to_electron_lower),
"LH: {:0.2f} e-".format(np.std(noiseMapCM_2nd[:x//2,:])*ADU_to_electron_lower)])
t1.add_row(["UH: {:0.2f} e-".format(np.std(noiseMap[x//2:,:])*ADU_to_electron_upper),
"UH: {:0.2f} e-".format(np.std(noiseMapCM[x//2:,:])*ADU_to_electron_upper),
"UH: {:0.2f} e-".format(np.std(noiseMapCM_2nd[x//2:,:])*ADU_to_electron_upper)])
print(t1)
```
%% Cell type:markdown id: tags:
### Calibration Constants
%% Cell type:code id: tags:
``` python
dictionary = {}
dictionary['Offset'] = offsetMap.data
dictionary['Noise'] = noiseMapCM_2nd.data
dictionary['BadPixelsDark'] = bad_pixels.data
md = None
for const in dictionary:
dconst = getattr(Constants.CCD(DetectorTypes.fastCCD), const)()
dconst.data = dictionary[const]
condition = Conditions.Dark.CCD(bias_voltage=bias_voltage,
integration_time=integration_time,
gain_setting=det_gain,
temperature=fix_temperature,
pixels_x=1934,
pixels_y=960)
for parm in condition.parameters:
if parm.name == "Sensor Temperature":
parm.lower_deviation = temp_limits
parm.upper_deviation = temp_limits
# This should be used when the notebook is run by a method
# other than myMDC, which already sends the CalCat info.
# TODO: Set db_module to "" by default in the first cell
if not db_module:
db_module = get_pdu_from_db(karabo_id, karabo_da, dconst,
condition, cal_db_interface,
snapshot_at=creation_time)[0]
if db_output:
md = send_to_db(db_module, karabo_id, dconst, condition, file_loc, report,
cal_db_interface, creation_time=creation_time, timeout=cal_db_timeout)
if local_output:
md = save_const_to_h5(db_module, karabo_id, dconst, condition, dconst.data,
file_loc, report, creation_time, out_folder)
print(f"Calibration constant {const} is stored locally.")
print("Constants parameter conditions are:\n")
print(f"• bias_voltage: {bias_voltage}\n• integration_time: {integration_time}\n"
f"• temperature: {fix_temperature}\n• gain_setting: {det_gain}\n"
f"• creation_time: {md.calibration_constant_version.begin_at if md is not None else creation_time}\n")
```
......
......@@ -45,7 +45,7 @@ elif not overwrite:
def combine_stack(d, sdim):
combined = np.zeros((sdim, 2048,2048))
combined[...] = np.nan
map_x = [1,0,0,1]
map_y = [1,1,0,0]
to_map = d
......@@ -97,7 +97,7 @@ saveFile.close()
# set everything up filewise
from queue import Queue
def map_modules_from_files(filelist):
module_files = {}
mod_ids = {}
......@@ -111,7 +111,7 @@ def map_modules_from_files(filelist):
for file in filelist:
if file_infix in file:
module_files[name].put(file)
return module_files, mod_ids
dirlist = os.listdir(in_folder)
......@@ -120,14 +120,14 @@ for entry in dirlist:
#only h5 file
abs_entry = "{}/{}".format(in_folder, entry)
if os.path.isfile(abs_entry) and os.path.splitext(abs_entry)[1] == ".h5":
if sequences is None:
file_list.append(abs_entry)
else:
for seq in sequences:
if "{:05d}.h5".format(seq) in abs_entry:
file_list.append(os.path.abspath(abs_entry))
mapped_files, mod_ids = map_modules_from_files(file_list)
import copy
......@@ -136,7 +136,7 @@ def correct_module(cells, inp):
import numpy as np
import copy
import h5py
def splitOffGainLPD(d):
msk = np.zeros(d.shape, np.uint16)
msk[...] = 0b0000111111111111
......@@ -145,9 +145,9 @@ def correct_module(cells, inp):
gain = np.bitwise_and(d, msk)//4096
gain[gain > 2] = 2
return data, gain
if True:
filename, filename_out, channel, offset, rel_gain = inp
infile = h5py.File(filename, "r", driver="core")
......@@ -176,34 +176,34 @@ def correct_module(cells, inp):
im, gain = splitOffGainLPD(im[:,0,...])
im = im.astype(np.float32)
im = np.rollaxis(im, 2)
im = np.rollaxis(im, 2, 1)
gain = np.rollaxis(gain, 2)
gain = np.rollaxis(gain, 2, 1)
om = offset
rc = rel_gain
for cc in range(im.shape[2]//cells):
tg = gain[...,cc*cells:(cc+1)*cells]
offset = np.choose(tg, (om[...,0], om[...,1], om[...,2]))
if rc is not None:
rel_cor = np.choose(tg, (rc[...,0], rc[...,1], rc[...,2]))
tim = im[...,cc*cells:(cc+1)*cells]
tim = tim - offset
if rc is not None:
if rc is not None:
tim /= rel_cor
im[...,cc*cells:(cc+1)*cells] = tim
outfile["INSTRUMENT/FXE_DET_LPD1M-1/DET/{}CH0:xtdf/image/data".format(channel)] = np.rollaxis(np.rollaxis(im,1), 2)
outfile["INSTRUMENT/FXE_DET_LPD1M-1/DET/{}CH0:xtdf/image/gain".format(channel)] = np.rollaxis(np.rollaxis(gain,1), 2)
outfile.close()
done = False
first_files = []
......@@ -228,7 +228,7 @@ while not done:
rel_gains[i][...,:max_cells,:] if do_rel_gain else None))
first = False
p = partial(correct_module, max_cells)
r = view.map_sync(p, inp)
done = all(dones)
......@@ -239,7 +239,7 @@ for i, ff in enumerate(first_files):
try:
rf, cf = ff
if rf is not None:
infile = h5py.File(rf, "r")
raw.append(np.array(infile["/INSTRUMENT/FXE_DET_LPD1M-1/DET/{}CH0:xtdf/image/data".format(i)][max_cells*3:4*max_cells,0,...]))
infile.close()
......@@ -250,7 +250,7 @@ for i, ff in enumerate(first_files):
infile.close()
else:
raise Exception("File not found")
except Exception as e:
corrected.append(np.zeros((max_cells, 256, 256)))
raw.append(np.zeros((max_cells, 256, 256)))
......
......@@ -7,28 +7,28 @@ from matplotlib import pylab as plt
def getModulePosition(metrologyFile, moduleId):
"""Position (in mm) of a module relative to the top left
"""Position (in mm) of a module relative to the top left
corner of its quadrant. In case of tile-level positions,
the position refers to the center of the top left
the position refers to the center of the top left
pixel.
Args
----
metrologyFile : str
Fully qualified path and filename of the metrology file
moduleId : str
Identifier of the module in question (e.g. 'Q1M2T03')
Returns
-------
ndarray:
ndarray:
(x, y)-Position of the module in its quadrant
Raises
------
ValueError: In case the moduleId contains invalid module
identifiers
"""
......@@ -38,11 +38,11 @@ def getModulePosition(metrologyFile, moduleId):
#
# QXMYTZZ
#
# where X, Y, and Z are digits. Q denotes the quadrant
# (X = 1, ..., 4), M the supermodule (Y = 1, ..., 4) and T
# where X, Y, and Z are digits. Q denotes the quadrant
# (X = 1, ..., 4), M the supermodule (Y = 1, ..., 4) and T
# the tile (Z = 1, ..., 16; with leading zeros).
modulePattern = re.compile(r'[QMT]\d+')
# Give the module identifier Q1M1T01, the moduleList splits this
# Give the module identifier Q1M1T01, the moduleList splits this
# into the associated quadrant, supermodule, and tile identifiers:
# >>> print(moduleList)
# ['Q1', 'M1', 'T01']
......@@ -53,7 +53,7 @@ def getModulePosition(metrologyFile, moduleId):
# >>> print(h5Keys)
# ['Q1', 'Q1/M1', 'Q1/M1/T01']
h5Keys = ['/'.join(moduleList[:idx+1]) for idx in range(len(moduleList))]
# Every module of the detector gives its position relative to
# the top left corner of its parent structure. Every position
# is stored in the positions array
......@@ -83,17 +83,17 @@ def getModulePosition(metrologyFile, moduleId):
def translateToModuleBL(tilePositions):
"""Tile coordinates within a supermodule with the
origin in the bottom left corner.
Parameters
----------
tilePositions : ndarray
Tile positions as retrieved from the LPD metrology
Tile positions as retrieved from the LPD metrology
file. Must have shape (16, 2)
Returns
-------
ndarray
Tile positions relative to the bottom left corner.
"""
......@@ -115,7 +115,7 @@ def translateToModuleBL(tilePositions):
# In the clockwise order of LPD tiles, the 8th
# tile in the list is the bottom left tile
bottomLeft8th = np.asarray([0., moduleCoords[8][1]])
# Translate coordinates to the bottom left corner
# Translate coordinates to the bottom left corner
# of the bottom left tile
bottomLeft = moduleCoords - bottomLeft8th
return bottomLeft
......@@ -124,44 +124,44 @@ def translateToModuleBL(tilePositions):
def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6000.):
"""Plots data of a supermodule with tile positions
determined by the metrology data.
Parameters
----------
tileData : ndarray
Supermodule image data separated in individual tiles.
Supermodule image data separated in individual tiles.
Must have shape (16, 32, 128).
metrologyPositions : ndarray
Tile positions as retrieved from the metrology file.
metrologyPositions : ndarray
Tile positions as retrieved from the metrology file.
Must have shape (16, 2)
zoom : float, optional
Can enlarge or decrease the size of the plot. Default = 1.
vmin, vmax : float, optional
Value range. Default vmin=100., vmax=6000.
Returns
-------
matplotlib.Figure
Figure object containing the supermodule image
Figure object containing the supermodule image
"""
# Data needs to have 16 tiles, each with
# 32x128 pixels
assert tileData.shape == (16, 32, 128)
# Conversion coefficient, required since
# matplotlib does its business in inches
mmToInch = 1./25.4 # inch/mm
# Some constants
numberOfTiles = 16
numberOfRows = 8
numberOfCols = 2
tileWidth = 65.7 # in mm
tileHeight = 17.7 # in mm
# Base width and height are given by spatial
# extend of the modules. The constants 3.4 and 1
# are estimated as a best guess for gaps between
......@@ -169,26 +169,26 @@ def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6
figureWidth = zoom * numberOfCols*(tileWidth + 3.4)*mmToInch
figureHeight = zoom * numberOfRows*(tileHeight + 1.)*mmToInch
fig = plt.figure(figsize=(figureWidth, figureHeight))
# The metrology file references module positions
# The metrology file references module positions
bottomRightCornerCoordinates = translateToModuleBL(metrologyPositions)
# The offset here accounts for the fact that there
# might be negative x,y values
offset = np.asarray(
[min(bottomRightCornerCoordinates[:, 0]),
[min(bottomRightCornerCoordinates[:, 0]),
min(bottomRightCornerCoordinates[:, 1])]
)
# Account for blank borders in the plot
borderLeft = 0.5 * mmToInch
borderBottom = 0.5 * mmToInch
# The height and width of the plot remain
# constant for a given supermodule
width = zoom * 65.7 * mmToInch / (figureWidth - 2.*borderLeft)
height = zoom * 17.7 * mmToInch / (figureHeight - 2.*borderBottom)
for i in range(numberOfTiles):
# This is the top left corner of the tile with
# respect to the top left corner of the supermodule
......@@ -200,38 +200,38 @@ def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6
ax = fig.add_axes((ax0, ay0, width, height), frameon=False)
# Do not display axes, tick markers or labels
ax.tick_params(
axis='both', left='off', top='off', right='off', bottom='off',
axis='both', left='off', top='off', right='off', bottom='off',
labelleft='off', labeltop='off', labelright='off', labelbottom='off'
)
# Plot the image. If one wanted to have a colorbar
# the img object would be needed to produce one
img = ax.imshow(
tileData[i],
interpolation='nearest',
tileData[i],
interpolation='nearest',
vmin=vmin, vmax=vmax
)
return fig
def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
"""Splits the raw channel data into indiviual tiles
Args
----
channelData : ndarray
Raw channel data. Must have shape (256, 256)
clockwiseOrder : bool, optional
If set to True, the sequence of tiles is given
in the clockwise order starting with the top
right tile (LPD standard). If set to false, tile
data is returned in reading order
Returns
-------
ndarray
Same data, but reshaped into (16, 32, 128)
"""
......@@ -240,8 +240,8 @@ def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
orderedTiles = tiles.reshape(16, 32, 128)
if clockwiseOrder:
# Naturally, the tile data after splitting is in reading
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# starting with the top right tile. The following array
# contains indices of tiles in reading order as they would
# be iterated in clockwise order (starting from the top right)
......@@ -253,22 +253,22 @@ def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
def splitChannelDataIntoTiles2(channelData, clockwiseOrder=False):
"""Splits the raw channel data into indiviual tiles
Args
----
channelData : ndarray
Raw channel data. Must have shape (256, 256, n)
clockwiseOrder : bool, optional
If set to True, the sequence of tiles is given
in the clockwise order starting with the top
right tile (LPD standard). If set to false, tile
data is returned in reading order
Returns
-------
ndarray
Same data, but reshaped into (16, 32, 128, n)
"""
......@@ -277,8 +277,8 @@ def splitChannelDataIntoTiles2(channelData, clockwiseOrder=False):
orderedTiles = np.moveaxis(tiles.reshape(16, 128, 32, channelData.shape[2]), 2, 1)
if clockwiseOrder:
# Naturally, the tile data after splitting is in reading
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# starting with the top right tile. The following array
# contains indices of tiles in reading order as they would
# be iterated in clockwise order (starting from the top right)
......@@ -294,7 +294,7 @@ def returnPositioned2(geometry_file, modules, dquads):
tile_order = [1, 2, 3, 4]
cells = 0
for sm, mn in modules:
position = np.asarray([getModulePosition(geometry_file,
'Q{}/M{:d}/T{:02d}'.format(
sm//4+1,
......@@ -357,7 +357,7 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all')
data = {}
for file in files:
ch = int(re.findall(r'.*-{}([0-9]+)-.*'.format(detector), file)[0])
try:
with h5py.File(file, 'r', driver='core') as f:
d = np.squeeze(f[datapath.format(ch)][()] if nImages == 'all' else f[datapath.format(ch)][:nImages,:,:])
......
%% Cell type:markdown id: tags:
# pnCCD Gain Characterization #
Authors: DET Group, modified by Kiana Setoodehnia in December 2020 - Version 4.0
The following notebook provides gain characterization for the pnCCD. It relies on data which have previously been corrected using the Meta Data Catalog web service interface or by running the Correct_pnCCD_NBC.ipynb notebook. The corrections which should already have been applied by the web service or the aforementioned notebook before running this notebook are as follows:
- offset correction
- common mode correction
- split pattern classification
%% Cell type:code id: tags:
``` python
cluster_profile = "noDB" # ipcluster profile to use
in_folder = "/gpfs/exfel/exp/SQS/202031/p900166/raw" # input folder for the raw data
out_folder = '/gpfs/exfel/data/scratch/setoodeh/Test' # output folder
run = 347 # which run to read data from
sequences = [-1] # sequences to correct, set to -1 for all, range allowed
db_module = "pnCCD_M205_M206"
karabo_da = 'PNCCD01' # data aggregators
karabo_da_control = "PNCCD02" # file inset for control data
karabo_id = "SQS_NQS_PNCCD1MP" # karabo prefix of PNCCD devices
receiver_id = "PNCCD_FMT-0" # inset for receiver devices
path_template = 'CORR-R{:04d}-PNCCD01-S{{:05d}}.h5' # the template to use to access data
path_template_ctrl = 'RAW-R{:04d}-{}-S{{:05d}}.h5' # the template to use to access data
path_template_seqs = "{}/r{:04d}/*PNCCD01-S*.h5"
h5path = '/INSTRUMENT/{}/CAL/{}:output/data/' # path to data in the HDF5 file
h5path_ctrl = '/CONTROL/{}/CTRL/TCTRL'
cpuCores = 40 # specifies the number of running cpu cores
use_dir_creation_date = True # this is needed to obtain creation time of the run
sequences_per_node = 1
chunkSize = 100 # number of images to read per chunk
run_parallel = True
db_output = False # if True, the notebook injects the calculated constants into the calibration database
local_output = True # if True, the notebook saves the calculated constants locally
cal_db_interface = "tcp://max-exfl016:8015" # calibration DB interface to use
cal_db_timeout = 300000 # timeout on caldb requests
creation_time = "" # To overwrite the measured creation_time. Required Format: YYYY-MM-DD HR:MN:SC.00 e.g. 2019-07-04 11:02:41.00
# pnCCD parameters:
fix_temperature_top = 0. # fix temperature of top pnCCD sensor in K, set to 0. to use the value from slow data
fix_temperature_bot = 0. # fix temperature of bottom pnCCD sensor in K, set to 0. to use the value from slow data
gain = 0.1 # the detector's gain setting. Set to 0, to use the value from slow data
bias_voltage = 0. # the detector's bias voltage. set to 0. to use the value from slow data.
integration_time = 70 # detector's integration time
photon_energy = 1.6 # Al fluorescence in keV
def balance_sequences(in_folder, run, sequences, sequences_per_node, karabo_da):
from xfel_calibrate.calibrate import balance_sequences as bs
return bs(in_folder, run, sequences, sequences_per_node, karabo_da)
```
%% Cell type:code id: tags:
``` python
import copy
import datetime
from datetime import timedelta
import glob
import os
import traceback
import warnings
warnings.filterwarnings('ignore')
from functools import partial
import h5py
import iminuit as im
from iminuit import Minuit
from IPython.display import display, Markdown
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid, AxesGrid
import numpy as np
from prettytable import PrettyTable
from cal_tools.pnccdlib import extract_slow_data, VALID_GAINS
from cal_tools.tools import (get_dir_creation_date, get_pdu_from_db,
get_report, save_const_to_h5, send_to_db)
from iCalibrationDB import (Conditions, ConstantMetaData,
Constants, Detectors, Versions)
from iCalibrationDB.detectors import DetectorTypes
import XFELDetAna.xfelprofiler as xprof
profiler = xprof.Profiler()
profiler.disable()
from XFELDetAna.util import env
env.iprofile = cluster_profile
from XFELDetAna import xfelpycaltools as xcal
from XFELDetAna import xfelpyanatools as xana
from XFELDetAna.plotting.util import prettyPlotting
prettyPlotting=True
if sequences[0] == -1:
sequences = None
```
%% Cell type:code id: tags:
``` python
# Calibration Database Settings, and Some Initial Run Parameters & Paths:
display(Markdown('### Initial Settings and Paths'))
pixels_x = 1024 # rows of pnCCD in pixels
pixels_y = 1024 # columns of pnCCD in pixels
print(f"pnCCD size is: {pixels_x}x{pixels_y} pixels.")
print(f'Calibration database interface to be used: {cal_db_interface}')
proposal = list(filter(None, in_folder.strip('/').split('/')))[-2]
file_loc =f'Proposal: {proposal}, Run: {run}'
print(file_loc)
report = get_report(out_folder)
print("report_path:", report)
# Paths to the data:
ped_dir = "{}/r{:04d}".format(out_folder, run)
ped_dir_ctrl = "{}/r{:04d}".format(in_folder, run)
fp_name = path_template.format(run, karabo_da)
fp_path = '{}/{}'.format(ped_dir, fp_name)
h5path = h5path.format(karabo_id, receiver_id)
print("HDF5 path to data: {}\n".format(h5path))
# Output Folder Creation:
os.makedirs(out_folder, exist_ok=True)
# Run's creation time:
if creation_time:
try:
creation_time = datetime.datetime.strptime(creation_time, '%Y-%m-%d %H:%M:%S.%f')
except Exception as e:
print(f"creation_time value error: {e}."
"Use same format as YYYY-MM-DD HR:MN:SC.ms e.g. 2019-07-04 11:02:41.00/n")
creation_time = None
print("Given creation time will not be used.")
else:
creation_time = None
if not creation_time and use_dir_creation_date:
creation_time = get_dir_creation_date(in_folder, run)
print(f"Creation time: {creation_time}")
# Reading all sequences of the run:
file_list = []
total_sequences = 0
fsequences = []
if sequences is None:
file_list = glob.glob(fp_path.format(0).replace('00000', '*'))
file_list = sorted(file_list, key = lambda x: (len (x), x))
total_sequences = len(file_list)
fsequences = range(total_sequences)
else:
for seq in sequences:
abs_entry = fp_path.format(seq)
if os.path.isfile(abs_entry):
file_list.append(abs_entry)
total_sequences += 1
fsequences.append(seq)
sequences = fsequences
print(f"This run has a total number of {total_sequences} sequences.\n")
```
%% Cell type:code id: tags:
``` python
display(Markdown('### List of Files to be Processed'))
print("Reading data from the following files:")
for i in range(len(file_list)):
print(file_list[i])
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Detector Parameters'))
# extract slow data
if karabo_da_control:
ctrl_fname = os.path.join(ped_dir_ctrl, path_template_ctrl.format(run, karabo_da_control)).format(sequences[0])
ctrl_path = h5path_ctrl.format(karabo_id)
mdl_ctrl_path = f"/CONTROL/{karabo_id}/MDL/"
(bias_voltage, gain,
fix_temperature_top,
fix_temperature_bot) = extract_slow_data(karabo_id, karabo_da_control, ctrl_fname, ctrl_path,
mdl_ctrl_path, bias_voltage, gain,
fix_temperature_top, fix_temperature_bot)
print(f"Bias voltage is {bias_voltage:0.2f} V.")
print(f"Detector gain is set to {int(gain)}.")
print(f"Top pnCCD sensor is at temperature of {fix_temperature_top:0.2f} K")
print(f"Bottom pnCCD sensor is at temperature of {fix_temperature_bot:0.2f} K")
print(f"Photon energy is {photon_energy} keV")
```
%% Cell type:code id: tags:
``` python
# The ADU values corresponding to the boundaries of the first peak region are used as cti_limit_low and
# cti_limit_high:
gain_k = [k for k, v in VALID_GAINS.items() if v == gain][0]
if gain_k == 'a':
cti_limit_low = 300 # lower limit of cti
cti_limit_high = 8200 # higher limit of cti
max_points = 100000 # maximum data value
elif gain_k == 'b':
cti_limit_low = 500
cti_limit_high = 2000
max_points = 100000
elif gain_k == 'c':
cti_limit_low = 100
cti_limit_high = 500
max_points = 100000
elif gain_k == 'd':
if bias_voltage <= 400.:
cti_limit_low = 50
cti_limit_high = 150
max_points = 20000 # ccoords.shape for each quadrant?
elif bias_voltage > 400.:
cti_limit_low = 50
cti_limit_high = 120
max_points = 100000
elif gain_k == 'e':
if bias_voltage <= 400.:
cti_limit_low = 10
cti_limit_high = 150
max_points = 8000
elif bias_voltage > 400.:
cti_limit_low = 13
cti_limit_high = 45
max_points = 100000
else:
cti_limit_low = 13
cti_limit_high = 200
max_points = 100000
```
%% Cell type:code id: tags:
``` python
# Sensor size and block size definitions (important for common mode and other corrections):
sensorSize = [pixels_x, pixels_y]
blockSize = [sensorSize[0]//2, sensorSize[1]//2] # Sensor area will be analysed according to blocksize
xcal.defaultBlockSize = blockSize
memoryCells = 1 # pnCCD has 1 memory cell
```
%% Cell type:markdown id: tags:
### Reading in already corrected data ###
In the following, we read in already corrected data and separate double events to track the ratios of split events. Additionally, data structures for CTI and relative gain determination are created.
%% Cell type:code id: tags:
``` python
# pattern kernels as defined in pyDetLib:
# possibly at pydetlib/lib/src/XFELDetAna/util/pattern_kernels.py
kernel = {}
kernel["203"] = np.array([[0, 1, 0],
[0, 1, 0],
[0, 0, 0]], dtype=np.float)[:,:,np.newaxis]
kernel["201"] = np.array([[0, 0, 0],
[0, 1, 1],
[0, 0, 0]], dtype=np.float)[:,:,np.newaxis]
kernel["202"] = np.array([[0, 0, 0],
[0, 1, 0 ],
[0, 1, 0]], dtype=np.float)[:,:,np.newaxis]
kernel["200"] = np.array([[0, 0, 0],
[1, 1, 0 ],
[0, 0, 0]], dtype=np.float)[:,:,np.newaxis]
def separateDouble(pdat, alldat, p):
"""
Just a helper function to separate doubles back out
:param pdat: the pattern indices
:param alldat: the corrected data
:param p: the pattern indice to work on
"""
cdat = np.zeros_like(alldat)
kstack = kernel[str(p)]
idx = pdat == p
p1 = alldat[idx]
p2 = None
if np.roll(kstack, 1, axis=0)[1,1] == 1:
idx = np.roll(pdat, -1, axis = 0) == p
p2 = alldat[idx]
if np.roll(kstack, -1, axis=0)[1,1] == 1:
idx = np.roll(pdat, 1, axis = 0) == p
p2 = alldat[idx]
if np.roll(kstack, 1, axis=1)[1,1] == 1:
idx = np.roll(pdat, -1, axis = 1) == p
p2 = alldat[idx]
if np.roll(kstack, -1, axis=1)[1,1] == 1:
idx = np.roll(pdat, 1, axis = 1) == p
p2 = alldat[idx]
r = np.zeros((p1.shape[0], 2), np.float)
r[:,0] = np.abs(p1)-np.abs(p2)
r[:,1] = np.abs(p2)
return r
```
%% Cell type:markdown id: tags:
The following construction of the 4 CCD quadrants is based on a right-handed coordinate system whose axes increase upwards (y), to the left (x) and into the page (z). We take the z-axis as the beam axis; therefore, the image is constructed as if you are looking along the beam (you see what the beam sees). The origin of the axes is at the bottom right corner, as shown below:
---------- ^ y
| LR | UR |
---------- |
| LL | UL |
x <----------0 (beam going into the page)
To divide the array of shape (1024, 1024) into 4 arrays of shape (512, 512), we follow Python's convention of axis = 0 being vertical (along the rows of the matrix) and axis = 1 being horizontal (along the columns of the matrix). Thus, we have:
Convention:
quadrant coordinates = (a, b), (c, d), where a and b are the boundaries of the quadrant along axis = 0, and c and d are those along axis = 1:
. UL coordinates = (0,512), (0,512)
. LL coordinates = (0,512), (512, 1024)
. UR coordinates = (512, 1024), (0,512)
. LR coordinates = (512,1024), (512,1024)
%% Cell type:code id: tags:
``` python
# actual data reading happens here:
# separated doubles will be held in these lists
up_doubles = [] # up major partner
down_doubles = [] # down major partner
left_doubles = [] # left major partner
right_doubles = [] # right major partner
# A struct in the C programming language is a composite data type declaration that defines a physically grouped
# list of variables under one name in a block of memory, allowing the different variables to be accessed via a
# single pointer or by the struct declared name which returns the same address.
# structs for CTI and gain evaluation:
# we work by quadrant:
dstats = {'upper left': {'coords': [0, pixels_x//2, 0, pixels_y//2], # coordinates of this quadrant
'row_coords': [], # will hold our data's row coordinates
'col_coords': [], # will hold our data's column coordinates
'col_dir': 1, # 1 if columns read out to left, -1 if read out to right
'vals': [], # will hold our data values
},
'lower left': {'coords':[0, pixels_x//2, pixels_y//2, pixels_y],
'row_coords': [],
'col_coords': [],
'col_dir': 1,
'vals': [],
},
'upper right': {'coords':[pixels_x//2, pixels_x, 0, pixels_y//2],
'row_coords': [],
'col_coords': [],
'col_dir': -1,
'vals': [],
},
'lower right': {'coords': [pixels_x//2, pixels_x, pixels_y//2, pixels_y],
'row_coords': [],
'col_coords': [],
'col_dir': -1,
'vals': [],
},
}
# structure that identifies and holds pattern statistics
pstats = {'singles': {'p': 100,},
'first singles': {'p': 101,},
'clusters': {'p': 1000,},
'left major doubles': {'p': 200,},
'right major doubles': {'p': 201,},
'up major doubles': {'p': 202,},
'down major doubles': {'p': 203,},
'left major triples': {'p': 300,},
'right major triples': {'p': 301,},
'up major triples': {'p': 302,},
'down major triples': {'p': 303,},
'left major quads': {'p': 400,},
'right major quads': {'p': 401,},
'up major quads': {'p': 402,},
'down major quads': {'p': 403,},
}
# total number of patterns
tpats = 0
for k, f in enumerate(file_list):
with h5py.File(f, 'r', driver='core') as infile:
data = infile[h5path+"/pixels_classified"][()]
patterns = infile[h5path+"/patterns"][()]
bpix = infile[h5path+"/mask"][()]
# separating out doubles
up_doubles.append(separateDouble(patterns, data, 202))
down_doubles.append(separateDouble(patterns, data, 203))
right_doubles.append(separateDouble(patterns, data, 201))
left_doubles.append(separateDouble(patterns, data, 200))
# creating pattern statistics
for pstr, entry in pstats.items():
cpat = patterns == entry["p"]
entry["counts"] = np.count_nonzero(cpat)
hist = entry.get("hist", 0)
hist += np.sum(cpat, axis=0)
entry["hist"] = hist
# for normalization of relative occurrences
tpats += np.count_nonzero((patterns > 0) & (patterns < 1000))
# creating coordinates needed for CTI and relative gain fitting
xv, yv = np.meshgrid(np.arange(data.shape[0]), np.arange(data.shape[1]))
xv = np.repeat(xv[...,None], data.shape[2], axis=2)
yv = np.repeat(yv[...,None], data.shape[2], axis=2)
# we take all single values
all_singles = np.zeros_like(data)
all_singles[(patterns == 100) | (patterns == 101)] = data[(patterns == 100) | (patterns == 101)]
#all_singles[bpix != 0] = np.nan
# and then save data and coordinates quadrant-wise
for key, entry in dstats.items():
co = entry["coords"]
cd = entry["col_dir"]
asd = all_singles[:, co[2]:co[3], co[0]:co[1]]
# taking column direction into account - right hemisphere reads to right, left hemisphere reads to left
if cd == 1:
yv, xv = np.meshgrid(np.arange(asd.shape[1]), np.arange(asd.shape[2])+co[2]) # row counting
# downwards
else:
yv, xv = np.meshgrid(np.arange(asd.shape[1], 0, -1), np.arange(asd.shape[2])+co[2]) # row counting
# downwards
xv = np.repeat(xv[None,...], data.shape[0], axis=0)
yv = np.repeat(yv[None,...], data.shape[0], axis=0)
entry['row_coords'].append(xv[asd > 0].flatten())
entry['col_coords'].append(yv[asd > 0].flatten())
entry['vals'].append(asd[asd > 0].flatten())
#break
up_doubles = np.concatenate(up_doubles, axis=0)
down_doubles = np.concatenate(down_doubles, axis=0)
left_doubles = np.concatenate(left_doubles, axis=0)
right_doubles = np.concatenate(right_doubles, axis=0)
for key, entry in dstats.items():
entry['row_coords'] = np.concatenate(entry['row_coords'])
entry['col_coords'] = np.concatenate(entry['col_coords'])
entry['vals'] = np.concatenate(entry['vals'])
```
%% Cell type:markdown id: tags:
### Pattern Statistics ###
Relative occurrences are normalized to non-cluster events. Note that the absolute counts are overwritten for each file in the loop above, so this table only reflects the statistics of the last processed sequence (not the entire run).
%% Cell type:code id: tags:
``` python
table = PrettyTable()
table.field_names = ["Pattern type", "Absolute Number", "Relative Occurence"]
for key, entry in pstats.items():
table.add_row([key, entry["counts"], "{:0.2f}".format(float(entry["counts"])/tpats)])
print(table)
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Event Patterns'))
fig = plt.figure(1, (15., 15.))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(5, 4), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
)
i = 0
for key, entry in pstats.items():
d = entry["hist"]
# The AxesGrid object works as a list of axes.
grid[i].imshow(d, vmin=0, vmax=2.*np.mean(d[d != 0]), cmap="gray_r")
grid[i].set_ylabel("Row")
grid[i].annotate(key, (20, -30), color="red", annotation_clip=False)
i += 1
if i == 3: # For the empty panel
i += 1
if i == 16: # I actually don't know how this is working and why only one panel has x-label
grid[i].set_xlabel("Column")
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.scatter(np.abs(up_doubles[:,0]), np.abs(up_doubles[:,1]), 0.5, alpha=0.5, color='b', label="up doubles")
ax.scatter(np.abs(down_doubles[:,1]), np.abs(down_doubles[:,0]), 0.5, alpha=0.5, color='g', label="down doubles")
ax.set_xlabel('Up Doubles')
ax.set_ylabel('Down Doubles')
ax.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.scatter(np.abs(left_doubles[:,0]), np.abs(left_doubles[:,1]), 0.5, alpha=0.5, color='b', label="left doubles")
ax.scatter(np.abs(right_doubles[:,1]), np.abs(right_doubles[:,0]), 0.5, alpha=0.5, color='g', label="right doubles")
ax.semilogx()
ax.semilogy()
ax.set_xlabel('Left Doubles')
ax.set_ylabel('Right Doubles')
ax.legend()
plt.show()
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Display of Signal along the Columns'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
rcoords = entry["row_coords"][:max_points]
ccoords = entry["col_coords"][:max_points]
avals = entry["vals"][:max_points]
idx = (avals > cti_limit_low) & (avals < cti_limit_high)
xax = ccoords[~idx]
yax = rcoords[~idx]
grid[i].scatter(xax, avals[~idx], 1, color="grey", alpha=0.2) # These are the data points which are outside the
# valid CTI limits
xax = ccoords[idx]
yax = rcoords[idx]
pvals = avals[idx]
unique_coords = np.unique(xax)
mean_signal = np.asarray([np.mean(pvals[xax==jdx]) for jdx in unique_coords])
cmap = matplotlib.cm.get_cmap('summer')
colors = cmap(yax/np.max(yax))
grid[i].scatter(xax, pvals, 1, color=colors)
grid[i].scatter(unique_coords, mean_signal, 3, color='tomato')
grid[i].set_ylabel("Signal Value (ADU)")
grid[i].set_xlabel("Column Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
if entry["col_dir"] == -1: # columns reading out to the right
grid[i].set_xlim(512, 0)
l = grid[i].set_ylim(0, 1.25*cti_limit_high)
ai += 1
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Display of Signal along the Rows'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ai = 0
for key, entry in dstats.items():
i = ais[ai]
rcoords = entry["row_coords"][:max_points]
ccoords = entry["col_coords"][:max_points]
avals = entry["vals"][:max_points]
co = entry["coords"]
yax = ccoords[~idx]
xax = rcoords[~idx]
grid[i].scatter(avals[~idx], xax, 1, color="grey", alpha=0.2) # These are the data points which are outside the
# valid CTI limits
yax = ccoords[idx]
xax = rcoords[idx]
pvals = avals[idx]
unique_coords = np.unique(xax)
mean_signal = np.asarray([np.mean(pvals[xax==jdx]) for jdx in unique_coords])
cmap = matplotlib.cm.get_cmap('summer')
colors = cmap(yax/np.max(yax))
grid[i].scatter(pvals, xax, 1, color=colors)
grid[i].scatter(mean_signal, unique_coords, 3, color='tomato')
grid[i].set_xlabel("Signal Value (ADU)")
grid[i].set_ylabel("Row Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
grid[i].set_ylim(co[3], co[2])
l = grid[i].set_xlim(0, 1.25*cti_limit_high)
ai += 1
```
%% Cell type:code id: tags:
``` python
# Fitting is performed in this cell for just a randomly selected sample row as an example:
def show_fit(row_example):
if row_example > 512 or row_example < 0:
print("Invalid row number. Try again.")
else:
max_points = -1
limit_cols = [0, 512]
for key, entry in dstats.items():
rcoords = entry["row_coords"][:max_points]
ccoords = entry["col_coords"][:max_points]
avals = entry["vals"][:max_points]
co = entry["coords"]
idx = (avals > cti_limit_low) & (avals < cti_limit_high) & (ccoords >= limit_cols[0]) & \
(ccoords <= limit_cols[1])
xax = rcoords[idx]-co[2]
yax = ccoords[idx] # columns
avalsf = avals[idx]
min_data = 5
ms = np.zeros(sensorSize[0]//2, np.float)
bs = np.zeros(sensorSize[0]//2, np.float)
mErr = np.zeros(sensorSize[0]//2, np.float)
bErr = np.zeros(sensorSize[0]//2, np.float)
axes = None
fig = None
fig, ax = plt.subplots(1, sharex=True, figsize=(10, 5))
def linFunc(x, *p):
return p[1] * x + p[0]
for i in [row_example]:
idx = (xax == i)
zm = avalsf[idx]
zx = yax[idx]
def linWrap(m, b):
return np.sum(((linFunc(zx, b, -m) - zm)) ** 2)
pparm = dict(throw_nan=False, pedantic=False, print_level=0)
pparm["m"] = 0
if gain == 1.0:
pparm["limit_m"] = (0.9, 1)
else:
pparm["limit_m"] = (0, 1)
pparm["b"] = np.nanmean(zm)
mini = Minuit(linWrap, **pparm)
fmin = mini.migrad()
if fmin[0]['is_valid'] and fmin[0]['has_covariance'] and (min_data is None or zx.size > min_data):
ferr = mini.minos(maxcall = 10000)
res = mini.values
ms[i] = res["m"]
bs[i] = res["b"]
mErr[i] = max(-ferr['m']['lower'], ferr['m']['upper'])
bErr[i] = max(-ferr['b']['lower'], ferr['b']['upper'])
zx, uidx = np.unique(zx, return_index=True)
zm = zm[uidx]
yt = linFunc(zx, res["b"], res["m"])
ax.scatter(zx, zm, 3) # point size = 3
ax.plot(zx, -ms[i] * zx + bs[i], color='r',
label = 'CTI fit for row {} in {} quadrant'.format(row_example, key))
ax.annotate('valid: {}'.format(fmin[0]['is_valid']),
xy=(0.2, 0.3), xycoords='axes fraction')
ax.set_xlabel("Column Coordinate")
ax.set_ylabel("Singles Events (between CTI_low and CTI_high)")
ax.set_title("Events Are Only Shown for One Row")
ax.set_xlim(0, 512)
ax.legend()
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Example of CTI Fit for a Specific Row'))
fig = show_fit(365)
```
%% Cell type:code id: tags:
``` python
# Fitting is performed in this cell for the entire detector to get the CTI and relative gain:
max_points = -1
limit_cols = [0, 512]
counter = 0 # Total number of bad rows (over all quadrants) for which the fit does not converge
for key, entry in dstats.items():
rcoords = entry["row_coords"][:max_points]
ccoords = entry["col_coords"][:max_points]
avals = entry["vals"][:max_points]
co = entry["coords"]
idx = (avals > cti_limit_low) & (avals < cti_limit_high) & (ccoords >= limit_cols[0]) & \
(ccoords <= limit_cols[1])
xax = rcoords[idx]-co[2]
yax = ccoords[idx]
avalsf = avals[idx]
min_data = 5
ms = np.zeros(sensorSize[0]//2, np.float)
bs = np.zeros(sensorSize[0]//2, np.float)
mErr = np.zeros(sensorSize[0]//2, np.float)
bErr = np.zeros(sensorSize[0]//2, np.float)
def linFunc(x, *p):
return p[1] * x + p[0]
for i in range(sensorSize[0]//2):
idx = (xax == i)
zm = avalsf[idx]
zx = yax[idx]
def linWrap(m, b):
return np.sum(((linFunc(zx, b, -m) - zm)) ** 2)
pparm = dict(throw_nan=False, pedantic=False, print_level=0)
pparm["m"] = 0
if gain == 1.0:
pparm["limit_m"] = (0.9, 1)
else:
pparm["limit_m"] = (0, 1)
pparm["b"] = np.nanmean(zm)
mini = Minuit(linWrap, **pparm)
fmin = mini.migrad()
if fmin[0]['is_valid'] and fmin[0]['has_covariance'] and (min_data is None or zx.size > min_data):
ferr = mini.minos(maxcall = 10000)
res = mini.values
ms[i] = res["m"]
bs[i] = res["b"]
mErr[i] = max(-ferr['m']['lower'], ferr['m']['upper'])
bErr[i] = max(-ferr['b']['lower'], ferr['b']['upper'])
zx, uidx = np.unique(zx, return_index=True)
zm = zm[uidx]
yt = linFunc(zx, res["b"], res["m"])
else:
counter += 1
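# CTI and relative gain per row, with Gaussian error propagation:
# cti = m / b  =>  sigma_cti = sqrt((sigma_m / b)**2 + (m * sigma_b / b**2)**2)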
cti = ms / bs
ctiErr = np.sqrt((1. / bs * mErr) ** 2 + (ms / bs ** 2 * bErr) ** 2)
relGain = bs/np.nanmean(bs)
relGainErr = np.sqrt((1. / np.nanmean(bs) * bErr) ** 2 + (bs / np.nanmean(bs) ** 2 * np.std(bs)) ** 2)
entry['cti'] = cti
entry['cti_err'] = ctiErr
entry['rel_gain'] = relGain
entry['rel_gain_err'] = relGainErr
print("Fitting is performed for the entire detector, and CTI and relative gain are calculated.")
print(f"Total number of fits which did not converge: {counter}")
```
%% Cell type:code id: tags:
``` python
display(Markdown('### CTI per Quadrant'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
co = entry["coords"]
cti = entry["cti"]
x = np.arange(cti.size)+co[2]
grid[i].scatter(cti, x, 1., color='k')
if gain == 1.0:
grid[i].set_xlim(1e-5, 1e-2)
else:
grid[i].set_xlim(1e-8, 1e-2)
grid[i].semilogx()
grid[i].set_xlabel("CTI")
grid[i].set_ylabel("Row Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
grid[i].set_ylim(co[3], co[2])
ai += 1
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Uncertainty in CTI per Quadrant'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
co = entry["coords"]
cti_err = entry["cti_err"]
x = np.arange(cti_err.size)+co[2]
grid[i].scatter(cti_err, x, 1., color='k')
if gain == 1.0:
grid[i].set_xlim(1e-12, 1e-8)
else:
grid[i].set_xlim(1e-10, 1e-5)
grid[i].semilogx()
grid[i].set_xlabel("Uncertainty in CTI")
grid[i].set_ylabel("Row Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
grid[i].set_ylim(co[3], co[2])
ai += 1
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Relative Gain per Quadrant'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
co = entry["coords"]
rgain = entry["rel_gain"]
x = np.arange(rgain.size)+co[2]
grid[i].scatter(rgain, x, 1., color='k')
grid[i].set_xlim(0.5, 1.5)
grid[i].set_xlabel("Relative Gain")
grid[i].set_ylabel("Row Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
grid[i].set_ylim(co[3], co[2])
ai += 1
```
%% Cell type:code id: tags:
``` python
display(Markdown('### Uncertainty in Relative Gain per Quadrant'))
fig = plt.figure(figsize=(15., 15.))
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(2, 2), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
aspect=False,
)
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
co = entry["coords"]
rgain_err = entry["rel_gain_err"]
x = np.arange(rgain_err.size)+co[2]
grid[i].scatter(rgain_err, x, 1., color='k')
grid[i].set_xlim(0, 0.2)
grid[i].set_xlabel("Relative Gain Uncertainty")
grid[i].set_ylabel("Row Coordinate")
grid[i].annotate(key, (0.1, 1.02), xycoords="axes fraction", color="red", annotation_clip=False)
grid[i].set_ylim(co[3], co[2])
ai += 1
```
%% Cell type:code id: tags:
``` python
# The sum of the number of bad rows over all quadrants should be equal to the value of "counter"
didnt_converge = [] # list of rows in each quadrant for which the fit did not converge. Here, we assume the
# row numbering in each quadrant always starts from 0 and ends at 512
print("Rows for which fits did not converge:\n")
for key, entry in dstats.items():
rel_gain = entry["rel_gain"]
print("{}:".format(key))
for index, element in enumerate(rel_gain):
if element == 0:
didnt_converge.append(index)
print("Total number: {}".format(len(didnt_converge)))
print("These rows are as follows:")
print(didnt_converge)
print("\n")
didnt_converge = []
```
%% Cell type:markdown id: tags:
For those rows where the fit did not converge, we will set their gain to 1 and their CTI to the average value of the CTI in that quadrant.
%% Cell type:code id: tags:
``` python
ais = [0, 2, 1, 3]
ai = 0
for key, entry in dstats.items():
i = ais[ai]
cti = entry["cti"]
idx = np.where(np.isnan(cti))
for i in idx[0]:
cti[i] = np.nanmean(cti)
rel_gain = entry["rel_gain"]
for index, element in enumerate(rel_gain):
if element == 0:
rel_gain[index] = 1
```
%% Cell type:code id: tags:
``` python
# Calculating CTE and gain maps:
ctemap = np.zeros((sensorSize[0], sensorSize[1]))
ctimap = np.zeros((sensorSize[0], sensorSize[1]))
gainmap = np.zeros((sensorSize[0], sensorSize[1]))
productmap = np.zeros((sensorSize[0], sensorSize[1]))
for key, entry in dstats.items():
co = entry["coords"]
rel_gain = entry["rel_gain"]
cti = entry["cti"]
if entry["col_dir"] == 1:
# plus sign is so that the charge transfer inefficiency is 100% at the center of the CCD, which is the
# farthest distance away from the readout. If using a minus sign, you can set CTE = 100% at the readout
# => most efficient on the readout and less efficient towards the center of the chip:
quadcte = (1-cti[:, None])**np.repeat(np.arange(512)[None, :], 512, axis=0)
else:
quadcte = (1-cti[:, None])**np.repeat(np.arange(512, 0, -1)[None, :], 512, axis=0)
#quadcti = cti[:, None]
quadgain = rel_gain[:, None]
quadproduct = quadgain*quadcte
ctemap[co[2]:co[3], co[0]:co[1]] = quadcte
gainmap[co[2]:co[3], co[0]:co[1]] = quadgain
productmap[co[2]:co[3], co[0]:co[1]] = quadproduct
```
%% Cell type:code id: tags:
``` python
display(Markdown('### CTE and Relative Gain Maps'))
fig = xana.heatmapPlot(ctemap, figsize=(8, 8), x_label='Columns', y_label='Rows',
lut_label='Charge Transfer Efficiency',
aspect=1, cmap = 'viridis',x_range=(0, pixels_y), y_range=(0, pixels_x),
panel_x_label='Along Rows', panel_y_label='Along Columns',
title = 'CTE Map for pnCCD (Gain = 1/{})'.format(int(gain)))
fig = xana.heatmapPlot(gainmap, figsize=(8, 8), x_label='Columns', y_label='Rows',
lut_label='Relative Gain', vmin=0.8, vmax=1.2,
aspect=1, x_range=(0, pixels_y), y_range=(0, pixels_x),
panel_top_low_lim = 0.9, panel_top_high_lim = 1.1,
panel_side_low_lim = 0.9, panel_side_high_lim = 1.1,
panel_x_label='Along Rows', panel_y_label='Along Columns',
title = 'Relative Gain Map for pnCCD (Gain = 1/{})'.format(int(gain)))
fig = xana.heatmapPlot(productmap, figsize=(8, 8), x_label='Columns', y_label='Rows',
lut_label='Relative Gain*CTE',
aspect=1, x_range=(0, pixels_y), y_range=(0, pixels_x), vmin=0.8, vmax=1.2,
panel_x_label='Along Rows', panel_y_label='Along Columns',
panel_top_low_lim = 0.9, panel_top_high_lim = 1.1, panel_side_low_lim = 0,
panel_side_high_lim = 1.5, title = 'Relative Gain*CTE Map for pnCCD (Gain = 1/{})'
.format(int(gain)))
```
%% Cell type:markdown id: tags:
### Sending the CTE and Relative Gain Maps to the Calibration Database
%% Cell type:code id: tags:
``` python
constant_maps = {
'RelativeGain' : gainmap,
'CTE' : ctemap}
md = None
for cname in constant_maps.keys():
metadata = ConstantMetaData()
det = Constants.CCD(DetectorTypes.pnCCD)
const = getattr(det, cname)()
const.data = constant_maps[cname].data
metadata.calibration_constant = const
# setting the operating condition
condition = Conditions.Illuminated.CCD(bias_voltage=bias_voltage,
integration_time=integration_time,
photon_energy=photon_energy,
gain_setting=gain,
temperature=fix_temperature_top,
pixels_x=pixels_x,
pixels_y=pixels_y)
# This should be used when the notebook is run by a method
# other than myMDC, which already sends the CalCat info.
# TODO: Set db_module to "" by default in the first cell
if not db_module:
db_module = get_pdu_from_db(karabo_id, karabo_da, const,
condition, cal_db_interface,
snapshot_at=creation_time)[0]
if db_output:
try:
md = send_to_db(db_module, karabo_da, const, condition,
file_loc, report,
cal_db_interface,
creation_time=creation_time,
timeout=cal_db_timeout)
md = send_to_db(db_module, karabo_id, const, condition,
file_loc, report,
cal_db_interface,
creation_time=creation_time,
timeout=cal_db_timeout)
if local_output:
md = save_const_to_h5(db_module, karabo_id,
const, condition, const.data,
file_loc, report,
creation_time, out_folder)
print(f"Calibration constant {cname} is stored to {out_folder}.")
print("\nGenerated constants with conditions:\n")
print(f"• bias_voltage: {bias_voltage}\n• photon_energy: {photon_energy}\n"
f"• top_temperature: {fix_temperature_top}\ntop_temperature: {integration_time}\n"
f"• top_temperature: {fix_temperature_top}\nintegration_time: {integration_time}\n"
f"• gain_setting: {gain}\n• creation_time: {creation_time}\n")
```
%% Cell type:code id: tags:
``` python
```
......
......@@ -331,4 +331,3 @@ class Frms6Reader(object):
numberOfFrames = int(numberOfFrames)
return (frameWidth, frameHeight, numberOfFrames)
......@@ -4,7 +4,7 @@ Offline Calibration Reportservice
The Reportservice is a service responsible for handling requests (manual or automatic triggers)
for generating the DetectorCharacterization reports based on the requested configurations.
The Reportservice mainly consists of a service, clients and YAML configuration.
The Reportservice mainly consists of a service, clients and YAML configuration.
The service keeps listening for ZMQ requests containing the given configurations.
Then based on these configurations, it produces slurm jobs (through xfel-calibrate command line) to generate *.png plots of calibration configurations over time.
......@@ -39,7 +39,7 @@ and it should generate a very generalized DC report for the available detectors
*local* is the mode used for generating figures locally without uploading the DC report to RTD or pushing figures
to the git repository; instead, generated figures are copied to the local repository and, depending on the
given report-fmt (report format) argument, an html or a pdf is generated in doc/_build/
given report-fmt (report format) argument, an html or a pdf is generated in doc/_build/
of the report service out folder (repo-local).
*sim* is a simulation mode, which is mostly used for debugging purposes and tool development without generating any reports locally or over RTD.
......@@ -116,9 +116,9 @@ Automatic Launch:
Manual Launch:
This manual launch script is currently used for debugging purposes, only.
The available command line arguments are:
* --config-file: The path for the configuration file
* --instrument: A selected list of instruments to generate a report for. These instruments must be in report_conf.yaml. The default for this argument is ['all']
* --overwrite-conf: A bool indicating that a new report configuration file (conf-file) should be sent instead of the default report_conf.yaml,
......
......@@ -53,13 +53,13 @@ async def auto_run(cfg, timeout=3000):
tidx = tidx + 1 if tidx != len(run_time)-1 else 0
# check every 10mins, if there is
# check every 10mins, if there is
# a need for an automatic-run.
await asyncio.sleep(3000)
arg_parser = argparse.ArgumentParser(description='Automatic Launch')
arg_parser.add_argument('--config-file', type=str,
arg_parser.add_argument('--config-file', type=str,
default='./report_conf.yaml',
help='config file path with reportservice port. '
'Default=./report_conf.yaml')
......
......@@ -60,7 +60,7 @@ arg_parser.add_argument('--report-fmt', default='html',
'Note: THIS HAS NO EFFECT IN PROD AND SIM MODES!')
arg_parser.add_argument('--log-file', type=str, default='./report.log',
help='The report log file path. Default=./report.log')
arg_parser.add_argument('--logging', type=str, default="INFO",
arg_parser.add_argument('--logging', type=str, default="INFO",
help='logging modes: INFO, DEBUG or ERROR. '
'Default=INFO',
choices=['INFO', 'DEBUG', 'ERROR'])
......
class Errors:
REQUEST_MALFORMED = "FAILED: request {} is malformed, please contact det-support@xfel.eu"
INSTRUMENT_NOT_FOUND = "FAILED: Instrument {} is not known! Please contact det-support@xfel.eu"
......@@ -638,5 +638,3 @@ HED:
out-folder: "/gpfs/exfel/data/scratch/xcal/report_service/tmp/{instrument}/{detector}/"
cal-db-timeout: 180000
cal-db-interface: "tcp://max-exfl016:8015#8025"