# This file contains info on which commits to ignore for git blame to work
# correctly; you can use either of these to see the 'correct' blame results:
#
# - `git blame file.py --ignore-revs-file .git-blame-ignore-revs`
# - `git config blame.ignoreRevsFile .git-blame-ignore-revs`
#
# The second option is a bit better as it'll work on the whole repo all the time
# fix/pre-commit-whitespace - Whitespace fixes
e7dfadaf4e189ef0e0f67798e8984695111257e3
isort:
stage: test
stages:
- check
- test
checks:
stage: check
only: [merge_requests]
allow_failure: true
script:
- python3 -m pip install --user isort==5.6.4
- isort --diff **/*.py && isort -c **/*.py
- export PATH=/home/gitlab-runner/.local/bin:$PATH
# We'd like to run the pre-commit hooks only on files that are being
# modified by this merge request, however
# `CI_MERGE_REQUEST_TARGET_BRANCH_SHA` is a 'premium' feature according to
# GitLab... so this is a workaround for extracting the hash
- export CI_MERGE_REQUEST_TARGET_BRANCH_SHA=$(git ls-remote origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME | cut -d$'\t' -f1)
- export FILES=$(git diff $CI_COMMIT_SHA $CI_MERGE_REQUEST_TARGET_BRANCH_SHA --name-only | tr '\n' ' ')
- python3 -m pip install --user -r requirements.txt
- echo "Running pre-commit on diff from $CI_COMMIT_SHA to $CI_MERGE_REQUEST_TARGET_BRANCH_SHA ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME)"
# Pass list of modified files to pre-commit so that it only checks them
- echo $FILES | xargs pre-commit run --color=always --files
pytest:
stage: test
only: [merge_requests]
script:
- python3 -m pip install --user -r requirements.txt
- python3 -m pip install --user pytest
- python3 -m pip install --user 'pytest>=5.4.0' pytest-asyncio testpath
- pytest -vv tests/test_*
<!--- Provide a general summary of your changes in the Title above.
Indicate detector & algorithm if applicable, e.g. [AGIPD] [DARK] add plotting -->
## Description
<!--- Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here. -->
## How Has This Been Tested?
<!--- Please describe in detail how you tested your changes.
Include details of your testing environment, tests ran to see how
your change affects other areas of the code, etc. -->
## Relevant Documents (optional)
<!-- Include any relevant screenshot, elogs, reports, if appropriate. -->
## Types of changes
<!--- What types of changes does your code introduce? Uncomment all lines that apply: -->
<!-- - Bug fix (non-breaking change which fixes an issue) -->
<!-- - New feature (non-breaking change which adds functionality) -->
<!-- - Breaking change (fix or feature that would cause existing functionality to not work as expected) -->
## Checklist:
<!--- Go over all the following points, and uncomment all lines that apply: -->
<!-- - My code follows the code style of this project. -->
<!-- - My change requires a change to the documentation. -->
<!-- - I have updated the documentation accordingly. -->
## Reviewers
<!--- Tag a minimum of two reviewers -->
repos:
- repo: meta
hooks:
- id: identity
- repo: https://github.com/nbQA-dev/nbQA
rev: 0.3.6
hooks:
- id: nbqa-isort
additional_dependencies: [isort==5.6.4]
args: [--nbqa-mutate]
- id: nbqa-flake8
additional_dependencies: [flake8==3.8.4]
args: [--nbqa-mutate]
- repo: https://github.com/kynan/nbstripout
rev: 0.3.9
hooks:
- id: nbstripout
- repo: https://github.com/pycqa/isort
rev: 5.6.4
hooks:
- id: isort
- repo: https://gitlab.com/pycqa/flake8
rev: 3.8.4
hooks:
- id: flake8
# If `CI_MERGE_REQUEST_TARGET_BRANCH_SHA` env var is set then this will
# run flake8 on the diff from the current commit to the latest commit of
# the branch being merged into, otherwise it will run flake8 as it would
# usually execute via the pre-commit hook
entry: bash -c 'if [ -z ${CI_MERGE_REQUEST_TARGET_BRANCH_SHA} ]; then (flake8 "$@"); else (git diff $CI_MERGE_REQUEST_TARGET_BRANCH_SHA | flake8 --diff); fi' --
- repo: https://github.com/myint/rstcheck
rev: 3f92957478422df87bd730abde66f089cc1ee19b # commit where pre-commit support was added
hooks:
- id: rstcheck
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-added-large-files
- id: check-ast
- id: check-json
- id: check-yaml
- id: check-toml
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-docstring-first
- id: check-merge-conflict
- id: mixed-line-ending
args: [--fix=lf]
###################
Offline Calibration
===================
###################
The offline calibration package consists of different services
responsible for applying most of the offline calibration and characterization
for the detectors.
Offline calibration installation
================================
.. contents::
It's recommended to install the offline calibration (pycalibration) package
on Maxwell, using the anaconda/3 environment.
Installation using Anaconda
---------------------------
Offline Calibration Installation
********************************
First you need to load the anaconda/3 environment through::
It's recommended to install the offline calibration (pycalibration) package on
Maxwell, using the anaconda/3 environment.
1. module load anaconda/3
If installing into other python environments, this step can be skipped.
Installation using python virtual environment - recommended
===========================================================
Then the package for the offline calibration can be obtained from the git repository::
1. ``git clone ssh://git@git.xfel.eu:10022/detectors/pycalibration.git && cd pycalibration`` - clone the offline calibration package from EuXFEL GitLab
2. ``module load anaconda/3`` - load the anaconda/3 environment
3. ``python3 -m venv .venv`` - create the virtual environment
4. ``source .venv/bin/activate`` - activate the virtual environment
5. ``python3 -m pip install --upgrade pip`` - upgrade version of pip
6. ``python3 -m pip install -r requirements.txt`` - install dependencies
7. ``python3 -m pip install .`` - install the pycalibration package (add ``-e`` flag for editable development installation)
8. ``pip install "git+ssh://git@git.xfel.eu:10022/karaboDevices/pyDetLib.git#egg=XFELDetectorAnalysis&subdirectory=lib"``
2. git clone https://git.xfel.eu/gitlab/detectors/pycalibration.git
Copy/paste script:
.. code::
You can then install all requirements of this tool chain in your home directory by running::
git clone ssh://git@git.xfel.eu:10022/detectors/pycalibration.git
cd pycalibration
module load anaconda/3
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
python3 -m pip install . # `-e` flag for editable install
python3 -m pip install "git+ssh://git@git.xfel.eu:10022/karaboDevices/pyDetLib.git#egg=XFELDetectorAnalysis&subdirectory=lib/"
3. pip install -r requirements.txt . --user
in pycalibration's root directory.
Installation into user home directory
=====================================
After installation, you should make sure that the home directory is in the PATH environment variable::
1. ``git clone ssh://git@git.xfel.eu:10022/detectors/pycalibration.git && cd pycalibration`` - clone the offline calibration package from EuXFEL GitLab
2. ``module load anaconda/3`` - load the anaconda/3 environment. If installing into other python environments, this step can be skipped
3. ``pip install -r requirements.txt`` - install all requirements of this tool chain in your home directory
4. ``pip install .`` - install the pycalibration package (add ``-e`` flag for editable development installation)
5. ``export PATH=$HOME/.local/bin:$PATH`` - make sure that the home directory is in the PATH environment variable
4. export PATH=$HOME/.local/bin:$PATH
Copy/paste script:
Installation using virtual python environment
---------------------------------------------
.. code::
Create virtual environment::
git clone ssh://git@git.xfel.eu:10022/detectors/pycalibration.git
cd pycalibration
module load anaconda/3
pip install -r requirements.txt --user
pip install . # `-e` flag for editable install, e.g. `pip install -e .`
export PATH=$HOME/.local/bin:$PATH
module load anaconda/3
python -m venv /path/to/new/virtual/environment
source /path/to/new/virtual/environment/bin/activate
Clone from git::
Creating an ipython kernel for virtual environments
===================================================
cd /path/to/packages
git clone https://git.xfel.eu/gitlab/detectors/pycalibration.git
cd pycalibration
To create an ipython kernel with pycalibration available you should (if using a
venv) activate the virtual environment first, and then run:
Install the package::
.. code::
pip install -r requirements.txt
python3 -m pip install ipykernel # If not using a venv add `--user` flag
python3 -m ipykernel install --user --name pycalibration --display-name "pycalibration" # If not using a venv pick different name
In addition, install the pyDetLib package, which is required for many notebooks::
This can be useful for Jupyter notebook tools such as https://max-jhub.desy.de/hub/login
cd /path/to/packages
git clone https://git.xfel.eu/gitlab/karaboDevices/pyDetLib.git
cd pyDetLib/lib
pip install -r requirements.txt
pip install .
++++++++++++++++++++++++++++++++++++++++++++++++++
Setting an ipython kernel for virtual environments
++++++++++++++++++++++++++++++++++++++++++++++++++
Contributing
************
To set a kernel for your virtual environment::
Guidelines
==========
source /path/to/new/virtual/environment/bin/activate
pip install ipykernel
python -m ipykernel install --user --name <virtenv-name> --display-name "virtenv-display-name"
Development guidelines can be found on the GitLab Wiki page here: https://git.xfel.eu/gitlab/detectors/pycalibration/wikis/GitLab-Guidelines
This can be useful for Jupyter notebook tools such as "max-jhub.desy.de".
Basics
======
Development Installation
The installation instructions above assume that you have set up SSH keys for use
with GitLab to allow for passwordless clones; this way it's possible to run
``pip install git+ssh...`` commands and install packages directly from GitLab.
To set this up, check the settings page here: https://git.xfel.eu/gitlab/profile/keys
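As a quick check that the SSH setup works (a minimal sketch; the key type and
comment are illustrative, while the host and port are the ones used in the
clone URLs above):

.. code::

    # generate a key pair if you do not have one yet
    ssh-keygen -t ed25519 -C "<your-account>"
    # after adding ~/.ssh/id_ed25519.pub on the GitLab settings page, test that
    # authentication works for the host and port used in the clone URLs
    ssh -T -p 10022 git@git.xfel.eu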
Pre-Commit Hooks
================
This repository uses pre-commit hooks to automatically run some code quality and
standard checks; these include the following:
a. ``identity`` - The 'identity' meta hook prints off a list of files that the hooks will execute on
b. 'Standard' file checks
1. ``check-added-large-files`` - Ensures no large files are committed to repo
2. ``check-ast`` - Checks that the python AST is parseable
3. ``check-json`` - Checks that JSON files are parseable
4. ``check-yaml`` - Checks that YAML files are parseable
5. ``check-toml`` - Checks that TOML files are parseable
6. ``rstcheck`` - Checks that RST files are parseable
7. ``end-of-file-fixer`` - Fixes EoF to be consistent
8. ``trailing-whitespace`` - Removes trailing whitespaces from lines
9. ``check-merge-conflict`` - Checks no merge conflicts remain in the commit
10. ``mixed-line-ending`` - Fixes mixed line endings
c. Code checks
1. ``flake8`` - Code style checks
2. ``isort`` - Sorts imports in python files
3. ``check-docstring-first`` - Ensures docstrings are in the correct place
d. Notebook checks
1. ``nbqa-flake8`` - Runs flake8 on notebook cells
2. ``nbqa-isort`` - Runs isort on notebook cells
3. ``nbstripout`` - Strips output from ipynb files
To install these checks, set up your environment as mentioned above and then run
the command:
.. code::
pre-commit install-hooks
This will set up the hooks in git locally, so that each time you run the command
``git commit`` the hooks get executed on the **staged files only**. Beware that
if the pre-commit hooks find required changes, some of them will **modify your
files**; however, they only modify the current working files, not the ones you
have already staged. This means that you can look at the diff between your
staged files and the modified working files to see what changes are suggested.
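For example, a minimal sketch of that workflow (the commit message is
arbitrary):

.. code::

    git commit -m "My change"   # hooks run on the staged files and may modify working files
    git diff                    # compare the hook-modified working files against what is staged
    git add -u                  # accept the suggested fixes
    git commit -m "My change"   # commit again with the fixed files staged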
Run Checks Only On Diffs
------------------------
For a development installation, which automatically
picks up (most) changes, first install the dependencies as above,
but then install the tool-chain separately in development mode (install in the home directory using --user if using Anaconda/3)::
Typically ``pre-commit`` is run with ``--all-files`` within a CI, however as this
is being set up on an existing codebase these checks will always fail with a
substantial number of issues. Using some creative workarounds, the CI has been
set up to only run on files which have changed between a PR and the target
branch.
If you want to run the pre-commit checks as they would run on the CI, then you
can use the ``bin/pre-commit-diff.sh`` script, which executes the checks in the
same way as the CI pipeline.
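A usage sketch (the target branch name is passed as the first argument and
defaults to ``master`` if omitted):

.. code::

    # from your feature branch, run the hooks only on files differing from master
    bash bin/pre-commit-diff.sh master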
A side effect of this is that the checks will run on **all** of the differences
between the 'local' and target branch. This means that if changes have recently
been merged into the target branch, and the two have diverged, then the checks
will also run on those unrelated differences.
pip install -e .
If this happens and the hooks in the CI (or via the script) run on the wrong
files then you should **rebase onto the target branch** to prevent the checks
from running on the wrong files/diffs.
Activate Offline calibration
============================
Skipping Checks
---------------
To use the pycalibration package, one needs to activate it through::
If the checks are failing and you want to ignore them on purpose then you have two options:
source activate
- use the ``--no-verify`` flag on your ``git commit`` command to skip them, e.g. ``git commit -m "Commit skipping hooks" --no-verify``
- use the variable ``SKIP=hooks,to,skip`` before the git commit command to list hooks to skip, e.g. ``SKIP=flake8,isort git commit -m "Commit skipping only flake8 and isort hooks"``
from inside of the pycalibration directory. This will automatically load
all needed modules and export the $PATH for the home directory.
In the CI pipeline the pre-commit check stage has ``allow_failure: true`` set so
that it is possible to ignore errors in the checks, and so that subsequent
stages will still run even if the checks have failed. However, there should be a
good reason for allowing the checks to fail, e.g. checks failing due to
unmodified sections of code being looked at.
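For reference, this corresponds to the ``checks`` job in ``.gitlab-ci.yml``
shown earlier (indentation restored here):

.. code::

    checks:
      stage: check
      only: [merge_requests]
      allow_failure: true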
Python Scripted Calibration
===========================
***************************
First: do not run this on the Maxwell gateway. Rather, `salloc`
a node for yourself first::
**Do not run this on the Maxwell gateway**. Rather, ``salloc`` a node for
yourself first:
salloc -p exfel/upex -t 01:00:00
.. code::
where `-p` gives the partition to use: exfel or upex and `-t`
the duration the node should be allocated. Then `ssh` onto
that node.
salloc -p exfel/upex -t 01:00:00
(optionally) Set up the environment::
where `-p` gives the partition to use: exfel or upex and `-t` the duration the
node should be allocated. Then `ssh` onto that node.
module load python3
pip install --user ipython --upgrade
pip install --user ipyparallel --upgrade
pip install --user dill
If running headless (i.e. without X forwarding), be sure to set
`MPLBACKEND=Agg`, via::
Then activate your environment as described above (or just continue if you are
not using a venv).
export MPLBACKEND=Agg
If running headless (i.e. without X forwarding), be sure to set
``MPLBACKEND=Agg``, via:
Then start an `ipcluster`. If you followed the steps above this can be done
via::
.. code::
~/.local/bin/ipcluster start --n=32
export MPLBACKEND=Agg
Then start an ``ipcluster``. If you followed the steps above this can be done
via:
Run the script::
.. code::
ipcluster start --n=32
Finally run the script:
.. code::
python3 calibrate.py --input /gpfs/exfel/exp/SPB/201701/p002012/raw/r0100 \
--output ../../test_out --mem-cells 30 --detector AGIPD --sequences 0,1
--output ../../test_out --mem-cells 30 --detector AGIPD --sequences 0,1
Here `--input` should point to a directory of `RAW` files for the detector you
are calibrating. They will be output into the folder specified by `--output`,
which will have the run number or the last folder in the hiearchy of the input
appended. Additionally, you need to specify the number of `--mem-cells` used
for the run, as well as the `--detector`. Finally, you can optionally
specify to only process certain `--sequences` of files, matching the sequence
numbers of the `RAW` input. These should be given as a comma-separated list.
Finally, there is a `--no-relgain` option, which disables relative gain
are calibrating. They will be output into the folder specified by `--output`,
which will have the run number or the last folder in the hierarchy of the input
appended. Additionally, you need to specify the number of `--mem-cells` used for
the run, as well as the `--detector`. Finally, you can optionally specify to
only process certain `--sequences` of files, matching the sequence numbers of
the `RAW` input. These should be given as a comma-separated list.
Finally, there is a `--no-relgain` option, which disables relative gain
correction. This can be useful while we still further characterize the detectors
to provid accurate relative gain correction constants.
to provide accurate relative gain correction constants.
You'll get a series of plots in the output directory as well.
#!/bin/bash
# We'd like to run the pre-commit hooks only on files that are being modified by
# this merge request, however `CI_MERGE_REQUEST_TARGET_BRANCH_SHA` is a 'premium'
# feature according to GitLab... so this is a workaround for extracting the hash
CI_MERGE_REQUEST_TARGET_BRANCH_NAME="${1:-master}" # Set to master or 1st input
CI_COMMIT_SHA=$(git rev-parse HEAD)
export CI_MERGE_REQUEST_TARGET_BRANCH_SHA=$(git ls-remote origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME | cut -d$'\t' -f1)
FILES=$(git diff $CI_COMMIT_SHA $CI_MERGE_REQUEST_TARGET_BRANCH_SHA --name-only | tr '\n' ' ')
echo "Running pre-commit on diff from $CI_COMMIT_SHA to $CI_MERGE_REQUEST_TARGET_BRANCH_SHA ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME)"
# Pass list of modified files to pre-commit so that it only checks them
echo $FILES | xargs pre-commit run --color=always --files
......@@ -279,17 +279,17 @@ class AgipdCorrections:
f = h5py.File(file_name, 'r')
group = f[agipd_base]["image"]
(_, first_index, last_index,
(_, first_index, last_index,
_, valid_indices) = self.get_valid_image_idx(idx_base, f)
allcells = np.squeeze(group['cellId'])
allpulses = np.squeeze(group['pulseId'])
firange = self.gen_valid_range(first_index, last_index,
self.max_cells, allcells,
allpulses, valid_indices,
apply_sel_pulses)
n_img = firange.shape[0]
data_dict['nImg'][0] = n_img
if np.all(np.diff(firange) == 1):
......@@ -382,7 +382,7 @@ class AgipdCorrections:
Both corrections are iterative and requires 4 iterations.
Correction is performed in chunks of (e.g. 512 images).
A complete array of data from one file
A complete array of data from one file
(256 trains, 352 cells) will take
256 * 352 * 128 * 512 * 4 // 1024**3 = 22 Gb in memory
......@@ -484,7 +484,7 @@ class AgipdCorrections:
self.shared_dict[i_proc]['t0_rgain'][first:last] = \
rawgain / t0[cellid, ...]
self.shared_dict[i_proc]['raw_data'][first:last] = np.copy(data)
# Often most pixels are in high-gain, so it's more efficient to
# set the whole output block to zero than select the right pixels.
gain[:] = 0
......@@ -514,7 +514,7 @@ class AgipdCorrections:
def baseline_correction(self, i_proc:int, first:int, last:int):
"""
Perform image-wise base-line shift correction for
Perform image-wise base-line shift correction for
data in shared memory via histogram or stripe
:param first: Index of the first image to be corrected
......@@ -635,9 +635,9 @@ class AgipdCorrections:
# not just set to 0
if self.corr_bools.get('blc_set_min'):
data[(data < 0) & (gain == 1)] = 0
# Do xray correction if requested
# The slopes we have in our constants are already relative
# The slopes we have in our constants are already relative
# slopeFF = slopeFFpix/avarege(slopeFFpix)
# To apply them we have to / not *
if self.corr_bools.get("xray_corr"):
......@@ -746,7 +746,7 @@ class AgipdCorrections:
:param i_proc: the index of sharedmem for a given file/module
:return n_img: number of images to correct
"""
data_dict = self.shared_dict[i_proc]
n_img = data_dict['nImg'][0]
......@@ -814,12 +814,12 @@ class AgipdCorrections:
return first_pulse, last_pulse, pulse_step
def choose_selected_pulses(self, allpulses: np.array,
def choose_selected_pulses(self, allpulses: np.array,
can_calibrate: np.array) -> np.array:
"""
Choose given selected pulse from pulseId array of
raw data. The selected pulses range is validated then
raw data. The selected pulses range is validated then
used to add a booleans in can_calibrate and guide the
later appliance.
......@@ -830,7 +830,7 @@ class AgipdCorrections:
selected pulses
"""
(first_pulse, last_pulse,
(first_pulse, last_pulse,
pulse_step) = self.validate_selected_pulses(allpulses)
# collect the pulses to be calibrated
......@@ -853,9 +853,9 @@ class AgipdCorrections:
return can_calibrate
def gen_valid_range(self, first_index: int, last_index: int,
max_cells: int, allcells: np.array, allpulses: np.array,
max_cells: int, allcells: np.array, allpulses: np.array,
valid_indices: Optional[np.array] = None,
apply_sel_pulses: Optional[bool] = True
apply_sel_pulses: Optional[bool] = True
) -> np.array:
""" Validate the arrays of image.cellId and image.pulseId
to check presence of data and to avoid empty trains.
......@@ -871,7 +871,7 @@ class AgipdCorrections:
:param valid_indices: validated indices of image.data
:param apply_sel_pulses: A flag for applying selected pulses
after validation for correction
:return firange: An array of validated image.data
:return firange: An array of validated image.data
indices to correct
# TODO: Ignore rows (32 pulse) of empty pulses even if
common-mode is selected
......@@ -1067,7 +1067,7 @@ class AgipdCorrections:
rel_low gain = _rel_medium gain * 4.48
:param cons_data: A dictionary for each retrieved constant value.
:param when: A dictionary for the creation time
:param when: A dictionary for the creation time
of each retrieved constant.
:param module_idx: A module_idx index
:return:
......@@ -1089,7 +1089,7 @@ class AgipdCorrections:
bpixels |= cons_data["BadPixelsFF"].astype(np.uint32)[...,
:bpixels.shape[2], # noqa
None]
if when["SlopesFF"]: # Checking if constant was retrieved
slopesFF = cons_data["SlopesFF"]
......@@ -1150,16 +1150,16 @@ class AgipdCorrections:
pc_med_m = slopesPC[..., :self.max_cells, 3]
pc_med_l = slopesPC[..., :self.max_cells, 4]
# calculate median for slopes
# calculate median for slopes
pc_high_med = np.nanmedian(pc_high_m, axis=(0,1))
pc_med_med = np.nanmedian(pc_med_m, axis=(0,1))
# calculate median for intercepts:
pc_high_l_med = np.nanmedian(pc_high_l, axis=(0,1))
pc_med_l_med = np.nanmedian(pc_med_l, axis=(0,1))
# sanitize PC data
# sanitize PC data
# (it should be done already on the level of constants)
# In the following loop,
# In the following loop,
# replace `nan`s across memory cells with
# the median value calculated previously.
# Then, values outside of the valid range (0.8 and 1.2)
......
import copy
from typing import Tuple
import numpy as np
from cal_tools.enums import BadPixels, SnowResolution
from scipy.signal import cwt, find_peaks_cwt, ricker
from scipy.signal import cwt, ricker
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
......@@ -175,11 +176,11 @@ def baseline_correct_via_noise(d, noise, g, threshold):
the shift corrected data is returned.
"""
seln = (g == 0) & (d <= 50)
h, e = np.histogram(d[seln], bins=210, range=(-2000, 100))
c = (e[1:] + e[:-1]) / 2
try:
cwtmatr = cwt(h, ricker, [noise, 3. * noise, 5. * noise])
except:
......@@ -249,8 +250,10 @@ def correct_baseline_via_hist(d, pcm, g):
return d, 0
it += 1
def min_hist_distance(pc, bins=100, ran=(-10000, 10000), dec=20,
minbin=10):
def min_hist_distance(pc: int,
bins: int = 100,
ran: Tuple[int, int] = (-10000, 10000),
minbin: int = 10) -> float:
hh, e = np.histogram(dd[g == 0] - pc, bins=bins, range=ran)
hm, e = np.histogram((dd[g == 1] - pc) * pcm[g == 1], bins=bins,
range=ran)
......
......@@ -4,7 +4,7 @@ from enum import Enum
class BadPixels(Enum):
""" The European XFEL Bad Pixel Encoding
"""
OFFSET_OUT_OF_THRESHOLD = 0b000000000000000000001 # bit 1
NOISE_OUT_OF_THRESHOLD = 0b000000000000000000010 # bit 2
OFFSET_NOISE_EVAL_ERROR = 0b000000000000000000100 # bit 3
......@@ -26,12 +26,12 @@ class BadPixels(Enum):
OVERSCAN = 0b001000000000000000000 # bit 19
NON_SENSITIVE = 0b010000000000000000000 # bit 20
NON_LIN_RESPONSE_REGION = 0b100000000000000000000 # bit 21
class BadPixelsFF(Enum):
""" The SLopesFF Bad Pixel Encoding
"""
FIT_FAILED = 0b000000000000000000001 # bit 1
CHI2_THRESHOLD = 0b000000000000000000010 # bit 2
NOISE_PEAK_THRESHOLD = 0b000000000000000000100 # bit 3
......@@ -41,11 +41,10 @@ class BadPixelsFF(Enum):
BAD_DARK = 0b000000000000001000000 # bit 6
NO_ENTRY = 0b000000000000010000000 # bit 7
GAIN_DEVIATION = 0b000000000000100000000 # bit 8
class SnowResolution(Enum):
""" An Enum specifying how to resolve snowy pixels
"""
NONE = "none"
INTERPOLATE = "interpolate"
......@@ -68,7 +68,7 @@ class LpdCorrections:
index section
:param do_ff: perform flat field corrections
:param correct_non_linear: perform non-linear transition region corr.
:param karabo_data_mode: set to true to use data iterated with karabo
:param karabo_data_mode: set to true to use data iterated with karabo
data
"""
self.lpd_base = h5_data_path.format(channel)
......@@ -261,19 +261,19 @@ class LpdCorrections:
# correct offset
im -= og
nlf = 0
if self.mark_nonlin and self.linear_between is not None:
for gl, lr in enumerate(self.linear_between):
midx = (gain == gl) & ((im < lr[0]) | (im > lr[1]))
msk[midx] = BadPixels.NON_LIN_RESPONSE_REGION.value
numnonlin = np.count_nonzero(midx, axis=(1,2))
nlf += numnonlin
nlf = nlf/float(im.shape[0] * im.shape[1])
# hacky way of smoothening transition region between med and low
cfac = 1
if self.nlc_version == 1 and self.cnl:
cfac = 0.314 * np.exp(-im * 0.001)
......@@ -310,7 +310,7 @@ class LpdCorrections:
cf = lin_exp_fun(x, cnl['m'], cnl['b'], cnl['A'], cnl['lam'],
cnl['c'])
im[(gain == 2)] -= np.minimum(cf, 0.45) * x
# create bad pixels masks, here non-finite values
bidx = ~np.isfinite(im)
im[bidx] = 0
......@@ -547,7 +547,7 @@ class LpdCorrections:
dtype=np.uint16, fletcher32=True)
self.outfile.create_dataset(lpdbase + "image/length", fsz,
dtype=np.uint32, fletcher32=True)
if self.mark_nonlin:
self.outfile.create_dataset(lpdbase + "image/nonLinear", fsz,
dtype=np.float32, fletcher32=True)
......@@ -590,9 +590,9 @@ class LpdCorrections:
connect to
* tcp://host:port_low#port_high to specify a port range from
which a random port will be picked. E.g. specifying
tcp://max-exfl016:8015#8025
will randomly pick an address in the range max-exfl016:8015 and
max-exfl016:8025.
......
......@@ -7,28 +7,28 @@ from matplotlib import pylab as plt
def getModulePosition(metrologyFile, moduleId):
"""Position (in mm) of a module relative to the top left
"""Position (in mm) of a module relative to the top left
corner of its quadrant. In case of tile-level positions,
the the position refers to the center of the top left
the the position refers to the center of the top left
pixel.
Args
----
metrologyFile : str
Fully qualified path and filename of the metrology file
moduleId : str
Identifier of the module in question (e.g. 'Q1M2T03')
Returns
-------
ndarray:
ndarray:
(x, y)-Position of the module in its quadrant
Raises
------
ValueError: In case the moduleId contains invalid module
identifieres
"""
......@@ -38,11 +38,11 @@ def getModulePosition(metrologyFile, moduleId):
#
# QXMYTZZ
#
# where X, Y, and Z are digits. Q denotes the quadrant
# (X = 1, ..., 4), M the supermodule (Y = 1, ..., 4) and T
# where X, Y, and Z are digits. Q denotes the quadrant
# (X = 1, ..., 4), M the supermodule (Y = 1, ..., 4) and T
# the tile (Z = 1, ..., 16; with leading zeros).
modulePattern = re.compile(r'[QMT]\d+')
# Give the module identifier Q1M1T01, the moduleList splits this
# Give the module identifier Q1M1T01, the moduleList splits this
# into the associated quadrant, supermodule, and tile identifiers:
# >>> print(moduleList)
# ['Q1', 'M1', 'T01']
......@@ -53,7 +53,7 @@ def getModulePosition(metrologyFile, moduleId):
# >>> print(h5Keys)
# ['Q1', 'Q1/M1', 'Q1/M1/T01']
h5Keys = ['/'.join(moduleList[:idx+1]) for idx in range(len(moduleList))]
# Every module of the detector gives its position relative to
# the top left corner of its parent structure. Every position
# is stored in the positions array
......@@ -83,17 +83,17 @@ def getModulePosition(metrologyFile, moduleId):
def translateToModuleBL(tilePositions):
"""Tile coordinates within a supermodule with the
origin in the bottom left corner.
Parameters
----------
tilePositions : ndarray
Tile positions as retrieved from the LPD metrology
Tile positions as retrieved from the LPD metrology
file. Must have shape (16, 2)
Returns
-------
ndarray
Tile positions relative to the bottom left corner.
"""
......@@ -115,7 +115,7 @@ def translateToModuleBL(tilePositions):
# In the clockwise order of LPD tiles, the 8th
# tile in the list is the bottom left tile
bottomLeft8th = np.asarray([0., moduleCoords[8][1]])
# Translate coordinates to the bottom left corner
# Translate coordinates to the bottom left corner
# of the bottom left tile
bottomLeft = moduleCoords - bottomLeft8th
return bottomLeft
......@@ -124,44 +124,44 @@ def translateToModuleBL(tilePositions):
def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6000.):
"""Plots data of a supermodule with tile positions
determined by the metrology data.
Parameters
----------
tileData : ndarray
Supermodule image data separated in individual tiles.
Supermodule image data separated in individual tiles.
Must have shape (16, 32, 128).
metrologyPositions : ndarray
Tile positions as retrieved from the metrology file.
metrologyPositions : ndarray
Tile positions as retrieved from the metrology file.
Must have shape (16, 2)
zoom : float, optional
Can enlarge or decrease the size of the plot. Default = 1.
vmin, vmax : float, optional
Value range. Default vmin=100., vmax=6000.
Returns
-------
matplotlib.Figure
Figure object containing the supermodule image
Figure object containing the supermodule image
"""
# Data needs to have 16 tiles, each with
# 32x128 pixels
assert tileData.shape == (16, 32, 128)
# Conversion coefficient, required since
# matplotlib does its business in inches
mmToInch = 1./25.4 # inch/mm
# Some constants
numberOfTiles = 16
numberOfRows = 8
numberOfCols = 2
tileWidth = 65.7 # in mm
tileHeight = 17.7 # in mm
# Base width and height are given by spatial
# extend of the modules. The constants 3.4 and 1
# are estimated as a best guess for gaps between
......@@ -169,26 +169,26 @@ def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6
figureWidth = zoom * numberOfCols*(tileWidth + 3.4)*mmToInch
figureHeight = zoom * numberOfRows*(tileHeight + 1.)*mmToInch
fig = plt.figure(figsize=(figureWidth, figureHeight))
# The metrology file references module positions
# The metrology file references module positions
bottomRightCornerCoordinates = translateToModuleBL(metrologyPositions)
# The offset here accounts for the fact that there
# might be negative x,y values
offset = np.asarray(
[min(bottomRightCornerCoordinates[:, 0]),
[min(bottomRightCornerCoordinates[:, 0]),
min(bottomRightCornerCoordinates[:, 1])]
)
# Account for blank borders in the plot
borderLeft = 0.5 * mmToInch
borderBottom = 0.5 * mmToInch
# The height and width of the plot remain
# constant for a given supermodule
width = zoom * 65.7 * mmToInch / (figureWidth - 2.*borderLeft)
height = zoom * 17.7 * mmToInch / (figureHeight - 2.*borderBottom)
for i in range(numberOfTiles):
# This is the top left corner of the tile with
# respect to the top left corner of the supermodule
......@@ -200,38 +200,38 @@ def plotSupermoduleData(tileData, metrologyPositions, zoom=1., vmin=100., vmax=6
ax = fig.add_axes((ax0, ay0, width, height), frameon=False)
# Do not display axes, tick markers or labels
ax.tick_params(
axis='both', left='off', top='off', right='off', bottom='off',
axis='both', left='off', top='off', right='off', bottom='off',
labelleft='off', labeltop='off', labelright='off', labelbottom='off'
)
# Plot the image. If one wanted to have a colorbar
# the img object would be needed to produce one
img = ax.imshow(
tileData[i],
interpolation='nearest',
tileData[i],
interpolation='nearest',
vmin=vmin, vmax=vmax
)
return fig
def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
"""Splits the raw channel data into indiviual tiles
Args
----
channelData : ndarray
Raw channel data. Must have shape (256, 256)
clockwiseOrder : bool, optional
If set to True, the sequence of tiles is given
in the clockwise order starting with the top
right tile (LPD standard). If set to false, tile
data is returned in reading order
Returns
-------
ndarray
Same data, but reshaped into (12, 32, 128)
"""
......@@ -240,8 +240,8 @@ def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
orderedTiles = tiles.reshape(16, 32, 128)
if clockwiseOrder:
# Naturally, the tile data after splitting is in reading
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# starting with the top right tile. The following array
# contains indices of tiles in reading order as they would
# be iterated in clockwise order (starting from the top right)
......@@ -253,22 +253,22 @@ def splitChannelDataIntoTiles(channelData, clockwiseOrder=False):
def splitChannelDataIntoTiles2(channelData, clockwiseOrder=False):
"""Splits the raw channel data into indiviual tiles
Args
----
channelData : ndarray
Raw channel data. Must have shape (256, 256)
clockwiseOrder : bool, optional
If set to True, the sequence of tiles is given
in the clockwise order starting with the top
right tile (LPD standard). If set to false, tile
data is returned in reading order
Returns
-------
ndarray
Same data, but reshaped into (12, 32, 128)
"""
......@@ -277,8 +277,8 @@ def splitChannelDataIntoTiles2(channelData, clockwiseOrder=False):
orderedTiles = np.moveaxis(tiles.reshape(16, 128, 32, channelData.shape[2]), 2, 1)
if clockwiseOrder:
# Naturally, the tile data after splitting is in reading
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# order (i.e. top left tile is first, top right tile is second,
# etc.). The official LPD tile order however is clockwise,
# starting with the top right tile. The following array
# contains indices of tiles in reading order as they would
# be iterated in clockwise order (starting from the top right)
......@@ -294,7 +294,7 @@ def returnPositioned2(geometry_file, modules, dquads):
tile_order = [1, 2, 3, 4]
cells = 0
for sm, mn in modules:
position = np.asarray([getModulePosition(geometry_file,
'Q{}/M{:d}/T{:02d}'.format(
sm//4+1,
......@@ -355,7 +355,7 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
all_intersected = None
for file in files:
ch = int(re.findall(r'.*-{}([0-9]+)-.*'.format(detector), file)[0])
try:
with h5py.File(file, 'r') as f:
if trainIds is None:
......@@ -369,18 +369,18 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
counts = np.squeeze(f[cpath])
nzeros = counts != 0
tid = tid[nzeros]
intersection = np.intersect1d(tid, trainIds, assume_unique=True)
if intersection.size == 0:
continue
if all_intersected is None:
all_intersected = intersection
else:
all_intersected = np.intersect1d(all_intersected, intersection, assume_unique=True)
continue
if ch not in data:
data[ch] = np.moveaxis(np.moveaxis(d, 0, 2), 1, 0)
else:
......@@ -388,7 +388,7 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
except Exception as e:
print(file)
print(e)
pcounts = None
if trainIds is not None:
for file in files:
......@@ -396,7 +396,7 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
try:
with h5py.File(file, 'r') as f:
tid = np.squeeze(f["/INDEX/trainId"])
spath = datapath.replace("INSTRUMENT", "INDEX").format(ch).split("/")[:-1]
......@@ -408,26 +408,26 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
tid = tid[nzeros]
tid_to_use = np.in1d(tid, all_intersected)
indices = []
indices = []
firsts = f[fpath][nzeros][tid_to_use]
counts = f[cpath][nzeros][tid_to_use]
if pcounts is None:
pcounts = counts
df = firsts[1]-firsts[0]
for i in range(firsts.shape[0]):
for i in range(firsts.shape[0]):
count = counts[i] if max_counts is None else max_counts
first = firsts[i]//df*count if not nwa else firsts[i]
indices += list(np.arange(first, first+count))
if len(indices) == 0:
continue
continue
indices = np.unique(np.sort(np.array(indices).astype(np.int)))
indices = indices[indices < f[datapath.format(ch)].shape[0]]
#if all contingous just use the range
#if np.allclose(indices[1:]-indices[:-1], 1):
d = np.squeeze(f[datapath.format(ch)][indices,:,:])
......@@ -438,11 +438,11 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
else:
data[ch] = np.concatenate(data[ch], np.moveaxis(np.moveaxis(d, 0, 2), 1, 0), axis=2)
except Exception as e:
print(e)
print(e)
full_data = []
dummy = next(iter(data.values()))
for i in range(16):
if i in data:
full_data.append((i, data[i]))
......@@ -453,7 +453,7 @@ def positionFileList(filelist, datapath, geometry_file, quad_pos, nImages='all',
return np.moveaxis(pos, 2, 0)
else:
return np.moveaxis(pos, 2, 0), all_intersected, pcounts
def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False, max_counts=None):
import glob
detector = "LPD" if "LPD" in datapath else "AGIPD"
......@@ -462,7 +462,7 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
all_intersected = None
for file in files:
ch = int(re.findall(r'.*-{}([0-9]+)-.*'.format(detector), file)[0])
try:
with h5py.File(file, 'r') as f:
if trainIds is None:
......@@ -476,18 +476,18 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
counts = np.squeeze(f[cpath])
nzeros = counts != 0
tid = tid[nzeros]
intersection = np.intersect1d(tid, trainIds, assume_unique=True)
if intersection.size == 0:
continue
if all_intersected is None:
all_intersected = intersection
else:
all_intersected = np.intersect1d(all_intersected, intersection, assume_unique=True)
continue
if ch not in data:
data[ch] = np.moveaxis(np.moveaxis(d, 0, 2), 1, 0)
else:
......@@ -495,7 +495,7 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
except Exception as e:
print(file)
print(e)
pcounts = None
if trainIds is not None:
for file in files:
......@@ -503,7 +503,7 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
try:
with h5py.File(file, 'r') as f:
tid = np.squeeze(f["/INDEX/trainId"])
spath = datapath.replace("INSTRUMENT", "INDEX").format(ch).split("/")[:-1]
......@@ -515,26 +515,26 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
tid = tid[nzeros]
tid_to_use = np.in1d(tid, all_intersected)
indices = []
indices = []
firsts = f[fpath][nzeros][tid_to_use]
counts = f[cpath][nzeros][tid_to_use]
if pcounts is None:
pcounts = counts
df = firsts[1]-firsts[0]
for i in range(firsts.shape[0]):
for i in range(firsts.shape[0]):
count = counts[i] if max_counts is None else max_counts
first = firsts[i]//df*count if not nwa else firsts[i]
indices += list(np.arange(first, first+count))
if len(indices) == 0:
continue
continue
indices = np.unique(np.sort(np.array(indices).astype(np.int)))
indices = indices[indices < f[datapath.format(ch)].shape[0]]
#if all contingous just use the range
#if np.allclose(indices[1:]-indices[:-1], 1):
d = np.squeeze(f[datapath.format(ch)][indices,:,:])
......@@ -545,11 +545,11 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
else:
data[ch] = np.concatenate(data[ch], np.moveaxis(np.moveaxis(d, 0, 2), 1, 0), axis=2)
except Exception as e:
print(e)
print(e)
full_data = []
dummy = next(iter(data.values()))
for i in range(16):
if i in data:
full_data.append((i, data[i]))
......@@ -559,4 +559,4 @@ def matchedFileList(filelist, datapath, nImages='all', trainIds=None, nwa=False,
if trainIds is None:
return pos
else:
return pos, all_intersected, pcounts
\ No newline at end of file
return pos, all_intersected, pcounts
......@@ -47,7 +47,7 @@ def show_overview(d, cell_to_preview, gain_to_preview, out_folder=None, infix=No
else:
med = np.nanmedian(item[..., cell_to_preview])
medscale = med
if med == 0:
if med == 0:
medscale = 0.1
bound = 0.2
......
......@@ -33,10 +33,10 @@ def extract_slow_data(karabo_id: str, karabo_da_control: str,
bias_voltage = abs(f[os.path.join(mdl_ctrl_path,
"DAQ_MPOD/u0voltage/value")][0]) # noqa
if gain == 0.1:
gain = f[os.path.join(mdl_ctrl_path,
gain = f[os.path.join(mdl_ctrl_path,
"DAQ_GAIN/pNCCDGain/value")][0]
if fix_temperature_top == 0.:
fix_temperature_top = f[os.path.join(ctrl_path,
fix_temperature_top = f[os.path.join(ctrl_path,
"inputA/krdg/value")][0]
if fix_temperature_bot == 0.:
fix_temperature_bot = f[os.path.join(ctrl_path,
......@@ -53,5 +53,5 @@ def extract_slow_data(karabo_id: str, karabo_da_control: str,
os.path.join(ctrl_path, "inputA/krdg/value"))
print("fix_temperature_bot control h5path:",
os.path.join(ctrl_path, "inputB/krdg/value"))
return bias_voltage, gain, fix_temperature_top, fix_temperature_bot
\ No newline at end of file
return bias_voltage, gain, fix_temperature_top, fix_temperature_bot
......@@ -15,10 +15,10 @@ def histogram(cnp.ndarray[cnp.float32_t, ndim=2] data, range=(0,1), int bins=20,
"""
cdef cnp.ndarray[cnp.float32_t, ndim=2] ret
cdef double min, max
min = range[0]
max = range[1]
cdef double min, max
min = range[0]
max = range[1]
ret = np.zeros((bins,data.shape[1]), dtype=np.float32)
cdef double bin_width = (max - min) / bins
cdef double x
......@@ -31,9 +31,9 @@ def histogram(cnp.ndarray[cnp.float32_t, ndim=2] data, range=(0,1), int bins=20,
for i in xrange(data.shape[0]):
x = (data[i,j] - min) / bin_width
if 0.0 <= x < bins:
if weights is None:
if weights is None:
ret[<int>x,j] += 1.0
else:
else:
ret[<int>x,j] += weights[i,j]
return ret, np.linspace(min, max, bins+1)
......@@ -83,16 +83,16 @@ def gain_choose(cnp.ndarray[cnp.uint8_t, ndim=3] a, cnp.ndarray[cnp.float32_t, n
cdef cnp.uint8_t v
cdef cnp.ndarray[cnp.float32_t, ndim=3] out
out = np.zeros_like(a, dtype=np.float32)
assert (<object>choices).shape == (3,) + (<object>a).shape
with nogil:
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for k in range(a.shape[2]):
v = a[i, j, k]
out[i, j, k] = choices[v, i, j, k]
return out
......@@ -104,16 +104,16 @@ def gain_choose_int(cnp.ndarray[cnp.uint8_t, ndim=3] a, cnp.ndarray[cnp.int32_t,
cdef cnp.uint8_t v
cdef cnp.ndarray[cnp.int32_t, ndim=3] out
out = np.zeros_like(a, dtype=np.int32)
assert (<object>choices).shape == (3,) + (<object>a).shape
with nogil:
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for k in range(a.shape[2]):
v = a[i, j, k]
out[i, j, k] = choices[v, i, j, k]
return out
......@@ -130,12 +130,12 @@ def sum_and_count_in_range_asic(cnp.ndarray[float, ndim=4] arr, float lower, flo
cdef float value
cdef cnp.ndarray[unsigned long long, ndim=2] count
cdef cnp.ndarray[double, ndim=2] sum_
# Drop axes -2 & -1 (pixel dimensions within each ASIC)
out_shape = arr[:, :, 0, 0].shape
count = np.zeros(out_shape, dtype=np.uint64)
sum_ = np.zeros(out_shape, dtype=np.float64)
with nogil:
for i in range(arr.shape[0]):
for k in range(arr.shape[1]):
......@@ -161,13 +161,13 @@ def sum_and_count_in_range_cell(cnp.ndarray[float, ndim=4] arr, float lower, flo
cdef float value
cdef cnp.ndarray[unsigned long long, ndim=2] count
cdef cnp.ndarray[double, ndim=2] sum_
# Drop axes 0 & 1
out_shape = arr[0, 0, :, :].shape
count = np.zeros(out_shape, dtype=np.uint64)
sum_ = np.zeros(out_shape, dtype=np.float64)
with nogil:
for i in range(arr.shape[0]):
for k in range(arr.shape[1]):
......
......@@ -42,35 +42,35 @@ This can be useful to add user requests while running. For this:
1. create a working copy of the notebook in question, and create a commit of the
production notebook to fall back to in case of problems::
git add production_notebook_NBC.py
git commit -m "Known working version before edits"
cp production_notebook_NBC.py production_notebook_TEST.py
2. add any features there and *thoroughly* test them
3. when you are happy with the results, copy them over into the production notebook and
save.
.. warning::
Live editing of correction notebooks is fully at your responsibility. Do not do it
if you are not 100% sure you know what you are doing.
4. If it fails, revert back to the original state, ideally via git::
git checkout HEAD -- production_notebook_NBC.py
4. If it fails, revert back to the original state, ideally via git::
git checkout HEAD -- production_notebook_NBC.py
5. Any runs which did not correct due to failures of the live edit can then be relaunched
manually, assuming the correction notebook allows run and overwrite parameters::
xfel-calibrate ...... --run XYZ,ZXY-YYS --overwrite
Using a Parameter Generator Function
------------------------------------
By default, the parameters to be exposed to the command line are deduced from the
first code cell of the notebook, after resolving the notebook itself from the
first code cell of the notebook, after resolving the notebook itself from the
detector and characterization type. For some applications it might be beneficial
to define a context-specific parameter range within the same notebook, based on
additional user input. This can be done via a parameter generation function which
......@@ -82,7 +82,7 @@ is defined in one of the code cell::
existing = set()
def extract_parms(cls):
args, varargs, varkw, defaults = inspect.getargspec(cls.__init__)
pList = []
pList = []
for i, arg in enumerate(args[1:][::-1]):
if arg in existing:
continue
......@@ -90,7 +90,7 @@ is defined in one of the code cell::
existing.add(arg)
if i < len(defaults):
default = defaults[::-1][i]
default = defaults[::-1][i]
if str(default).isdigit():
pList.append("{} = {}".format(arg, default))
elif default is None or default == "None":
......@@ -108,21 +108,21 @@ is defined in one of the code cell::
parms = extract_parms(getattr(condition, dtype))
[all_conditions.add(p) for p in parms]
return "\n".join(all_conditions)
.. note::
Note how all imports are inlined, as the function is executed outside the
notebook context.
In the example, the function generates a list of additional parameters depending
on the `detector_instance` given. Here, `detector_instance` is defined in the first
code cell the usual way. Any other parameters defined this way, whose names match
code cell the usual way. Any other parameters defined this way, whose names match
those of the generator function signature, are passed to this function. The function
should then return a string containing additional code to be appended to the first
code cell.
To make use of this functionality, the parameter generator function needs to be
To make use of this functionality, the parameter generator function needs to be
configured in `notebooks.py`, e.g. ::
...
......@@ -136,7 +136,7 @@ configured in `notebooks.py`, e.g. ::
},
}
...
To generically query which parameters are defined in the first code cell, the
code execution history feature of iPython can be used::
......@@ -156,6 +156,6 @@ code execution history feature of iPython can be used::
parms[n] = str(v) if not isinstance(v, str) else v
if parms[n] == "None" or parms[n] == "'None'":
parms[n] = None
This will create a dictionary `parms` which contains all parameters either
as `float` or `str` values.
\ No newline at end of file
as `float` or `str` values.
......@@ -385,7 +385,7 @@ from nbconvert import RSTExporter
from xfel_calibrate import notebooks
rst_exporter = RSTExporter()
with open("available_notebooks.rst", "w") as f:
f.write(dedent("""
.. _available_notebooks:
......@@ -395,8 +395,8 @@ with open("available_notebooks.rst", "w") as f:
The following notebooks are currently integrated into the European XFEL
Offline Calibration tool chain.
"""))
for detector in sorted(notebooks.notebooks.keys()):
......@@ -404,10 +404,10 @@ with open("available_notebooks.rst", "w") as f:
f.write("{}\n".format(detector))
f.write("{}\n".format("-"*len(detector)))
f.write("\n")
for caltype in sorted(values.keys()):
data = values[caltype]
nbpath = os.path.abspath("{}/../../../{}".format(__file__, data["notebook"]))
with open(nbpath, "r") as nf:
nb = nbformat.read(nf, as_version=4)
......@@ -419,16 +419,16 @@ with open("available_notebooks.rst", "w") as f:
nb.cells = [mdcell] # we only want this single cell
body, _ = rst_exporter.from_notebook_node(nb)
adjusted = []
# adjust titles
# adjust titles
for line in body.split("\n"):
if line.startswith("=="):
line = line.replace("=", "+")
if line.startswith("--"):
line = line.replace("-", "~")
adjusted.append(line)
f.write("\n".join(adjusted))
f.write("\n".join(adjusted))
f.write("\n")
f.write("To invoke this notebook and display help use:\n\n")
f.write(".. code-block:: bash\n\n")
f.write(" xfel-calibrate {} {} --help\n\n".format(detector, caltype))
......@@ -461,18 +461,18 @@ def xml_to_rst_report(xml, git_tag, reports=[]):
rst[-1] = rst[-1].format(test_name=test_name, ex_date=ex_date)
rst += ["="*len(rst[-1])]
rst += [""]
num_tests = e.get("tests")
num_err = int(e.get("errors"))
num_fail = int(e.get("failures"))
num_skip = int(e.get("skipped"))
# create a summary header
if num_err + num_fail == 0:
rst += [":header-passed:`✓`"]
else:
rst += [":header-failed:`❌`"]
if num_skip > 0:
rst[-1] += ":header-skipped:`⚠`"
rst += [""]
......@@ -487,12 +487,12 @@ def xml_to_rst_report(xml, git_tag, reports=[]):
for rname, rpath in reports:
rst += [":Report: `{} <{}>`_".format(rname, rpath)]
rst += [""]
# now the details
rst += ["Detailed Results"]
rst += ["-"*len(rst[-1])]
rst += [""]
detailed_failures = []
rows = []
for child in e:
......@@ -515,12 +515,12 @@ def xml_to_rst_report(xml, git_tag, reports=[]):
msg = "\n".join(textwrap.wrap(msg, 20))
row = [status, name, etype, msg, extime ]
rows.append(row)
header = ["Result", "Test", "Error", "Message", "Duration (s)"]
tblrst = tabulate.tabulate(rows, headers=header, tablefmt="rst")
rst += tblrst.split("\n")
rst += [""]
for test, report in detailed_failures:
rst += ["Failure report for: {}".format(test)]
rst += ["~"*len(rst[-1])]
......@@ -528,15 +528,15 @@ def xml_to_rst_report(xml, git_tag, reports=[]):
rst += [".. code-block:: python"]
rst += textwrap.indent(report, " "*4).split("\n")
rst += [""]
do_console = False
for child in e:
if child.tag == "system-out" and len(child.text.strip()):
do_console = True
break
if do_console:
# console output
rst += ["Console Output"]
rst += ["-"*len(rst[-1])]
......@@ -549,7 +549,7 @@ def xml_to_rst_report(xml, git_tag, reports=[]):
rst += [".. code-block:: console"]
rst += textwrap.indent(child.text, " "*4).split("\n")
return "\n".join(rst)
def sorted_dir(folder):
......@@ -570,12 +570,12 @@ Contents:
.. toctree::
:maxdepth: 2
"""
if not os.path.exists("./test_rsts"):
os.makedirs("./test_rsts")
with open("test_results.rst", "w") as f:
f.write(header)
for commit, modtime in sorted_dir(test_artefact_dir):
......@@ -586,7 +586,7 @@ with open("test_results.rst", "w") as f:
rst += ["+"*len(rst[-1])]
rst += [""]
fr.write("\n".join(rst))
# copy reports
pdfs = glob.glob("{}/{}/*/*.pdf".format(test_artefact_dir, commit))
if not os.path.exists("./_static/reports/{}".format(commit)):
......@@ -600,7 +600,7 @@ with open("test_results.rst", "w") as f:
rname = os.path.basename(pdf).split(".")[0]
rlist.append((rname, "../_static/reports/{}".format(ppath)))
reports[rloc] = rlist
xmls = glob.glob("{}/{}/*/TEST*.xml".format(test_artefact_dir, commit))
for xml in xmls:
rloc = xml.split("/")[-2]
......
......@@ -28,7 +28,7 @@ python file of the form::
# the command to run this concurrently. It is prepended to the actual call
launcher_command = "sbatch -p exfel -t 24:00:00 --mem 500G --mail-type END --requeue --output {temp_path}/slurm-%j.out"
A comment is given for the meaning of each configuration parameter.
......@@ -62,11 +62,11 @@ The configuration is to be given in form of a python directory::
...
}
}
The first key is the detector that the calibration may be used for, here AGIPD. The second
key level gives the name of the task being performed (here: DARK and PC). For each of these
entries, a path to the notebook and a concurrency hint should be given. In the concurrency
hint the first entry specifies which parameter of the notebook expects a list whose integer
hint the first entry specifies which parameter of the notebook expects a list whose integer
entries can be concurrently run (here "modules"). The second entry states with which range
to fill this parameter if it is not given by the user. In the example a `range(16):=0,1,2,...15`
would be passed onto the notebook, which is run as 16 concurrent jobs, each processing one module.
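A rough sketch of what such an entry could look like (the key names and the
notebook path are purely illustrative and not the actual contents of
`notebooks.py`):

.. code::

    notebooks = {
        "AGIPD": {
            "DARK": {
                # path to the notebook implementing this task
                "notebook": "notebooks/AGIPD/characterize_darks_NBC.ipynb",
                # concurrency hint: which notebook parameter expects a list,
                # and what range to fill it with if the user gives none
                "concurrency": {"parameter": "modules",
                                "default concurrency": list(range(16))},
            },
        },
    }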
......@@ -75,9 +75,9 @@ be derived e.g. by profiling memory usage per core, run times, etc.
.. note::
It is good practice to name command line enabled notebooks with an `_NBC` suffix as shown in
It is good practice to name command line enabled notebooks with an `_NBC` suffix as shown in
the above example.
The `CORRECT` notebook (last notebook in the example) makes use of a concurrency generating function
by setting the `use function` parameter. This function must be defined in a code cell in the notebook;
its parameters should be named like other exposed parameters. It should return a list of parameters
......@@ -99,13 +99,13 @@ function::
len(seq_nums)//sequences_per_node+1)]
else:
return sequences
.. note::
Note how imports are inlined in the definition. This is necessary, as only the function code,
not the entire notebook is executed.
which requires as exposed parameters e.g. ::
in_folder = "/gpfs/exfel/exp/SPB/201701/p002038/raw/" # the folder to read data from, required
......
......@@ -8,14 +8,14 @@ to expose Jupyter_ notebooks to a command line interface. In the process reports
from these notebooks. The general interface is::
% xfel-calibrate DETECTOR TYPE
where `DETECTOR` and `TYPE` specify the task to be performed.
Additionally, it leverages the DESY/XFEL Maxwell cluster to run these jobs in parallel
via SLURM_.
Here is a list of :ref:`available_notebooks`. See the :ref:`advanced_topics`
for details on usage as detector group staff.
for details on usage as detector group staff.
If you would like to integrate additional notebooks please see the :ref:`development_workflow`.
......
......@@ -10,7 +10,7 @@ Contents:
.. toctree::
:maxdepth: 2
how_it_works
installation
configuration
......
......@@ -35,9 +35,9 @@ Installation using Anaconda
First you need to load the anaconda/3 environment through::
1. module load anaconda/3
1. module load anaconda/3
If installing into other python environments, this step can be skipped.
If installing into other python environments, this step can be skipped.
Then the package for the offline calibration can be obtained from the git repository::
......@@ -75,14 +75,14 @@ folder to match your environment.
The tool-chain is then available via the::
xfel-calibrate
command.
Installation using karabo
+++++++++++++++++++++++++
If required, one can install into karabo environment. The difference would be to
If required, one can install into karabo environment. The difference would be to
first source activate the karabo environment::
1. source karabo/activate
......@@ -94,8 +94,8 @@ then after cloning the offline calibration package from git, the requirements ca
Development Installation
------------------------
For a development installation in your home directory, which automatically
picks up (most) changes, first install the dependencies as above,
For a development installation in your home directory, which automatically
picks up (most) changes, first install the dependencies as above,
but then install the tool-chain separately in development mode::
pip install -e . --user
......@@ -107,14 +107,12 @@ but then install the tool-chain separately in development mode::
Installation of New Notebooks
-----------------------------
To install new, previously untracked notebooks in the home directory,
repeat the installation of the tool-chain, without requirements,
To install new, previously untracked notebooks in the home directory,
repeat the installation of the tool-chain, without requirements,
from the package base directory::
pip install --upgrade . --user
Or, in case you are actively developing::
pip install -e . --user