Commit e23b35c5 authored by Steffen Hauf

More documentation

.. _advanced_topics:

Advanced Topics
===============

The following tasks should only be carried out by trained staff.

Initiating Automated Offline Correction
---------------------------------------

To initiate the automated offline correction for a given detector and instrument,
navigate to the root path of the European XFEL Offline Calibration tool chain.
It is best to run the following as a SLURM batch job in itself, as this ensures it is
independent of the terminal you are on and will run persistently for a given time period:

1. Do a trial run of `automode_nbc.py` to test your settings::

      python3 automode_nbc.py --input /gpfs/exfel/exp/FXE/201701/p002052/raw/ \
                              --output /gpfs/exfel/exp/FXE/201701/p002052/proc/ \
                              --base-cal-store /gpfs/.../lpd_ci_store_CHANID_16_5pf.h5 \
                              --offset-cal-store /gpfs/.../lpd_offset_store_r0279_r0280_r0281_5pf.h5 \
                              --mem-cells 128 \
                              --detector LPD \
                              --type correct \
                              --partition exfel \
                              --ff-cal-store /gpfs/.../lpd_flatfield_store_r0230.h5 \
                              --only-new \
                              --start-time 2018-06-08

Here `input` should refer to the proposal `raw` path and `output` to the proposal `proc` path.
The cal-store options refer to constants read from files; usually these are superseded by
the calibration database, but a file-based fallback is currently required. Note the `CHANID`
field in the base-cal-store constant: it signifies that module numbers are to be inserted
there. Finally, `only-new` should be used in combination with `start-time` if previous
data in the proposal was already calibrated and should be left untouched. For fresh proposals
these can be omitted.
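
The effect of the `CHANID` placeholder can be illustrated with plain shell
substitution. This is a sketch of the naming scheme only; the actual expansion
happens inside the tool, and the module numbers below are examples:

```shell
# Illustration only: a CHANID placeholder expands to per-module file names.
# The store name comes from the example above; module numbers are examples.
template="lpd_ci_store_CHANID_16_5pf.h5"
for module in 00 01 15; do
    echo "${template/CHANID/$module}"   # e.g. lpd_ci_store_00_16_5pf.h5
done
```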

For testing you can also omit `only-new` and `start-time` and instead use the `--run XXX`
or `--run XXX-YYY` option to do a trial run on a specific set of runs (possibly into a
temporary directory). Use the `--overwrite` option if you need to do this multiple times.
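
Such a trial invocation can be assembled and inspected before anything is
launched. The sketch below only prints the command line; the run range and the
scratch output path are illustrative assumptions, the flags are those used above:

```shell
# Assemble a trial-run command line without executing it. The run range
# (230-232) and the scratch output path are hypothetical examples.
cmd=(python3 automode_nbc.py
     --input  /gpfs/exfel/exp/FXE/201701/p002052/raw/
     --output /gpfs/exfel/exp/FXE/201701/p002052/scratch/
     --detector LPD
     --type correct
     --partition exfel
     --run 230-232
     --overwrite)
printf '%s ' "${cmd[@]}"
echo
```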

Once you are satisfied with the results, place everything in a bash script which will
initiate the actual startup::

      nano automode_startup.sh

In this script, make sure that a Python environment with the necessary tools is available.
The most straightforward choice is the environment you have installed the tool chain in.
The script should thus look similar to this::

      #!/bin/bash
      source /gpfs/path/to/karabo/activate
      cd /gpfs/path/to/pycalibration
      python3 automode_nbc.py --input /gpfs/exfel/exp/FXE/201701/p002052/raw/ \
                              --output /gpfs/exfel/exp/FXE/201701/p002052/proc/ \
                              --base-cal-store /gpfs/.../lpd_ci_store_CHANID_16_5pf.h5 \
                              --offset-cal-store /gpfs/.../lpd_offset_store_r0279_r0280_r0281_5pf.h5 \
                              --mem-cells 128 \
                              --detector LPD \
                              --type correct \
                              --partition exfel \
                              --ff-cal-store /gpfs/.../lpd_flatfield_store_r0230.h5 \
                              --only-new \
                              --start-time 2018-06-08
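
Before submitting, the script can be syntax-checked with `bash -n`, which
parses a file without executing it. In the sketch below, `startup_check.sh` is
a hypothetical stand-in so nothing is overwritten; run the check on your real
`automode_startup.sh`:

```shell
# Sketch: bash -n parses a script without running it, so syntax errors are
# caught before sbatch submission. startup_check.sh is a stand-in file here.
cat > startup_check.sh <<'EOF'
#!/bin/bash
source /gpfs/path/to/karabo/activate
cd /gpfs/path/to/pycalibration
EOF
bash -n startup_check.sh && echo "syntax OK"
```

`bash -n` exits non-zero on a parse error, so the `syntax OK` line only appears
when the script parses cleanly.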

Now launch the script as a SLURM job::

      sbatch -p exfel -t 178:00:00 --mail-type=ALL automode_startup.sh

Make sure to give the `-t` option and set it to cover the expected beam-time duration;
otherwise the SLURM job will be cancelled after an hour. The `--mail-type` option will
result in an email being sent to you (or rather, to the currently logged-in user) in case
of any failures.
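
While the job runs, its state can be checked with `squeue`, the standard SLURM
queue listing. The guard below is only there to keep the sketch runnable on
hosts without SLURM:

```shell
# Sketch: list your SLURM jobs (id, name, state, elapsed time). On a host
# without SLURM the else-branch just reports that squeue is unavailable.
if command -v squeue >/dev/null 2>&1; then
    squeue -u "$USER" --format="%.10i %.20j %.8T %.10M"
else
    echo "squeue not available on this host"
fi
```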

Extending Correction Notebooks on User Request
----------------------------------------------

Internally, each automated correction run will trigger `calibrate_nbc.py` to be called
anew on the respective notebook. This means that any changes you save to this notebook
will be picked up for subsequent runs.

This can be useful to address user requests while the beam time is running. For this:

1. Create a working copy of the notebook in question, and create a commit of the
   production notebook to fall back to in case of problems::

      git add production_notebook_NBC.py
      git commit -m "Known working version before edits"
      cp production_notebook_NBC.py production_notebook_TEST.py

2. Add any features there and *thoroughly* test them.

3. When you are happy with the results, copy them over into the production notebook
   and save.

.. warning::

    Live editing of correction notebooks is fully at your own responsibility. Do not
    do it if you are not 100% sure you know what you are doing.

4. If it fails, revert to the original state, ideally via git::

      git checkout HEAD -- production_notebook_NBC.py

5. Any runs which were not corrected due to failures of the live edit can then be
   relaunched manually, assuming the correction notebook allows run and overwrite
   parameters::

      python calibrate_nbc.py ...... --run XYZ,ZXY-YYS --overwrite