From e23b35c51ac026a44bda434c4eb4b311e8a267c4 Mon Sep 17 00:00:00 2001
From: Steffen Hauf <haufs@max-exfl057.desy.de>
Date: Mon, 11 Jun 2018 22:42:56 +0200
Subject: [PATCH] More documentation

---
 docs/source/.~how_it_works.rst |   0
 docs/source/advanced.rst       | 111 +++++++++++++++++++++++++++++++++
 docs/source/how_it_works.rst   |   3 +-
 docs/source/index.rst          |   1 +
 4 files changed, 114 insertions(+), 1 deletion(-)
 delete mode 100644 docs/source/.~how_it_works.rst
 create mode 100644 docs/source/advanced.rst

diff --git a/docs/source/.~how_it_works.rst b/docs/source/.~how_it_works.rst
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/source/advanced.rst b/docs/source/advanced.rst
new file mode 100644
index 000000000..b9cb7a547
--- /dev/null
+++ b/docs/source/advanced.rst
@@ -0,0 +1,111 @@
+.. _advanced_topics:
+
+Advanced Topics
+===============
+
+The following tasks should only be carried out by trained staff.
+
+Initiating Automated Offline Correction
+---------------------------------------
+
+To initiate the automatic offline correction for a given detector and instrument,
+navigate to the root path of the European XFEL Offline Calibration tool chain.
+
+It is best to run the following as a SLURM batch job itself, as this ensures it is
+independent of the terminal you are on and will persistently run for a given time period:
+
+1. Do a trial run of `automode_nbc.py` to test your settings::
+
+    python3 automode_nbc.py --input /gpfs/exfel/exp/FXE/201701/p002052/raw/ \
+        --output /gpfs/exfel/exp/FXE/201701/p002052/proc/ \
+        --base-cal-store /gpfs/.../lpd_ci_store_CHANID_16_5pf.h5 \
+        --offset-cal-store  /gpfs/.../lpd_offset_store_r0279_r0280_r0281_5pf.h5 \
+        --mem-cells 128 \
+        --detector LPD \
+        --type correct \
+        --partition exfel \
+        --ff-cal-store /gpfs/.../lpd_flatfield_store_r0230.h5 \
+        --only-new \
+        --start-time 2018-06-08
+
+Here `input` should refer to the proposal `raw` path and `output` to the proposal `proc`
+path. The cal-store options refer to constants loaded from files; usually these are
+overridden by constants from the calibration database, but a file-based fallback is
+currently required. Note the `CHANID` field in the base store constant: it signifies
+that module numbers are to be inserted there. Finally, `only-new` should be used in
+combination with `start-time` if previous data in the proposal was already calibrated
+and should be left untouched. For fresh proposals these options can be omitted.
+
+For testing you can also omit `only-new` and `start-time` and instead use the `--run XXX`
+or `--run XXX-YYY` option to do a trial run on a specific set of runs (possibly outputting
+into a temporary directory). Use the `--overwrite` option to repeat such a run.
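+
+As a sketch, such a trial invocation might look like the following; the run numbers and
+the temporary output directory are hypothetical, and the remaining paths repeat the
+placeholders from above::
+
+    python3 automode_nbc.py --input /gpfs/exfel/exp/FXE/201701/p002052/raw/ \
+        --output /gpfs/exfel/tmp/trial_output/ \
+        --base-cal-store /gpfs/.../lpd_ci_store_CHANID_16_5pf.h5 \
+        --offset-cal-store /gpfs/.../lpd_offset_store_r0279_r0280_r0281_5pf.h5 \
+        --mem-cells 128 \
+        --detector LPD \
+        --type correct \
+        --partition exfel \
+        --ff-cal-store /gpfs/.../lpd_flatfield_store_r0230.h5 \
+        --run 155-157 \
+        --overwrite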
+
+Once you are satisfied with the results, place everything in a bash script which will
+initiate the actual startup::
+
+    nano automode_startup.sh
+   
+In this script make sure that a Python environment with the necessary tools is available.
+The most straightforward choice is the environment you have installed the tool chain in.
+The script should thus look similar to this::
+   
+    #!/bin/bash
+    source /gpfs/path/to/karabo/activate
+    cd /gpfs/path/to/pycalibration
+
+    python3 automode_nbc.py --input /gpfs/exfel/exp/FXE/201701/p002052/raw/ \
+        --output /gpfs/exfel/exp/FXE/201701/p002052/proc/ \
+        --base-cal-store /gpfs/.../lpd_ci_store_CHANID_16_5pf.h5 \
+        --offset-cal-store  /gpfs/.../lpd_offset_store_r0279_r0280_r0281_5pf.h5 \
+        --mem-cells 128 \
+        --detector LPD \
+        --type correct \
+        --partition exfel \
+        --ff-cal-store /gpfs/.../lpd_flatfield_store_r0230.h5 \
+        --only-new \
+        --start-time 2018-06-08
+        
+Now launch the script as a slurm job::
+
+    sbatch -p exfel -t 178:00:00 --mail-type=ALL automode_startup.sh
+    
+Make sure to pass the `-t` option and set it to cover the expected beam-time duration;
+otherwise the SLURM job will be cancelled after an hour. The `--mail-type` option will
+result in an email being sent to you (or rather, the currently logged-in user) in case
+of failures.
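+
+To check that the startup job is actually running, or to cancel it, the standard SLURM
+commands can be used; `<jobid>` below stands for the job id reported by `sbatch`::
+
+    squeue -u $USER     # list your pending and running jobs
+    scancel <jobid>     # cancel the startup job again if needed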
+
+
+Extending Correction Notebooks on User Request
+----------------------------------------------
+
+Internally, each automated correction run will call `calibrate_nbc.py` anew on the
+respective notebook. This means that any changes you save to this notebook will be
+picked up for subsequent runs.
+
+This can be useful to add user requests while running. For this:
+
+1. create a working copy of the notebook in question, and create a commit of the
+   production notebook to fall back to in case of problems::
+   
+   git add production_notebook_NBC.py
+   git commit -m "Known working version before edits"
+   cp production_notebook_NBC.py production_notebook_TEST.py
+   
+2. add any features there and *thoroughly* test them
+3. when you are happy with the results, copy them over into the production notebook and
+   save.
+ 
+.. warning::
+
+    Live editing of correction notebooks is entirely your responsibility. Do not do it
+    if you are not 100% sure you know what you are doing.
+    
+4. If it fails, revert to the original state, ideally via git::
+
+       git checkout HEAD -- production_notebook_NBC.py 
+
+5. Any runs which were not corrected due to failures of the live edit can then be
+   relaunched manually, assuming the correction notebook accepts run and overwrite parameters::
+   
+       python calibrate_nbc.py ...... --run XYZ,ZXY-YYS --overwrite
+       
diff --git a/docs/source/how_it_works.rst b/docs/source/how_it_works.rst
index 9534ee367..82ba28d46 100644
--- a/docs/source/how_it_works.rst
+++ b/docs/source/how_it_works.rst
@@ -9,7 +9,8 @@ to expose Jupyter_ notebooks to a command line interface.
 Additionally, it leverages the DESY/XFEL Maxwell cluster to run these jobs in parallel
 via SLURM_.
 
-Here is a list of :ref:`available_notebooks`.
+Here is a list of :ref:`available_notebooks`. See the :ref:`advanced_topics` for details
+on usage as detector group staff.
 
 If you would like to integrate additional notebooks please see the :ref:`development_workflow`.
 
diff --git a/docs/source/index.rst b/docs/source/index.rst
index c62bd4e99..b7ac0c626 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -16,6 +16,7 @@ Contents:
    configuration
    workflow
    available_notebooks
+   advanced
 
 
 
-- 
GitLab