# Advanced Topics

The following tasks should only be carried out by trained staff.

## Request dark characterization

The script runs the dark characterization notebook with default parameters via the web service. The data needs to be transferred via the MDC; however, the web service will wait for any pending transfer to complete. The detector is chosen automatically based on the given instrument (`--instrument`). For AGIPD, LPD, and Jungfrau, runs for the three gain stages need to be given (`--run-high`, `--run-med`, `--run-low`). For FastCCD and ePIX, only a single run needs to be given (`--run`).

The complete list of parameters is:

    -h, --help            show this help message and exit
    --proposal PROPOSAL   The proposal number, without leading p, but with
                          leading zeros
    --instrument {SPB,MID,FXE,SCS,SQS,HED}
                          The instrument
    --cycle CYCLE         The facility cycle
    --run-high RUN_HIGH   Run number of high gain data as an integer
    --run-med RUN_MED     Run number of medium gain data as an integer
    --run-low RUN_LOW     Run number of low gain data as an integer
    --run RUN             Run number as an integer
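
For example, a dark characterization request for AGIPD at MID might look as follows. The entry point `request_darks.py` is a placeholder here; use whichever script your deployment provides:

    python request_darks.py --proposal 900071 --instrument MID \
        --cycle 201930 --run-high 10 --run-med 11 --run-low 12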

The path to the data files is derived from the script parameters. A typical data path, which can be found in the MDC, is:

    /gpfs/exfel/exp/MID/201930/p900071/raw

Where `MID` is the instrument name, `201930` the cycle, and `900071` the proposal number.
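
In terms of the script parameters, the path thus follows the pattern below (a sketch of the layout implied above, not a literal excerpt from the script):

    /gpfs/exfel/exp/<instrument>/<cycle>/p<proposal>/raw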

## Extending Correction Notebooks on User Request

Internally, each automated correction run will trigger `calibrate_nbc.py` to be called anew on the respective notebook. This means that any changes you save to this notebook will be picked up for subsequent runs.

This can be useful for addressing user requests while the pipeline is running. To do so:

  1. create a working copy of the notebook in question, and commit the production notebook so you can fall back to it in case of problems:

        git add production_notebook_NBC.py
        git commit -m "Known working version before edits"
        cp production_notebook_NBC.py production_notebook_TEST.py

  2. add any features there and test them thoroughly

  3. when you are happy with the results, copy them over into the production notebook and save.

???+ warning

    Live editing of correction notebooks is entirely your responsibility. Do
    not do it if you are not 100% sure you know what you are doing.

    1. If it fails, revert to the original state, ideally via git:

            git checkout HEAD -- production_notebook_NBC.py

    2. Any runs which did not correct due to failures of the live edit can then
       be relaunched manually, assuming the correction notebook allows run and
       overwrite parameters:

            xfel-calibrate ...... --run XYZ,ZXY-YYS --overwrite

## Using a Parameter Generator Function

By default, the parameters to be exposed to the command line are deduced from the first code cell of the notebook, after the notebook itself has been resolved from the detector and characterization type. For some applications it might be beneficial to define a context-specific parameter range within the same notebook, based on additional user input. This can be done via a parameter generation function defined in one of the code cells:

    def extend_parms(detector_instance):
        from iCalibrationDB import Conditions
        import inspect
        existing = set()

        def extract_parms(cls):
            # getfullargspec replaces getargspec, which was removed in Python 3.11
            spec = inspect.getfullargspec(cls.__init__)
            args = spec.args
            defaults = spec.defaults or ()  # guard against constructors without defaults
            pList = []
            # walk the arguments back to front so they align with the defaults tuple
            for i, arg in enumerate(args[1:][::-1]):
                if arg in existing:
                    continue

                existing.add(arg)

                if i < len(defaults):
                    default = defaults[::-1][i]
                    if str(default).isdigit():
                        pList.append("{} = {}".format(arg, default))
                    elif default is None or default == "None":
                        pList.append("{} = \"None\"".format(arg))
                    else:
                        pList.append("{} = \"{}\"".format(arg, default))
                else:
                    pList.append("{} = 0.  # required".format(arg))
            return set(pList[::-1])  # mandatories first

        dtype = "LPD" if "LPD" in detector_instance.upper() else "AGIPD"
        all_conditions = set()
        for c in dir(Conditions):
            if c[:2] != "__":
                condition = getattr(Conditions, c)
                parms = extract_parms(getattr(condition, dtype))
                all_conditions.update(parms)
        return "\n".join(all_conditions)

???+ note

    Note how all imports are inlined, as the function is executed outside
    the notebook context.

In the example, the function generates a list of additional parameters depending on the `detector_instance` given. Here, `detector_instance` is defined in the first code cell in the usual way. Any other parameters defined in the first code cell whose names match those in the generator function's signature are passed to this function as well. The function should then return a string containing additional code to be appended to the first code cell.
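
Conceptually, the pieces fit together as sketched below. The cell contents are illustrative, and the concatenation is shown only as a comment because it happens inside the tooling:

    # first code cell of the notebook, defining parameters the usual way
    detector_instance = "FXE_DET_LPD1M-1"  # name matches the generator's signature

    # what the tooling conceptually does before exposing the parameters:
    # first_cell_code += "\n" + extend_parms(detector_instance)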

To make use of this functionality, the parameter generator function needs to be configured in `notebooks.py`, e.g.:

    ...
    "GENERIC": {
        "DBTOH5": {
            "notebook": "notebooks/generic/DB_Constants_to_HDF5_NBC.ipynb",
            "concurrency": {"parameter": None,
                            "default concurrency": None,
                            "cluster cores": 32},
            "extend parms": "extend_parms",
        },
    }
    ...

To generically query which parameters are defined in the first code cell, the code execution history feature of IPython can be used:

    ip = get_ipython()
    session = ip.history_manager.get_last_session_id()
    first_cell = next(ip.history_manager.get_range(session, 1, 2, raw=True))
    _, _, code = first_cell
    code = code.split("\n")
    parms = {}
    for c in code:
        # only consider simple assignments; skip blank lines and comments
        if "=" not in c or c.strip().startswith("#"):
            continue
        n, v = c.split("=", 1)  # split only on the first "=" so values may contain one
        n = n.strip()
        v = v.strip()
        try:
            parms[n] = float(v)
        except ValueError:
            parms[n] = v
        if parms[n] == "None" or parms[n] == "'None'":
            parms[n] = None

This will create a dictionary `parms` which contains all parameters either as `float` or `str` values.
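
A quick illustration with hypothetical first-cell contents (the values are made up):

    # hypothetical first-cell source, as the history manager would return it
    code = '''\
    in_folder = "/gpfs/exfel/exp/MID/201930/p900071/raw"
    modules = 16
    local_output = None
    '''.split("\n")

    # running the loop above over these lines yields:
    # {'in_folder': '"/gpfs/exfel/exp/MID/201930/p900071/raw"',  # quotes kept: raw cell text
    #  'modules': 16.0,                                          # numbers are parsed as float
    #  'local_output': None}                                     # the string "None" maps to None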