SLURM, sbatch, partition, reservation
-------------------------------------
Scripts launched by the ``sbatch`` command can employ the magic cookie
``#SBATCH`` to pass options to SLURM, such as which partition to run on.
For it to work, the magic cookie has to be at the beginning of the line.
This means that:
* to comment out a magic cookie, adding another "#" in front of it is sufficient
* to document what an option does, it is best practice
to put the comment on the line before
Reserved partitions are of the form "upex_003333", where 3333 is the proposal
number. To check which reserved partitions exist, along with their start and
end dates, one can ``ssh`` to ``max-display`` and use the ``sview`` command.
.. image:: sview.png
To use a reserved partition with ``sbatch``, one can use the magic cookie
``#SBATCH --reservation=upex_003333``
instead of the usual
``#SBATCH --partition=upex``
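A minimal example script following these rules (the reservation name is a
placeholder for your proposal's reservation and the job body is illustrative):

.. code:: bash

    #!/bin/bash
    # run on the partition reserved for the proposal
    #SBATCH --reservation=upex_003333
    ##SBATCH --partition=upex
    # (the extra "#" above comments out the usual partition magic cookie)

    echo "job running on $(hostname)"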
Reading the bunch pattern
=========================
The bunch pattern table is an array of 2700 values per train (the maximum number of pulses at 4.5 MHz provided by the machine) and contains information on how the pulses are distributed among SASE 1, 2, 3, and the various lasers at European XFEL.
The data stored in the bunch pattern table (mnemonic *bunchPatternTable*) can be extracted using the wrappers to the `euxfel_bunch_pattern <https://pypi.org/project/euxfel-bunch-pattern/>`_ package as follows:
.. code:: ipython3
import toolbox_scs as tb
proposalNB = 2953
runNB = 507
run, ds = tb.load(proposalNB, runNB, "bunchPatternTable")
ppl_mask = tb.is_ppl(ds["bunchPatternTable"])
ppl_mask is a boolean DataArray of dimensions trainId x 2700, in which True values indicate that a pulse from the PP laser was triggered.
.. note::
The position of the PP laser pulses with respect to that of the SASE 3 pulses is arbitrary. The PP laser pattern always starts at pulse Id 0, while that of SASE 3 can vary, depending on the machine parameters. One can use the ToolBox function `align_ol_to_fel_pId()` to shift the PP laser pulse Ids to match those of SASE 3.
From this mask, one can obtain the number of pulses per train by summing along the 'pulse_slot' dimension:
.. code:: ipython3
ppl_npulses = ppl_mask.sum(dim='pulse_slot')
There is also the function `get_sase_pId()` that can be used to directly extract the pulse Ids of a particular location:
.. code:: ipython3
ppl_pId = tb.get_sase_pId(run, loc='scs_ppl')
This provides a list of the pulse Ids used during a run but does not track changes in the number of pulses. For this, the mnemonic `npulses_laser` can be loaded to get the number of pulses in each trainId, and the mnemonic `laser` can be loaded to get the pulse Ids of the scs_ppl in each trainId:
.. code:: ipython3
run, ds = tb.load(proposalNB, runNB, ['laser', 'npulses_laser'])
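As a minimal sketch (assuming `npulses_laser` is indexed by train Id, as loaded above), one can then check whether the number of PP laser pulses changed during the run:

.. code:: ipython3

    import numpy as np

    # unique numbers of PP laser pulses per train found in this run;
    # more than one value means the pulse pattern changed during the run
    print(np.unique(ds['npulses_laser']))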
Release Notes
=============
unreleased
----------
- **Bug fixes**
- fix :issue:`75` regarding pulse Id assignment when reading BAM data :mr:`272`
- fix :issue:`80` regarding new bunch pattern table size when reading BAM data :mr:`279`
- fix :issue:`84` regarding loading only a given amount of BOZ DSSC data in memory :mr:`288`
- fix :issue:`86` regarding loading XGM :mr:`291`
- fix :issue:`88` regarding loading data from multiple folders :mr:`303`
- **Improvements**
- update documentation on knife-edge scan and fluence calculation :mr:`276`, :mr:`278`
- update scannerY mnemonics :mr:`281`
- move loading logic to mnemonics :mr:`283`, :mr:`286`
- add MaranaX mnemonics :mr:`285`
- add Chem diagnostics and Gotthard II mnemonics :mr:`292`
- make hRIXS centroid threshold configurable :mr:`298`
- drop MaranaX non-existing mnemonics and add attributes on dataset :mr:`300`
- streamline digitizer functions :mr:`304`
- add BAM average and BAM feedbacks mnemonics, Viking mnemonics :mr:`305`, :mr:`306`
- improved function to load PES spectra :mr:`309`
- Gotthard-II mnemonics and pulse alignment :mr:`310`, :mr:`311`, :mr:`330`
- Adds ``AppleX`` polarization mnemonics :mr:`313`
- Add BAM 2955_S3 and update BAM mnemonics :mr:`315`
- improve peak-finding algorithm for digitizer trace peak extraction :mr:`312`
- update documentation on digitizer peak extraction :mr:`312`
- split documentation howtos and change theme :mr:`328`
- **New Features**
- fix :issue:`75` add frame counts when aggregating RIXS data :mr:`274`
- BOZ normalization using 2D flat field polylines for S K-edge :mr:`293`
- BOZ GPU notebook :mr:`294`
- update hRIXS class for MaranaX :mr:`287`
- introduce MaranaX class for parallelized centroiding :mr:`297`
- New PES functions to save / load average traces :mr:`309`
- save peaks from raw digitizer traces as processed data :mr:`312`
- allow different modes for peak integration :mr:`329`
1.7.0
-----
- **Bug fixes**
- fix :issue:`61` regarding sign of XAS in some cases :mr:`207`
- Use xarray.values instead of .to_numpy() for backward-compatibility :mr:`214`
- fix :issue:`23` regarding API documentation generation :mr:`221`
- fix :issue:`64` regarding loading a subset of trains :mr:`226`, :mr:`230`
- fix :issue:`67` regarding bunch pattern :mr:`245`
- fix :issue:`44` regarding notebook for DSSC geometry quadrants alignment :mr:`247`
- fix :issue:`52` use extra-geom detector helper function for azimuthal integration :mr:`248`
- fix :issue:`18` regarding using reserved partition with sbatch :mr:`250`
- fix :issue:`68` now using findxfel command :mr:`256`
- fix :issue:`69` to speed up BOZ Small data and deal with missing intra-darks :mr:`256`
- fix :issue:`71` regarding rounding error in BOZ analysis related to timing :mr:`256`
- fix :issue:`40` update documentation regarding toolbox deployment folder in scratch :mr:`268`
- **Improvements**
- remove calls to matplotlib tight_layout :mr:`206`
- Improved hRIXS class and utilities :mr:`182`
- Documentation on extracting digitizer peaks, clean up of digitizer functions :mr:`215`
- Improved peak-finding algorithm for digitizer traces :mr:`216`, :mr:`227`
- Only load bunch pattern table when necessary :mr:`234`, :mr:`245`
- Document the HRIXS class :mr:`238`
- notebook example of DSSC azimuthal integration for time delay scans :mr:`249`
- provide drop-intra-darks option in BOZ analysis :mr:`256`
- improve automatic ROIs finding for BOZ analysis :mr:`256`
- prevent flat field correction from turning negative in BOZ analysis :mr:`259`
- update documentation to the new exfel-python environment :mr:`266`
- **New Features**
- Read signal description from Fast ADC and ADQ412 digitizers :mr:`209`, :mr:`212`
- Mnemonics for XRD devices :mr:`208`
- Add function to align OL to FEL pulse Id :mr:`218`
- Add reflectivity routine :mr:`218`
- Possibility to extract run values of mnemonics :mr:`220`, :mr:`232`
- Add get_undulator_config function :mr:`225`
- Document the HRIXS class :mr:`238`
- Include double counts for hRIXS SPC algorithms :mr:`239`
- Add Viking spectrometer analysis class :mr:`240`
- Add GPU acceleration for BOZ correction determination :mr:`254`
- Issue a warning when loading data with > 5% missing trains :mr:`263`
1.6.0
-----
- **Bug fixes**
- fix :issue:`45` SLURM scripts embedded in, and download link available
from, the documentation :mr:`171`
- fix :issue:`8` regarding azimuthal integration with pyFAI and hexagonal
DSSC pixel splitting by providing an example notebook :mr:`174`
- fix :issue:`46` with a change in dask groupby mean behavior :mr:`174`
- fix :issue:`47` SLURM script not using the correct kernel :mr:`176`
- fix :issue:`51` make sure that BAM units are in ps :mr:`183`
- fix :issue:`50` and :issue:`54` relating to package dependencies
- fix :issue:`57` adds target mono energy mnemonic
- fix :issue:`55` implementing dask auto rechunking in notebooks
- fix :issue:`53` wrong flat field correction sometimes being calculated
- fix :issue:`56` future warning on xarray.ufuncs :mr:`189`
- **Improvements**
- update version of BAM mnemonics :mr:`175`
- update version of GATT-related mnemonics, add `transmission_col2` :mr:`172`
- reorganize the Howto section :mr:`169`
- improve SLURM scripts with named arguments :mr:`176`
- adds notebook for DSSC fine timing analysis :mr:`184` and :mr:`185`
- numerous improvements for the flat field correction calculation
in the BOZ analysis, including fitting domain functions, hexagonal
pixel lattice, possibility to switch off flat field symmetry constraints
and a refine fit function with regularization term :mr:`186`
- simplifies flat field calculation by using directly the refined fit
procedure which works with far fewer input parameters :mr:`202`
- **New Features**
- add routine for fluence calibration :mr:`180`
- add Fast ADC 2 mnemonics :mr:`200`
1.5.0
-----
- **Bug fixes**
- fix :issue:`39` providing a changelog in the documentation :mr:`164`
- fix :issue:`37` BOZ analysis :mr:`158`
- fix :issue:`36` mnemonics :mr:`159`
- fix :issue:`35` BOZ notebook dependencies :mr:`157`
- fix :issue:`34` BOZ time delay calculations and plotting :mr:`154`
- fix :issue:`32` significantly speeding up the XAS binning calculation :mr:`151`
- fix :issue:`27` improving BOZ analysis :mr:`146`
- fix :issue:`28` pp pattern in DSSC dask binning :mr:`144`
- fix :issue:`26` several BOZ analysis improvements :mr:`135`
- **Improvements**
- checks if single-version mnemonics is in all_sources :mr:`163`
- add :code:`get_bam_params()` :mr:`160`
- only check keys if mnemonic has more than one version :commit:`ae724d3c`
- add FFT focus lens mnemonics :mr:`156`
- add dask as dependency :mr:`155`
- renamed FFT sample Z mnemonics :mr:`153`
- add virtual sample camera LLC_webcam1 into mnemonics :mr:`152`
- fix digitizer check params :mr:`149`
- improve installation instruction :mr:`145`
- add Newton camera :mr:`142`
- simplified :code:`mnemonics_for_run()` :commit:`3cc98c16`
- adds Horizontal FDM to mnemonics :mr:`141`
- add setup documentation :mr:`140`
- numerous PES fixes :mr:`143`, :mr:`130`, :mr:`129`, :mr:`138`, :mr:`137`
- change in FFT sample Z mnemonics :mr:`125` and :mr:`124`
- add MTE3 camera :mr:`123`
- add KB benders averager :mr:`120` and :mr:`119`
- **New Features**
- implement the Beam-splitting Off-axis Zone-plate analysis:
:mr:`150`, :mr:`139`, :mr:`136`, :mr:`134`, :mr:`133`, :mr:`132`,
:mr:`131`, :mr:`128`, :mr:`127`, :mr:`126`, :mr:`115`
- introduce dask assisted DSSC binning, fixing :issue:`24` and :issue:`17`
1.4.0
-----
- **Bug fixes**
- fix :issue:`22` using extra-data read machinery :mr:`105`
- fix :issue:`21` and :issue:`12` introducing mnemonics version :mr:`104`
- **Improvements**
- fix :code:`get_array()`, add wrappers to some of `extra_data` basic
functions :mr:`116`
- new FastADC mnemonics :mr:`112`
- refactor packaging :mr:`106`
- add :code:`load_bpt()` function :commit:`9e2c1107`
- add XTD10 MCP mnemonics :commit:`8b550c9b`
- add :code:`digitizer_type()` function :commit:`75eb0bca`
- separate FastADC and ADQ412 code :commit:`939d32b9`
- documentation centralized on rtd.xfel.eu :mr:`103`
- simplify digitizer functions and pulseId coordinates assignment for XGM
and digitizers :mr:`100`
- **New Features**
- base knife-edge scan analysis implementation :mr:`107`
- add PES :mr:`108`
- integrate documentation to rtd.xfel.eu :mr:`103` and :mr:`99`
1.1.2rc1
--------
- **Bug Fixes**
- **Improvements**
- **New Feature**
- introduce change in package structure, sphinx documentation and
DSSC binning class :mr:`87`
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
# -- Project information -----------------------------------------------------
project = 'SCS Toolbox'
copyright = '2025, SCS'
author = 'SCS'
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = ''
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.coverage',
'sphinx.ext.napoleon',
'sphinx.ext.extlinks',
'autoapi.extension',
'nbsphinx'
]
nbsphinx_execute = 'never'
autoapi_dirs = ['../src/toolbox_scs']
autoapi_ignore = ['*/deprecated/*']
extlinks = {'issue': ('https://git.xfel.eu/SCS/ToolBox/-/issues/%s',
'issue:%s'),
'mr': ('https://git.xfel.eu/SCS/ToolBox/-/merge_requests/%s',
'MR:%s'),
'commit': ('https://git.xfel.eu/SCS/ToolBox/-/commit/%s',
'commit:%s')
}
# Don't add .txt suffix to source files:
html_sourcelink_suffix = ''
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'pydata_sphinx_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'SCSToolboxdoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'SCSToolbox.tex', 'SCS Toolbox Documentation',
'SCS', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'scstoolbox', 'SCS Toolbox Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'SCSToolbox', 'SCS Toolbox Documentation',
author, 'SCSToolbox', 'One line description of project.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
Digitizers
==========
.. toctree::
Extracting digitizer peaks <How to extract peaks from digitizer traces>
Point detectors
---------------
Detectors that produce one point per pulse, or 0D detectors, are all handled in
a similar way. Such detectors are, for instance, the X-ray Gas Monitor (XGM),
the Transmitted Intensity Monitor (TIM), the electron Bunch Arrival
Monitor (BAM), or the photodiodes monitoring the PP laser.
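For instance, a minimal sketch of loading such point-detector data with the ToolBox; the proposal and run numbers and the XGM mnemonic used here are only illustrative and should be adapted to your experiment:

.. code:: ipython3

    import toolbox_scs as tb

    # the proposal/run numbers and the 'SCS_SA3' XGM mnemonic below are
    # assumptions for illustration; check tb.mnemonics for available names
    run, ds = tb.load(2953, 507, ['SCS_SA3'])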
Basic principle
===============
In a DSSC binner object we define maps, here called 'binners'. They define into which bucket a value along a certain dimension will be put.
map: coordinate array along specified dimension -> array of buckets
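A minimal illustration of such a map, assuming four pulses sorted alternately into 'image' and 'dark' buckets:

.. code:: ipython3

    # coordinate array along the 'pulse' dimension
    pulse_coords = [0, 1, 2, 3]
    # bucket assigned to each coordinate value: the binned data will then
    # have a 'pulse' dimension of length 2 ('image' and 'dark')
    buckets = ['image', 'dark', 'image', 'dark']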
Example 1
=========
Bin static run
~~~~~~~~~~~~~~
.. code:: ipython3
import os
import logging
import importlib
import numpy as np
import xarray as xr
import pandas as pd
import extra_data as ed
import toolbox_scs as tb
import toolbox_scs.detectors as tbdet
import toolbox_scs.detectors.dssc_plot as dssc_plot
#logging.basicConfig(level=logging.INFO)
.. code:: ipython3
# run settings
proposal_nb = 2711
run_nb = 97
.. code:: ipython3
# create an extra_data run object and collect detector information
run = tb.load_run(proposal_nb, run_nb)
dssc_info = tbdet.load_dssc_info(proposal_nb, run_nb)
.. code:: ipython3
# create toolbox bin object
bin_obj = tbdet.DSSCBinner(proposal_nb, run_nb)
.. code:: ipython3
# create array that will map the pulse dimension into
# buckets 'image' and 'dark' -> resulting dimension length = 2
buckets_pulse = ['image', 'dark'] * 30 # put all 60 pulses into buckets image and dark
# create array that will map the trainId dimension into
# buckets [0] -> resulting dimension length = 1
buckets_train = np.zeros(len(run.train_ids)).astype(int) # put all train Ids to the same bucket called 0.
.. code:: ipython3
#create binners (xarray data arrays that basically act as a map)
fpt = dssc_info['frames_per_train']
binnertrain = tbdet.create_dssc_bins("trainId",run.train_ids,buckets_train)
binnerpulse = tbdet.create_dssc_bins("pulse",np.linspace(0,fpt-1,fpt, dtype=int),buckets_pulse)
.. code:: ipython3
# add binners to bin object
bin_obj.add_binner('trainId', binnertrain)
bin_obj.add_binner('pulse', binnerpulse)
.. code:: ipython3
# get a prediction of what the binned data will look like (minus the module dimension)
bin_obj.get_info()
.. parsed-literal::
Frozen(SortedKeysDict({'trainId': 1, 'pulse': 2, 'x': 128, 'y': 512}))
.. code:: ipython3
# bin 2 modules in parallel
mod_list = [0,15]
bin_obj.bin_data(chunksize=248, modules=mod_list, filepath='./')
.. code:: ipython3
# Save metadata into file
fname = 'testfile.h5'
tbdet.save_xarray(fname, binnertrain, group='binner1', mode='a')
tbdet.save_xarray(fname, binnerpulse, group='binner2', mode='a')
Example 2
=========
bin pump-probe data
~~~~~~~~~~~~~~~~~~~
.. code:: ipython3
# run settings
proposal_nb = 2212
run_nb = 235
.. code:: ipython3
# Collect information about run
run_obj = tb.load_run(proposal_nb, run_nb)
detector_info = tbdet.load_dssc_info(proposal_nb, run_nb)
.. code:: ipython3
bin_obj = tbdet.DSSCBinner(proposal_nb, run_nb)
.. code:: ipython3
# define buckets
buckets_trainId = (tb.get_array(run_obj, 'PP800_PhaseShifter', 0.03)).values
buckets_pulse = ['pumped', 'unpumped'] * 10
# create binner
binnerTrain = tbdet.create_dssc_bins("trainId",
detector_info['trainIds'],
buckets_trainId)
binnerPulse = tbdet.create_dssc_bins("pulse",
np.linspace(0,19,20, dtype=int),
buckets_pulse)
.. code:: ipython3
bin_obj.add_binner('trainId', binnerTrain)
bin_obj.add_binner('pulse', binnerPulse)
.. code:: ipython3
# get a prediction of what the binned data will look like (minus the module dimension)
bin_obj.get_info()
.. parsed-literal::
Frozen(SortedKeysDict({'trainId': 271, 'pulse': 2, 'x': 128, 'y': 512}))
.. code:: ipython3
# bin 2 modules using the joblib module to process the two modules
# in parallel.
mod_list = [0, 15]
bin_obj.bin_data(chunksize=248, modules=mod_list, filepath='./')
.. _how to's:
How to's
~~~~~~~~
.. toctree::
:maxdepth: 1
Loading data <load>
Reading bunch pattern <bunch_pattern_decoding>
Knife-edge and fluence calculation <Knife edge scan and fluence calculation>
Extracting digitizers traces <digitizers>
Fine timing <transient reflectivity>
Processing DSSC data <DSSC>
BOZ analysis <BOZ>
PES analysis <PES_spectra_extraction>
HRIXS analysis <HRIXS>
Viking analysis <Analysis_of_Viking_spectrometer_data>
SLURM
SCS Toolbox
===========
.. toctree::
:caption: Contents:
:maxdepth: 2
howtos.rst
maintainers.rst
changelog.rst
Installation
------------
Recommended: Proposal Environment setup
+++++++++++++++++++++++++++++++++++++++
The proposal-specific environment is installed by running the following in a
shell terminal on Maxwell:
.. code:: bash
module load exfel exfel-python
scs-activate-toolbox-kernel --proposal 2780
where 2780 is the proposal number in this example. After this, and after refreshing the
browser, a new kernel named ``SCS Toolbox (p002780)`` is available and should
be used to run Jupyter notebooks on the Maxwell Jupyter hub.
Figures setup: enabling matplotlib constrained layout
+++++++++++++++++++++++++++++++++++++++++++++++++++++
To get the best-looking figures generated by the SCS Toolbox, you
need to enable the experimental constrained_layout_ solver in matplotlib. This
is done in a Jupyter notebook by adding the following lines at the start:
.. code:: python
import matplotlib.pyplot as plt
plt.rcParams['figure.constrained_layout.use'] = True
.. _constrained_layout: https://matplotlib.org/stable/tutorials/intermediate/constrainedlayout_guide.html
Alternative: Manual ToolBox Installation
++++++++++++++++++++++++++++++++++++++++
The ToolBox may be installed in any environment. However, it depends on the
extra_data and euxfel_bunch_pattern packages, which are not official
third-party Python modules. In environments where these are not present, they
need to be installed by hand.
In most cases, you will want to install it in the *exfel_python* environment,
which is the one corresponding to the *xfel* kernel on max-jhub.
To do so, first activate that environment:
.. code:: bash
module load exfel exfel-python
Then, check that the scs_toolbox is not already installed:
.. code:: bash
pip show toolbox_scs
If the toolbox has been installed in your home directory previously, everything
is set up. If you need to upgrade the toolbox to a more recent version, you
have to either uninstall the current version:
.. code:: bash
pip uninstall toolbox_scs
or, if it was installed in development mode, go to the toolbox directory and
pull the latest commits from git:
.. code:: bash
cd #toolbox top folder
git pull
Otherwise, it needs to be installed (only once). In that case, you first need to
clone the toolbox somewhere (in your home directory, for example) and install
the package. Here the -e flag installs the package in development mode, which
means no files are copied, only linked, so that any changes made to the
toolbox files take effect:
.. code:: bash
cd ~
git clone https://git.xfel.eu/SCS/ToolBox.git
cd ~/ToolBox
pip install --user -e ".[maxwell]"
Transferring Data
-----------------
The DAQ system saves data on the online cluster. To analyze data on the
Maxwell offline cluster, it needs to be transferred there. This is achieved by
logging in at:
https://in.xfel.eu/metadata
and then navigating to the proposal, then to the ``Runs`` tab from where runs can
be transferred to the offline cluster by marking them as ``Good`` in the
``Data Assessment``. Depending on the amount of data in the run, this can take a
while.
.. image:: metadata.png
Processing Data
---------------
On the Maxwell Jupyter hub:
https://max-jhub.desy.de
notebooks can be executed using the corresponding Proposal Environment
kernel or the ``xfel`` kernel if the toolbox was manually installed. For quick
startup, example notebooks (.ipynb files) can be directly downloaded from the
:ref:`How to's` section by clicking on the ``View page source`` link.
Contribute
----------
For readability, contributions should preferably comply with the PEP8_ code style guidelines.
.. _PEP8: https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
The associated code checker, flake8, can be installed via PyPI.
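For example, a minimal sketch to install it for your user and check a modified file before committing (the file name is a placeholder):

.. code:: bash

    pip install --user flake8
    flake8 path/to/modified_file.py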
Loading run data
================
.. toctree::
Loading_data_in_memory
Short version
-------------
Loading data in memory is performed as follows:
**Option 1**:
.. code:: python3
import toolbox_scs as tb
# optional, check available mnemonics
# print(tb.mnemonics)
# fields is a list of available mnemonics, representing the data
# to be loaded
fields = ["FastADC4raw", "scannerX"]
proposalNr = 2565
runNr = 19
run, data = tb.load(proposalNr, runNr, fields)
run is an extra_data DataCollection and data is an xarray Dataset containing all variables listed in fields. All variables are aligned by train Id.
**Option 2**:
.. code:: python3
import toolbox_scs as tb
# get entry for single data source defined in mnemonics
proposalNr = 2565
runNr = 19
run = tb.open_run(proposalNr, runNr)
data = tb.get_array(run, "scannerX")
run is an extra_data DataCollection and data is an xarray DataArray for a single data source.
.. _maintainers:
Maintainers
~~~~~~~~~~~
Creating a Toolbox environment in the proposal folder
-----------------------------------------------------
A Toolbox environment can be created by running the following commands
on Maxwell:
.. code:: shell
module load exfel exfel-python
scs-create-toolbox-env --proposal <PROPOSAL>
where ``<PROPOSAL>`` is the desired proposal number. This will create a Python
environment and will download and install the Toolbox source code. It will
result in the creation of the following folders in the path
``<PROPOSAL_PATH>/scratch``.
.. code-block::
<PROPOSAL_PATH>/scratch/
├─ checkouts/
│ ├─ toolbox_<PROPOSAL>/
│ │ ├─ <source code>
├─ envs/
│ ├─ toolbox_<PROPOSAL>/
│ │ ├─ <Python files>
The ``checkouts`` folder contains the Toolbox source code, labeled
according to the environment identifier, which is the proposal number
by default. The downloaded code defaults to the **master** version at the
time the environment is created.
The ``envs`` folder contains the Python environment with the packages necessary
to run the Toolbox. It is also correspondingly labeled according to the
environment identifier, which is the proposal number by default.
.. note::
One can find the proposal path by running ``findxfel <PROPOSAL>``.
It is good practice to tag the Toolbox version at a given milestone.
This version can then be supplied to the script as:
.. code:: shell
scs-create-toolbox-env --proposal <PROPOSAL> --version <VERSION>
It might also be helpful to supply an identifier to distinguish environments
from each other. This can be done by running:
.. code:: shell
scs-create-toolbox-env --proposal <PROPOSAL> --identifier <IDENTIFIER>
The environment would then be identified as ``toolbox_<IDENTIFIER>`` instead
of ``toolbox_<PROPOSAL>``.
Installing additional packages
------------------------------
In order to install additional packages in a Toolbox environment, one should
run the following commands:
.. code:: shell
cd <PROPOSAL_PATH>/scratch/
source envs/toolbox_<IDENTIFIER>/bin/activate
pip install ...
There's no need to load the ``exfel`` module.
Updating the source code
------------------------
Should changes to the Toolbox code be desired, be it bug fixes or
additional features during beamtime, one can freely modify the source code in
the following path: ``<PROPOSAL_PATH>/scratch/checkouts/toolbox_<IDENTIFIER>``.
The contents of this folder are a normal git repository. Changes can easily
be made (e.g., editing a line of code, checking out a different branch, etc.)
and are immediately reflected in the environment.
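For example, a minimal sketch of switching the checkout to a different branch (``<PROPOSAL_PATH>``, ``<IDENTIFIER>`` and ``<BRANCH>`` are placeholders):

.. code:: shell

    cd <PROPOSAL_PATH>/scratch/checkouts/toolbox_<IDENTIFIER>
    git fetch origin
    git checkout <BRANCH>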