diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index ffd5e68a5a315ab72d3fac0a8dd76a895d1b1e17..f9cef0dba0fe74df91148db73f84c1a4527c4151 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -8,6 +8,7 @@ pages:
     - apt-get install -y pandoc
     - pip3 install sphinx-autoapi
     - pip3 install nbsphinx
+    - pip3 install pydata-sphinx-theme
     - sphinx-build -b html doc public
   pages: True
   artifacts:
diff --git a/doc/BOZ.rst b/doc/BOZ.rst
new file mode 100644
index 0000000000000000000000000000000000000000..212b3fdf4512e56e127164316e4d3a40c3363277
--- /dev/null
+++ b/doc/BOZ.rst
@@ -0,0 +1,55 @@
+BOZ: Beam-Splitting Off-axis Zone plate analysis
+------------------------------------------------
+
+The BOZ analysis consists of 4 notebooks and a script. The first notebook
+:doc:`BOZ analysis part I.a Correction determination <BOZ analysis part I.a Correction determination>`
+is used to determine all the necessary corrections, that is the flat field
+correction from the zone plate optics and the non-linearity correction from the
+DSSC gain. The inputs are a dark run and a run with X-rays on three broken or
+empty membranes. For the latter, an alternative is to use pre-edge data on an
+actual sample. The result is a JSON file that contains the flat field and
+non-linearity corrections as well as the parameters used for their determination,
+such that the procedure can be reproduced and investigated in case of issues. The
+determination of the flat field correction is rather quick (a few minutes) and is
+the most important correction for the change in XAS computed from the -1st and
++1st order. For a quick correction of the online preview one can bypass the
+non-linearity calculation by taking the JSON file as soon as it appears.
+The determination of the non-linearity correction takes a lot longer, some
+2 to 8 hours depending on the number of pulses in the
+train. For this reason, the computation can also be done on GPUs in about 30 min
+instead. A GPU notebook adapted for CHEM experiments with a liquid jet and with
+normalization implemented for the S K-edge is available at
+:doc:`OnlineGPU BOZ analysis part I.a Correction determination S K-egde <OnlineGPU BOZ analysis part I.a Correction determination S K-egde>`.
+
+The other option is to use a script
+that can be downloaded from :download:`scripts/boz_parameters_job.sh` and
+reads as:
+
+.. literalinclude:: scripts/boz_parameters_job.sh
+   :language: bash
+   :linenos:
+
+It uses the first notebook and is launched via SLURM:
+
+``sbatch ./boz_parameters_job.sh -p 2937 -d 615 -r 614 -g 3``
+
+where 2937 is the proposal number, 615 is the dark run number,
+614 is the run on 3 broken membranes and 3 is
+the DSSC gain in photons per bin. The proposal number is defined inside the
+script file.
+
+The second notebook
+:doc:`BOZ analysis part I.b Correction validation <BOZ analysis part I.b Correction validation>` can be used to check how well the calculated corrections still
+work on a characterization run recorded later, i.e. on 3 broken or empty membranes.
+
+The third notebook
+:doc:`BOZ analysis part II.1 Small data <BOZ analysis part II.1 Small data>`
+then uses the JSON correction file to load all needed corrections and
+process a run, saving the ROI-extracted DSSC data as well as aligning them to
+photon energy and delay stage in a small data h5 file.
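+
+For a quick look at the result before the binning step described below, the
+small data file can be opened with xarray. The following is only a minimal
+sketch: it assumes the file was saved in an xarray-readable HDF5/netCDF format
+and uses a hypothetical file name.
+
+.. code:: python
+
+    import xarray as xr
+
+    # hypothetical file name; the actual name is set in the part II.1 notebook
+    ds = xr.open_dataset('p2937_r0614_small_data.h5')
+    print(ds)         # extracted ROI variables
+    print(ds.coords)  # e.g. photon energy and delay stage position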
+
+That small data h5 file can then be loaded and the data binned to compute a
+spectrum or a time resolved XAS scan using the fourth and final notebook
+:doc:`BOZ analysis part II.2 Binning <BOZ analysis part II.2 Binning>`.
+
+
diff --git a/doc/DSSC.rst b/doc/DSSC.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f4738b9c162500daf447921bd935309f4dc10337
--- /dev/null
+++ b/doc/DSSC.rst
@@ -0,0 +1,103 @@
+DSSC
+----
+
+DSSC data binning
+#################
+
+In scattering experiments one typically wants to bin DSSC image data versus
+time delay between pump and probe or versus photon energy. After this first
+data reduction step, one can do azimuthal integration on a much smaller
+amount of data.
+
+The DSSC data binning procedure is based on the notebook
+:doc:`Dask DSSC module binning <Dask DSSC module binning>`. It performs
+DSSC data binning against a coordinate specified by *xaxis*, which can
+be *nrj* for the photon energy, *delay* in which case the delay stage
+position will be converted to picoseconds and corrected by the BAM, or
+another slow data channel. A specific pulse pattern can be defined, such as:
+
+.. code:: python
+
+    ['pumped', 'unpumped']
+
+which will be repeated. XGM data will also be binned similarly to the DSSC
+data.
+
+Since this data reduction step can be quite time consuming for large datasets,
+it is recommended to launch the notebook via a SLURM script. The script can be
+downloaded from :download:`scripts/bin_dssc_module_job.sh` and reads as:
+
+.. literalinclude:: scripts/bin_dssc_module_job.sh
+   :language: bash
+   :linenos:
+
+It is launched with the following:
+
+.. code:: bash
+
+    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 0 -x delay -b 0.1
+    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 1 -x delay -b 0.1
+    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 2 -x delay -b 0.1
+    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 3 -x delay -b 0.1
+
+where 2719 is the proposal number, 180 is the dark run number, 179 is the run
+number and 0, 1, 2 and 3 are the 4 module groups, each job processing a set of
+4 DSSC modules, delay is the bin axis and 0.1 is the bin width.
+
+The result will be 16 \*.h5 files, one per module, saved in the folder specified
+in the script, a copy of which can be found in the *scripts* folder in the
+toolbox source. These files can then be loaded and combined with:
+
+.. code:: python
+
+    import xarray as xr
+    data = xr.open_mfdataset(path + '/*.h5', parallel=True, join='inner')
+
+
+DSSC azimuthal integration
+##########################
+
+Azimuthal integration can be performed with pyFAI_, which can utilize the
+hexagonal pixel shape information from the DSSC geometry to split
+the intensity in a pixel into the bins covered by it; a minimal usage sketch
+is given after the list below. An example notebook
+:doc:`Azimuthal integration of DSSC with pyFAI.ipynb <Azimuthal integration of DSSC with pyFAI>` is available.
+
+A second example notebook
+:doc:`DSSC scattering time-delay.ipynb <DSSC scattering time-delay>`
+demonstrates how to:
+
+- refine the geometry such that the scattering pattern is centered before
+  azimuthal integration
+
+- perform azimuthal integration on a time delay
+  dataset with ``xr.apply_ufunc`` for multiprocessing
+
+- plot a two-dimensional map of the scattering change as a function of
+  scattering vector and time delay
+
+- integrate a certain scattering vector range and plot a time trace
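+
+The sketch below is not the full treatment used in the notebooks above: it
+approximates the DSSC pixels as rectangles, uses made-up geometry values and a
+random placeholder image, and only illustrates the basic pyFAI call.
+
+.. code:: python
+
+    import numpy as np
+    from pyFAI.azimuthalIntegrator import AzimuthalIntegrator
+
+    image = np.random.random((128, 512))  # placeholder for one DSSC module image
+
+    ai = AzimuthalIntegrator(
+        dist=0.2,            # sample-detector distance [m], made up
+        poni1=0.06,          # beam centre along the slow axis [m], made up
+        poni2=0.06,          # beam centre along the fast axis [m], made up
+        pixel1=236e-6,       # approximate DSSC pixel pitch [m]
+        pixel2=204e-6,
+        wavelength=1.55e-9,  # roughly 800 eV photons
+    )
+    q, intensity = ai.integrate1d(image, npt=200, unit="q_A^-1")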
+
+DSSC fine timing
+################
+
+When the DSSC is reused after a period of inactivity, or when the DSSC gain settings
+use a different operation frequency, the DSSC fine trigger delay needs to be
+checked. To analyze runs recorded with different fine delays, one can use
+the notebook :doc:`DSSC fine delay with SCS toolbox.ipynb <DSSC fine delay with SCS toolbox>`.
+
+DSSC quadrant geometry
+######################
+
+To check or refine the DSSC geometry or the quadrant positions, the following
+notebook can be used: :doc:`DSSC create geometry.ipynb <DSSC create geometry>`.
+
+Legacy DSSC binning procedure
+#############################
+
+Most of the functions within toolbox_scs.detectors can be accessed directly. This is useful during development, or when working in a non-standardized way, which is often necessary during data evaluation. For frequent routines there is the possibility to use DSSC objects that guarantee a consistent data structure and reduce the amount of recurring code within the notebook.
+
+* bin data using toolbox_scs.tbdet -> *to be documented*.
+* :doc:`bin data using the DSSCBinner <dssc/DSSCBinner>`.
+* post processing, data analysis -> *to be documented*
+
+.. _pyFAI: https://pyfai.readthedocs.io
diff --git a/doc/Knife edge scan and fluence calculation.ipynb b/doc/Knife edge scan and fluence calculation.ipynb
index 02a0493f00b59209ca75dca1ac4d5df191d1971c..47f17d800f39b102870db382063e71c5d96ff0db 100644
--- a/doc/Knife edge scan and fluence calculation.ipynb
+++ b/doc/Knife edge scan and fluence calculation.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# In a nutshell"
+    "# Knife-edge scan analysis and fluence calculation"
    ]
   },
   {
@@ -3112,9 +3112,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Detailed notebook\n",
+    "## Fluence calculation\n",
     "\n",
-    "## Peak fluence calculation using knife-edge scans:\n",
+    "### Peak fluence calculation using knife-edge scans:\n",
     "\n",
     "\n",
     "The peak fluence of a Gaussian beam is defined as:\n",
diff --git a/doc/SLURM.rst b/doc/SLURM.rst
new file mode 100644
index 0000000000000000000000000000000000000000..8e89efe6e1b514983843915cbf9a2fd99c5c757f
--- /dev/null
+++ b/doc/SLURM.rst
@@ -0,0 +1,25 @@
+SLURM, sbatch, partition, reservation
+-------------------------------------
+
+Scripts launched by the ``sbatch`` command can employ magic cookies with
+``#SBATCH`` to pass options to SLURM, such as which partition to run on.
+To work, the magic cookie has to be at the beginning of the line.
+This means that:
+
+* to comment out a magic cookie, adding another "#" in front of it is sufficient
+* to comment a line to explain what the option does, it is best practice
+  to put the comment on the line before
+
+Reserved partitions are of the form "upex_003333", where 3333 is the proposal
+number. To check which reserved partitions exist, with their start and end
+dates, one can ``ssh`` to ``max-display`` and use the command ``sview``.
+
+.. image:: sview.png
+
+To use a reserved partition with ``sbatch``, one can use the magic cookie
+
+``#SBATCH --reservation=upex_003333``
+
+instead of the usual
+
+``#SBATCH --partition=upex``
diff --git a/doc/bunch_pattern_decoding.rst b/doc/bunch_pattern_decoding.rst
index 5538df610ab2863b08af96d57c2a08420b5dadd5..22949f7b0ba5f8af532a362158e9be806bd6d75e 100644
--- a/doc/bunch_pattern_decoding.rst
+++ b/doc/bunch_pattern_decoding.rst
@@ -1,3 +1,6 @@
+Reading the bunch pattern
+=========================
+
 The bunch pattern table is an array of 2700 values per train (the maximum number of pulses at 4.5 MHz provided by the machine) and contains information on how the pulses are distributed among SASE 1, 2, 3, and the various lasers at European XFEL.
 
 The data stored in the bunch pattern table (mnemonic *bunchPatternTable*) can be extracted using the wrappers to the `euxfel_bunch_pattern <https://pypi.org/project/euxfel-bunch-pattern/>`_ package as follows:
diff --git a/doc/changelog.rst b/doc/changelog.rst
index a6bf81bd735535e3c7bb406c84a5f64149aa63d6..2bc7af39b11096eaa857b81353075b107c8175ed 100644
--- a/doc/changelog.rst
+++ b/doc/changelog.rst
@@ -29,6 +29,7 @@ unreleased
   - Add BAM 2955_S3 and update BAM mnemonics :mr:`315`
   - improve peak-finding algorithm for digitizer trace peak extraction :mr:`312`
   - update documentation on digitizer peak extraction :mr: `312`
+  - split documentation howtos and change theme :mr:`328`
 
 - **New Features**
 
diff --git a/doc/conf.py b/doc/conf.py
index 9b5c60ce0c03dcf28efa337b5900be4a1132af1d..b1db58576d738b10c33066b36ba531b449af28c6 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -19,7 +19,7 @@ sys.path.insert(0, os.path.abspath('..'))
 # -- Project information -----------------------------------------------------
 
 project = 'SCS Toolbox'
-copyright = '2021, SCS'
+copyright = '2025, SCS'
 author = 'SCS'
 
 # The short X.Y version
@@ -96,7 +96,7 @@ pygments_style = None
 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
 #
-html_theme = 'classic'
+html_theme = 'pydata_sphinx_theme'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further. For a list of options available for each theme, see the
diff --git a/doc/digitizers.rst b/doc/digitizers.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a8e957d59f40837b57d45379a7be14f20f95116a
--- /dev/null
+++ b/doc/digitizers.rst
@@ -0,0 +1,16 @@
+Digitizers
+==========
+
+.. toctree::
+
+   Extracting digitizer peaks <How to extract peaks from digitizer traces>
+
+
+Point detectors
+---------------
+Detectors that produce one point per pulse, or 0D detectors, are all handled in
+a similar way. Such detectors are, for instance, the X-ray Gas Monitor (XGM),
+the Transmitted Intensity Monitor (TIM), the electron Bunch Arrival
+Monitor (BAM) or the photo diodes monitoring the PP laser.
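+
+As an illustration of what "one point per pulse" means in practice, the sketch
+below builds a small mock dataset and applies the kind of per-pulse selection
+and reduction that works for any of these detectors. The variable and dimension
+names are hypothetical; real data would be loaded with the toolbox as described
+in the loading howto.
+
+.. code:: python
+
+    import numpy as np
+    import xarray as xr
+
+    # mock 0D detector data: one value per (train, pulse)
+    n_trains, n_pulses = 100, 20
+    ds = xr.Dataset(
+        {'xgm': (('trainId', 'pulseId'), np.random.rand(n_trains, n_pulses))},
+        coords={'trainId': np.arange(n_trains), 'pulseId': np.arange(n_pulses)},
+    )
+
+    # e.g. split even/odd pulses into pumped/unpumped and average over pulses
+    pumped = ds['xgm'].isel(pulseId=slice(0, None, 2)).mean('pulseId')
+    unpumped = ds['xgm'].isel(pulseId=slice(1, None, 2)).mean('pulseId')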
+
+
diff --git a/doc/howtos.rst b/doc/howtos.rst
index ec7af4a395f9abbd754742a32423268b28de2e83..09b79e0f8561f4101f4a8ff7c22975552b9c0cbd 100644
--- a/doc/howtos.rst
+++ b/doc/howtos.rst
@@ -3,237 +3,19 @@
 How to's
 ~~~~~~~~
 
-Loading run data
-----------------
+.. toctree::
+   :maxdepth: 1
+
+   Loading data <load>
+   Reading bunch pattern <bunch_pattern_decoding>
+   Knife-edge and fluence calculation <Knife edge scan and fluence calculation>
+   Extracting digitizer traces <digitizers>
+   Fine timing <transient reflectivity>
+   Processing DSSC data <DSSC>
+   BOZ analysis <BOZ>
+   PES analysis <PES_spectra_extraction>
+   HRIXS analysis <HRIXS>
+   Viking analysis <Analysis_of_Viking_spectrometer_data>
+   SLURM
 
-* :doc:`load run and data <load>`.
-* :doc:`load data in memory <Loading_data_in_memory>`.
-
-Reading the bunch pattern
--------------------------
-
-* :doc:`bunch pattern decoding <bunch_pattern_decoding>`.
-
-Extracting peaks from digitizers
---------------------------------
-* :doc:`How to extract peaks from digitizer traces <How to extract peaks from digitizer traces>`.
-
-
-Determining the FEL or OL beam size and the fluence
----------------------------------------------------
-* :doc:`Knife-edge scan and fluence calculation <Knife edge scan and fluence calculation>`.
-
-
-Finding time overlap by transient reflectivity
-----------------------------------------------
-
-Transient reflectivity of the optical laser measured on a large bandgap material pumped by the FEL is often used at SCS to find the time overlap between the two beams. The example notebook
-
-* :doc:`Transient reflectivity measurement <Transient reflectivity measurement>`
-
-shows how to analyze such data, including correcting the delay by the bunch arrival monitor (BAM).
-
-DSSC
-----
-
-DSSC data binning
-#################
-
-In scattering experiment one typically wants to bin DSSC image data versus
-time delay between pump and probe or versus photon energy. After this first
-data reduction steps, one can do azimuthal integration on a much smaller
-amount of data.
-
-The DSSC data binning procedure is based on the notebook
-:doc:`Dask DSSC module binning <Dask DSSC module binning>`. It performs
-DSSC data binning against a coordinate specified by *xaxis* which can
-be *nrj* for the photon energy, *delay* in which case the delay stage
-position will be converted in picoseconds and corrected but the BAM, or
-another slow data channel. Specific pulse pattern can be defined, such as:
-
-.. code:: python
-
-    ['pumped', 'unpumped']
-
-which will be repeated. XGM data will also be binned similarly to the DSSC
-data.
-
-Since this data reduction step can be quite time consuming for large datasets,
-it is recommended to launch the notebook via a SLURM script. The script can be
-downloaded from :download:`scripts/bin_dssc_module_job.sh` and reads as:
-
-.. literalinclude:: scripts/bin_dssc_module_job.sh
-   :language: bash
-   :linenos:
-
-It is launched with the following:
-
-.. code:: bash
-
-    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 0 -x delay -b 0.1
-    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 1 -x delay -b 0.1
-    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 2 -x delay -b 0.1
-    sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 3 -x delay -b 0.1
-
-where 2719 is the proposal number, 180 is the dark run number, 179 is the run
-nummber and 0, 1, 2 and 3 are the 4 module group, each job processing a set of
-4 DSSC module, delay is the bin axis and 0.1 is the bin width.
-
-The result will be 16 \*.h5 files, one per module, saved in the folder specified
-in the script, a copy of which can be found in the *scripts* folder in the
-toolbox source. This files can then be loaded and combined with:
-
-.. code:: python
-
-    import xarray as xr
-    data = xr.open_mfdataset(path + '/*.h5', parallel=True, join='inner')
-
-
-DSSC azimuthal integration
-##########################
-
-Azimuthal integration can be performed with pyFAI_ which can utilize the
-hexagonal pixel shape information from the DSSC geometry to split
-the intensity in a pixel in the bins covered by it. An example notebook
-:doc:`Azimuthal integration of DSSC with pyFAI.ipynb <Azimuthal integration of DSSC with pyFAI>` is available.
-
-A second example notebook
-:doc:`DSSC scattering time-delay.ipynb <DSSC scattering time-delay>`
-demonstrates how to:
-
-- refine the geometry such that the scattering pattern is centered before
-  azimuthal integration
-
-- perform azimuthal integration on a time delay
-  dataset with ``xr.apply_ufunc`` for multiprocessing.
-
-- plot a two-dimensional map of the scattering change as function of
-  scattering vector and time delay
-
-- integrate certain scattering vector range and plot a time trace
-
-DSSC fine timing
-################
-
-When DSSC is reused after a period of inactivity or when the DSSC gain setting
-use a different operation frequency the DSSC fine trigger delay needs to be
-checked. To analysis runs recorded with different fine delay, one can use
-the notebook :doc:`DSSC fine delay with SCS toolbox.ipynb <DSSC fine delay with SCS toolbox>`.
-
-DSSC quadrant geometry
-######################
-
-To check or refined the DSSC geometry or quadrants position, the following
-notebook can be used :doc:`DSSC create geometry.ipynb <DSSC create geometry>`.
-
-Legacy DSSC binning procedure
-#############################
-
-Most of the functions within toolbox_scs.detectors can be accessed directly. This is useful during development, or when working in a non-standardized way, which is often neccessary during data evaluation. For frequent routines there is the possibility to use dssc objects that guarantee consistent data structure, and reduce the amount of recurring code within the notebook.
-
-* bin data using toolbox_scs.tbdet -> *to be documented*.
-* :doc:`bin data using the DSSCBinner <dssc/DSSCBinner>`.
-* post processing, data analysis -> *to be documented*
-
-Photo-Electron Spectrometer (PES)
----------------------------------
-
-* :doc:`Basic analysis of PES spectra <PES_spectra_extraction>`.
-
-BOZ: Beam-Splitting Off-axis Zone plate analysis
-------------------------------------------------
-
-The BOZ analysis consists of 4 notebooks and a script. The first notebook
-:doc:`BOZ analysis part I.a Correction determination <BOZ analysis part I.a Correction determination>`
-is used to determine all the necessary correction, that is the flat field
-correction from the zone plate optics and the non-linearity correction from the
-DSSC gain. The inputs are a dark run and a run with X-rays on three broken or
-empty membranes. For the latter, an alternative is to use pre-edge data on an
-actual sample. The result is a JSON file that contains the flat field and
-non-linearity correction as well as the parameters used for their determination
-such that this can be reproduced and investigated in case of issues. The
-determination of the flat field correction is rather quick, few minutes and is
-the most important correction for the change in XAS computed from the -1st and
-+1st order. For quick correction of the online preview one can bypass the
-non-linearity calculation by taking the JSON file as soon as it appears.
-The determination of the non-linearity correction is a lot longer and can take
-some 2 to 8 hours depending on the number of pulses in the
-train. For this reason, the computation can also be done on GPUs in 30min
-instead. A GPU notebook adapted for CHEM experiment with liquid jet and
-normalization implement for S K-edge is available at
-:doc:`OnlineGPU BOZ analysis part I.a Correction determination S K-egde <OnlineGPU BOZ analysis part I.a Correction determination S K-egde>`.
-
-The other option is to use a script
-that can be downloaded from :download:`scripts/boz_parameters_job.sh` and
-reads as:
-
-.. literalinclude:: scripts/boz_parameters_job.sh
-   :language: bash
-   :linenos:
-
-It uses the first notebook and is launched via slurm:
-
-``sbatch ./boz_parameters_job.sh -p 2937 -d 615 -r 614 -g 3``
-
-where 2937 is the proposal run number, where 615 is the dark run number,
-614 is the run on 3 broken membranes and 3 is
-the DSSC gain in photon per bin. The proposal run number is defined inside the
-script file.
-
-The second notebook
-:doc:`BOZ analysis part I.b Correction validation <BOZ analysis part I.b Correction validation>` can be used to check how well the calculated correction still
-work on a characterization run recorded later, i.e. on 3 broken membrane or empty membranes.
-
-The third notebook
-:doc:`BOZ analysis part II.1 Small data <BOZ analysis part II.1 Small data>`
-then use the JSON correction file to load all needed corrections and
-process an run, saving the rois extracted DSSC as well as aligning them to
-photon energy and delay stage in a small data h5 file.
-
-That small data h5 file can then be loaded and the data binned to compute a
-spectrum or a time resolved XAS scan using the fourth and final notebook
-:doc:`BOZ analysis part II.2 Binning <BOZ analysis part II.2 Binning>`
-
-
-Point detectors
----------------
-Detectors that produce one point per pulse, or 0D detectors, are all handled in a similar way. Such detectors are, for instance, the X-ray Gas Monitor (XGM), the Transmitted Intensity Monitor (TIM), the electron Bunch Arrival Monitor (BAM) or the photo diodes monitoring the PP laser.
-
-HRIXS
------
-
-* :doc:`Analyzing HRIXS data <HRIXS>`
-
-Viking spectrometer
--------------------
-
-* :doc:`Analysis of Viking spectrometer data <Analysis_of_Viking_spectrometer_data>`
-
-SLURM, sbatch, partition, reservation
--------------------------------------
-
-Scripts launched by ``sbatch`` command can employ magic cookie with
-``#SBATCH`` to pass options SLURM, such as which partition to run on.
-To work, the magic cookie has to be at the beginning of the line.
-This means that:
-
-* to comment out a magic cookie, adding another "#" before it is sufficient
-* to comment a line to detail what the option does, it is best practice
-  to put the comment on the line before
-
-Reserved partition are of the form "upex_003333" where 3333 is the proposal
-number. To check what reserved partition are existing, their start and end
-date, one can ``ssh`` to ``max-display`` and use the command ``sview``.
-
-.. image:: sview.png
-
-To use a reserved partition with ``sbatch``, one can use the magic cookie
-
-``#SBATCH --reservation=upex_003333``
-
-instead of the usual
-
-``#SBATCH --partition=upex``
-
-
-.. _pyFAI: https://pyfai.readthedocs.io
diff --git a/doc/load.rst b/doc/load.rst
index 8817c5ffaacb622c979b7c205cf6e27f8d8725d7..31d7010a01a10a4625731a0e43dce2693ff9e2ba 100644
--- a/doc/load.rst
+++ b/doc/load.rst
@@ -1,3 +1,13 @@
+Loading run data
+================
+
+.. toctree::
+
+   Loading_data_in_memory
+
+Short version
+-------------
+
 Loading data in memory is performed as follows:
 
 **Option 1**:
diff --git a/doc/transient reflectivity.rst b/doc/transient reflectivity.rst
new file mode 100644
index 0000000000000000000000000000000000000000..d8bba7cce5b52bfc08d259690029a906b33e8c84
--- /dev/null
+++ b/doc/transient reflectivity.rst
@@ -0,0 +1,9 @@
+Finding time overlap by transient reflectivity
+----------------------------------------------
+
+Transient reflectivity of the optical laser measured on a large bandgap material pumped by the FEL is often used at SCS to find the time overlap between the two beams. The example notebook
+
+* :doc:`Transient reflectivity measurement <Transient reflectivity measurement>`
+
+shows how to analyze such data, including correcting the delay by the bunch arrival monitor (BAM).
+
diff --git a/setup.py b/setup.py
index dfe172d71e2a556476112d116797231de3c5022a..a382ad1820f71b29918d21094945853f4c5128c9 100644
--- a/setup.py
+++ b/setup.py
@@ -15,7 +15,7 @@ interactive_reqs = ['ipykernel', 'matplotlib', 'tqdm',]
 maxwell_reqs = ['joblib', 'papermill', 'dask[diagnostics]', 'extra_data',
                 'extra_geom', 'euxfel_bunch_pattern>=0.6', 'pyFAI',]
 
-docs_reqs = ['sphinx', 'nbsphinx']
+docs_reqs = ['sphinx', 'nbsphinx', 'sphinx-autoapi', 'pydata-sphinx-theme']
 
 setup(name='toolbox_scs',
       version=_version,