Commit 3a73831f authored by Loïc Le Guyader

Split documentation howtos and change sphinx theme

See merge request !328
parents edf35773 64b7a948
......@@ -8,6 +8,7 @@ pages:
- apt-get install -y pandoc
- pip3 install sphinx-autoapi
- pip3 install nbsphinx
- pip3 install pydata-sphinx-theme
- sphinx-build -b html doc public
pages: True
artifacts:
......
BOZ: Beam-Splitting Off-axis Zone plate analysis
------------------------------------------------
The BOZ analysis consists of 4 notebooks and a script. The first notebook
:doc:`BOZ analysis part I.a Correction determination <BOZ analysis part I.a Correction determination>`
is used to determine all the necessary corrections, that is the flat field
correction from the zone plate optics and the non-linearity correction from the
DSSC gain. The inputs are a dark run and a run with X-rays on three broken or
empty membranes. For the latter, an alternative is to use pre-edge data on an
actual sample. The result is a JSON file that contains the flat field and
non-linearity corrections as well as the parameters used for their determination,
such that the procedure can be reproduced and investigated in case of issues. The
determination of the flat field correction is rather quick, a few minutes, and it is
the most important correction for the change in XAS computed from the -1st and
+1st orders. For a quick correction of the online preview, one can bypass the
non-linearity calculation by taking the JSON file as soon as it appears.
The determination of the non-linearity correction takes a lot longer, some
2 to 8 hours depending on the number of pulses in the
train. For this reason, the computation can also be done on GPUs, in about 30 minutes
instead. A GPU notebook adapted for CHEM experiments with a liquid jet and with
normalization implemented for the S K-edge is available at
:doc:`OnlineGPU BOZ analysis part I.a Correction determination S K-edge <OnlineGPU BOZ analysis part I.a Correction determination S K-egde>`.
The other option is to use a script
that can be downloaded from :download:`scripts/boz_parameters_job.sh` and
reads as:
.. literalinclude:: scripts/boz_parameters_job.sh
:language: bash
:linenos:
It uses the first notebook and is launched via SLURM:
``sbatch ./boz_parameters_job.sh -p 2937 -d 615 -r 614 -g 3``
where 2937 is the proposal number, 615 is the dark run number,
614 is the run on 3 broken membranes and 3 is
the DSSC gain in photons per bin. The proposal number is defined inside the
script file.
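If a SLURM reservation is available for the beamtime, the same job can be
submitted to it directly from the command line; the reservation name below is
only a placeholder:

.. code:: bash

    # submit the correction determination to a hypothetical beamtime reservation
    sbatch --reservation=upex_002937 ./boz_parameters_job.sh -p 2937 -d 615 -r 614 -g 3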
The second notebook
:doc:`BOZ analysis part I.b Correction validation <BOZ analysis part I.b Correction validation>` can be used to check how well the calculated corrections still
work on a characterization run recorded later, i.e. on 3 broken or empty membranes.
The third notebook
:doc:`BOZ analysis part II.1 Small data <BOZ analysis part II.1 Small data>`
then uses the JSON correction file to load all needed corrections and
process a run, saving the ROI-extracted DSSC data as well as aligning them to
photon energy and delay stage position in a small data h5 file.
That small data h5 file can then be loaded and the data binned to compute a
spectrum or a time-resolved XAS scan using the fourth and final notebook
:doc:`BOZ analysis part II.2 Binning <BOZ analysis part II.2 Binning>`.
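As a minimal sketch of what can be done with that small data file outside the
binning notebook (the file path and the variable and coordinate names are
placeholders, not the actual output of part II.1):

.. code:: python

    import xarray as xr

    # open the small data file written by the part II.1 notebook
    ds = xr.open_dataset('path/to/proposal_runXXX_small_data.h5')
    print(ds)

    # e.g. bin a signal of interest against photon energy in 50 bins
    binned = ds.groupby_bins('nrj', bins=50).mean()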
DSSC
----
DSSC data binning
#################
In scattering experiments one typically wants to bin DSSC image data versus
time delay between pump and probe or versus photon energy. After this first
data reduction step, one can do azimuthal integration on a much smaller
amount of data.
The DSSC data binning procedure is based on the notebook
:doc:`Dask DSSC module binning <Dask DSSC module binning>`. It performs
DSSC data binning against a coordinate specified by *xaxis*, which can
be *nrj* for the photon energy, *delay*, in which case the delay stage
position is converted to picoseconds and corrected by the BAM, or
another slow data channel. A specific pulse pattern can be defined, such as:
.. code:: python
['pumped', 'unpumped']
which will be repeated. XGM data will also be binned similarly to the DSSC
data.
Since this data reduction step can be quite time consuming for large datasets,
it is recommended to launch the notebook via a SLURM script. The script can be
downloaded from :download:`scripts/bin_dssc_module_job.sh` and reads as:
.. literalinclude:: scripts/bin_dssc_module_job.sh
:language: bash
:linenos:
It is launched with the following:
.. code:: bash
sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 0 -x delay -b 0.1
sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 1 -x delay -b 0.1
sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 2 -x delay -b 0.1
sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m 3 -x delay -b 0.1
where 2719 is the proposal number, 180 is the dark run number, 179 is the run
number and 0, 1, 2 and 3 are the 4 module groups, each job processing a set of
4 DSSC modules; delay is the bin axis and 0.1 is the bin width.
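Equivalently, the four jobs can be submitted in a single loop; this is just a
convenience sketch using the same parameters as above:

.. code:: bash

    # one job per group of 4 DSSC modules
    for m in 0 1 2 3; do
        sbatch ./bin_dssc_module_job.sh -p 2719 -d 180 -r 179 -m $m -x delay -b 0.1
    done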
The result will be 16 \*.h5 files, one per module, saved in the folder specified
in the script, a copy of which can be found in the *scripts* folder in the
toolbox source. These files can then be loaded and combined with:
.. code:: python
import xarray as xr
data = xr.open_mfdataset(path + '/*.h5', parallel=True, join='inner')
DSSC azimuthal integration
##########################
Azimuthal integration can be performed with pyFAI_, which can utilize the
hexagonal pixel shape information from the DSSC geometry to split
the intensity of a pixel among the bins it covers. An example notebook
:doc:`Azimuthal integration of DSSC with pyFAI.ipynb <Azimuthal integration of DSSC with pyFAI>` is available.
A second example notebook
:doc:`DSSC scattering time-delay.ipynb <DSSC scattering time-delay>`
demonstrates how to:
- refine the geometry such that the scattering pattern is centered before
azimuthal integration
- perform azimuthal integration on a time-delay
dataset with ``xr.apply_ufunc`` for multiprocessing (see the sketch after this list)
- plot a two-dimensional map of the scattering change as a function of
scattering vector and time delay
- integrate a certain scattering vector range and plot a time trace
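A minimal sketch of the ``xr.apply_ufunc`` pattern mentioned above; the
integrator settings, dimension names and dummy data are placeholders and not
the notebook's actual code:

.. code:: python

    import numpy as np
    import xarray as xr
    import pyFAI

    # hypothetical integrator; in the notebook the geometry comes from the
    # refined DSSC quadrant positions
    ai = pyFAI.AzimuthalIntegrator(dist=0.25, pixel1=236e-6, pixel2=236e-6,
                                   wavelength=1.55e-9)

    def integrate(image, npt=128):
        # azimuthally integrate a single 2D image, returning I(q)
        q, I = ai.integrate1d(image, npt, unit='q_nm^-1')
        return I

    # dummy time-delay stack standing in for the binned DSSC data
    data = xr.DataArray(np.random.rand(5, 64, 64), dims=('delay', 'y', 'x'))

    # apply the integration over the image dimensions for every delay point
    I_q = xr.apply_ufunc(integrate, data,
                         input_core_dims=[['y', 'x']],
                         output_core_dims=[['q']],
                         vectorize=True)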
DSSC fine timing
################
When the DSSC is reused after a period of inactivity or when the DSSC gain setting
uses a different operation frequency, the DSSC fine trigger delay needs to be
checked. To analyze runs recorded with different fine delays, one can use
the notebook :doc:`DSSC fine delay with SCS toolbox.ipynb <DSSC fine delay with SCS toolbox>`.
DSSC quadrant geometry
######################
To check or refine the DSSC geometry or the quadrant positions, the following
notebook can be used: :doc:`DSSC create geometry.ipynb <DSSC create geometry>`.
Legacy DSSC binning procedure
#############################
Most of the functions within toolbox_scs.detectors can be accessed directly. This is useful during development, or when working in a non-standardized way, which is often necessary during data evaluation. For frequent routines there is the possibility to use DSSC objects that guarantee a consistent data structure and reduce the amount of recurring code within the notebook.
* bin data using toolbox_scs.tbdet -> *to be documented*.
* :doc:`bin data using the DSSCBinner <dssc/DSSCBinner>`.
* post processing, data analysis -> *to be documented*
.. _pyFAI: https://pyfai.readthedocs.io
%% Cell type:markdown id: tags:
 
# In a nutshell
# Knife-edge scan analysis and fluence calculation
 
%% Cell type:code id: tags:
 
``` python
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.constrained_layout.use'] = True
%matplotlib notebook
import toolbox_scs as tb
```
 
%% Output
 
Cupy is not installed in this environment, no access to the GPU
 
%% Cell type:markdown id: tags:
 
### plot and fit a knife-edge run
 
%% Cell type:code id: tags:
 
``` python
proposal = 4476
runNB = 130
fields = ['XRD_STZ', 'FastADC2_5raw']
run, ds = tb.load(proposal, runNB, fields, fadc2_bp='scs_ppl')
 
w0_x = tb.knife_edge(ds, axisKey='XRD_STZ',
                     signalKey='FastADC2_5peaks', plot=True)[0]
```
 
%% Output
 
 
 
fitting function: a*erfc(np.sqrt(2)*(x-x0)/w0) + b
w0 = (653.5 +/- 0.4) um
x0 = (-7.327 +/- 0.000) mm
a = 1.135880e+05 +/- 1.950313e+01
b = -3.857367e+02 +/- 2.580742e+01
 
%% Cell type:markdown id: tags:
 
### OL power measurement and fluence calibration
 
%% Cell type:code id: tags:
 
``` python
# the following two arrays come from the laser power measurement
rel_powers = np.array([100, 75, 50, 25, 15, 12, 10, 8, 6, 4, 2, 1, 0.75, 0.5, 0.25, 0.1])
power_mW = np.array([505, 384, 258, 130, 81, 67, 56.5, 45.8, 35.6, 24.1, 14.1, 9.3, 8.0, 6.5, 4.8, 4.1]) #in mW
npulses = 336
# Here we assume that w0y = w0x
F, F_fit, E, E_fit = tb.fluenceCalibration(rel_powers, power_mW, npulses,
                                           w0x=w0_x*1e-3, fit_order=1)
```
 
%% Output
 
 
 
%% Cell type:markdown id: tags:
 
### Beam size versus scanning parameter (focus lens position, KB bending, ...)
 
%% Cell type:code id: tags:
 
``` python
proposal = 900291
runList = range(9, 16)
fields = ['chem_Y', 'HFM_BENDING', 'VFM_BENDING', 'FastADC9raw']
data = {}
for runNB in runList:
    run, ds = tb.load(proposal, runNB, fields, fadc_bp='sase3')
    data[runNB] = ds
```
 
%% Output
 
chem_Y: only 70.3% of trains (4650 out of 6618) contain data.
HFM_BENDING: only 52.9% of trains (3499 out of 6618) contain data.
VFM_BENDING: only 52.9% of trains (3499 out of 6618) contain data.
FastADC9raw: only 12.9% of trains (852 out of 6618) contain data.
chem_Y: only 76.6% of trains (4748 out of 6201) contain data.
HFM_BENDING: only 52.8% of trains (3273 out of 6201) contain data.
VFM_BENDING: only 52.8% of trains (3273 out of 6201) contain data.
FastADC9raw: only 16.3% of trains (1011 out of 6201) contain data.
chem_Y: only 91.7% of trains (5284 out of 5762) contain data.
HFM_BENDING: only 52.9% of trains (3046 out of 5762) contain data.
VFM_BENDING: only 52.9% of trains (3046 out of 5762) contain data.
FastADC9raw: only 25.3% of trains (1458 out of 5762) contain data.
HFM_BENDING: only 53.1% of trains (3073 out of 5791) contain data.
VFM_BENDING: only 53.1% of trains (3073 out of 5791) contain data.
FastADC9raw: only 29.3% of trains (1698 out of 5791) contain data.
chem_Y: only 85.0% of trains (5136 out of 6040) contain data.
HFM_BENDING: only 53.0% of trains (3204 out of 6040) contain data.
VFM_BENDING: only 53.0% of trains (3204 out of 6040) contain data.
FastADC9raw: only 21.5% of trains (1299 out of 6040) contain data.
chem_Y: only 80.8% of trains (4850 out of 6005) contain data.
HFM_BENDING: only 54.3% of trains (3258 out of 6005) contain data.
VFM_BENDING: only 54.3% of trains (3258 out of 6005) contain data.
FastADC9raw: only 19.9% of trains (1196 out of 6005) contain data.
chem_Y: only 38.4% of trains (3200 out of 8333) contain data.
HFM_BENDING: only 73.6% of trains (6136 out of 8333) contain data.
VFM_BENDING: only 73.6% of trains (6136 out of 8333) contain data.
FastADC9raw: only 6.7% of trains (559 out of 8333) contain data.
 
%% Cell type:code id: tags:
 
``` python
w0_vs_bender = []
for runNB, ds in data.items():
    w0 = tb.knife_edge(ds, signalKey='FastADC9peaks', axisKey='chem_Y')[0]
    w0_vs_bender.append([ds['VFM_BENDING'].mean().values, w0])
w0_vs_bender = np.array(w0_vs_bender)
```
 
%% Cell type:code id: tags:
 
``` python
plt.figure()
plt.plot(w0_vs_bender[:,0], w0_vs_bender[:, 1]*1e3, 'o')
plt.ylabel('FEL radius @ 1/e^2 [$\mu$m]')
plt.xlabel('VFM BENDING')
```
 
%% Output
 
 
 
Text(0.5, 0, 'VFM BENDING')
 
%% Cell type:markdown id: tags:
 
# Detailed notebook
## Fluence calculation
 
### Peak fluence calculation using knife-edge scans
 
 
The peak fluence of a Gaussian beam is defined as:
 
$F_0 = \frac{2 E_P}{\pi w_{0,x} w_{0,y}}$
with $E_P$ the pulse energy, $w_{0,x}$ and $w_{0,y}$ the beam radii at $1/e^2$ in $x$ and $y$ axes, respectively.
 
 
### Demonstration
For a Gaussian beam at the waist position, the intensity distribution is defined as:
 
$I(x,y) = I_0 e^{\left(-\frac{2x^2}{w_{0,x}^2} -\frac{2y^2}{w_{0,y}^2}\right)}$
 
with $I_0$ the peak intensity of the Gaussian beam, $w_{0,x}$ and $w_{0,y}$ the beam radii at $1/e^2$ in $x$ and $y$ axes, respectively.
 
Let's consider a knife-edge "corner" at position $(x,y)$ that blocks the beam for $x^\prime < x$ and $y^\prime < y$. The detected power behind the knife-edge is given by:
 
$P(x,y) = I_0 \int_{x}^\infty \int_{y}^\infty e^{\left(-\frac{2x^{\prime 2}}{w_{0,x}^2} -\frac{2y^{\prime 2}}{w_{0,y}^2}\right)} dx^\prime dy^\prime $
 
Now let's consider a purely vertical knife-edge. The power in the $y$ axis is integrated from $-\infty$ to $\infty$, thus:
 
$P(x) = I_0 \sqrt\frac{\pi}{2}w_{0,y}\int_x^\infty e^{\left(-\frac{2x^{\prime 2}}{w_{0,x}^2}\right)}dx^\prime$.
 
Similarly, for a purely horizontal knife-edge:
 
$P(y) = I_0 \sqrt\frac{\pi}{2}w_{0,x}\int_y^\infty e^{\left(-\frac{2y^{\prime 2}}{w_{0,y}^2}\right)}dy^\prime$
 
By fitting the knife edge scans to these functions, we extract the waist in $x$ and $y$ axes.
 
The total power of one pulse is then:

$P_{TOT} = \frac{I_0 \pi w_{0,x} w_{0,y}}{2}$

and the pulse energy is $E_P = P_{TOT}\,\tau$, with $\tau$ the pulse duration of the laser.

Hence, the peak fluence, defined as $F_0 = I_0\tau$, is given as:

$F_0 = \frac{2 E_P}{\pi w_{0,x} w_{0,y}}$
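As a self-contained sketch of the two steps above (this is not the toolbox code; the data are synthetic and all numbers are placeholders), one can fit an erfc profile to a knife-edge scan and then convert a pulse energy into a peak fluence:

``` python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def knife_edge_model(x, a, x0, w0, b):
    # integrated Gaussian profile behind a knife edge
    return a * erfc(np.sqrt(2) * (x - x0) / w0) + b

# synthetic scan: beam radius 0.15 mm at 1/e^2, edge at x0 = 1.0 mm
x = np.linspace(0, 2, 200)                   # knife-edge position [mm]
signal = knife_edge_model(x, 1.0, 1.0, 0.15, 0.05)
signal += np.random.normal(0, 0.01, x.size)  # add some noise

popt, _ = curve_fit(knife_edge_model, x, signal, p0=[1, 1, 0.1, 0])
w0_x = abs(popt[2]) * 1e-3                   # fitted beam radius [m]
w0_y = w0_x                                  # assume a round beam here

# peak fluence F0 = 2 E_P / (pi w0x w0y), e.g. for a 20 uJ pulse
E_P = 20e-6                                  # pulse energy [J]
F0 = 2 * E_P / (np.pi * w0_x * w0_y)         # [J/m^2]
print(f'F0 = {F0 * 1e-1:.1f} mJ/cm^2')       # 1 J/m^2 = 0.1 mJ/cm^2
```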
 
%% Cell type:markdown id: tags:
 
## Optical laser beam profile and fluence
 
%% Cell type:markdown id: tags:
 
### Knife-edge measurement (OL)
 
%% Cell type:markdown id: tags:
 
#### Load scanning motor and laser photodiode data
 
%% Cell type:markdown id: tags:
 
The laser used in these knife-edge scans was the SCS backup laser, for which there was no bunch pattern table, so whenever a function needs the bunch pattern as an input argument, we set it to **'None'**. If the PP laser is used, the corresponding bunch pattern key is **'scs_ppl'**.
 
%% Cell type:code id: tags:
 
``` python
proposal = 900094
runNB = 384
fields = ["FastADC4raw", "scannerX"]
runX, dsX = tb.load(proposal, runNB, fields, laser_bp='None')
 
fields = ["FastADC4raw", "scannerY"]
runNB = 385
runY, dsY = tb.load(proposal, runNB, fields, laser_bp='None')
```
 
%% Output
 
FastADC4raw: only 94.1% of trains (768 out of 816) contain data.
 
%% Cell type:markdown id: tags:
 
#### Check diode traces and region of integration for peak calculations
 
%% Cell type:markdown id: tags:
 
Using **'show_all=True'** displays the entire trace and all integration regions. We can then zoom and pan in the interactive figure.
 
%% Cell type:code id: tags:
 
``` python
paramsX = tb.check_peak_params(runX, 'FastADC4raw', show_all=True, bunchPattern='None')
print(paramsX)
```
 
%% Output
 
Auto-find peak params: 118 pulses, 324 samples between two pulses
 
 
 
{'pulseStart': 5393, 'pulseStop': 5424, 'baseStart': 5371, 'baseStop': 5387, 'period': 324, 'npulses': 118}
 
%% Cell type:markdown id: tags:
 
Using the default **'show_all=False'** parameter only shows the first and last pulses of the trace.
 
%% Cell type:code id: tags:
 
``` python
tb.check_peak_params(runX, 'FastADC4raw', bunchPattern='None')
```
 
%% Output
 
Auto-find peak params: 118 pulses, 324 samples between two pulses
 
 
 
{'pulseStart': 5393,
'pulseStop': 5424,
'baseStart': 5371,
'baseStop': 5387,
'period': 324,
'npulses': 118}
 
%% Cell type:markdown id: tags:
 
#### Plot intensity vs. knife-edge positions, fit with erfc function
 
%% Cell type:code id: tags:
 
``` python
w0_x = tb.knife_edge(dsX, axisKey='scannerX', plot=True)[0]*1e-3
w0_y = tb.knife_edge(dsY, axisKey='scannerY', plot=True)[0]*1e-3
 
print('X-FWHM [um]:', w0_x * np.sqrt(2*np.log(2))*1e6)
print('Y-FWHM [um]:', w0_y * np.sqrt(2*np.log(2))*1e6)
```
 
%% Output
 
 
 
fitting function: a*erfc(np.sqrt(2)*(x-x0)/w0) + b
w0 = (131.9 +/- 0.1) um
x0 = (18.757 +/- 0.000) mm
a = 1.752229e+05 +/- 2.642387e+01
b = 5.670915e+03 +/- 3.665072e+01
 
 
 
fitting function: a*erfc(np.sqrt(2)*(x-x0)/w0) + b
w0 = (297.3 +/- 0.5) um
x0 = (12.093 +/- 0.000) mm
a = -1.737334e+05 +/- 1.014418e+02
b = 3.424424e+05 +/- 1.518455e+02
X-FWHM [um]: 155.33422804507632
Y-FWHM [um]: 350.08711535421213
 
%% Cell type:markdown id: tags:
 
### Fluence calculation (OL)
 
%% Cell type:markdown id: tags:
 
#### OL power measurement
 
%% Cell type:code id: tags:
 
``` python
# measurement performed in the experimental hutch, before the in-coupling window
rel_powers = np.array([100, 75, 50, 25, 15, 12, 10, 8, 6, 4, 2, 1, 0.75, 0.5, 0.25, 0.1])
power_mW = np.array([505, 384, 258, 130, 81, 67, 56.5, 45.8, 35.6, 24.1, 14.1, 9.3, 8.0, 6.5, 4.8, 4.1]) #in mW
npulses = 336
```
 
%% Cell type:markdown id: tags:
 
#### Fluence vs. laser power
 
Note that it is possible to change the polynomial order of the fit (for example to 2), which is useful if the output is SHG (400 nm) or THG (266 nm) but the power control is changing the 800 nm pump before conversion.
 
%% Cell type:code id: tags:
 
``` python
F, F_fit, E, E_fit = tb.fluenceCalibration(rel_powers, power_mW, npulses,
                                           w0x=w0_x, w0y=w0_y, fit_order=1)
```
 
%% Output
 
 
 
%% Cell type:markdown id: tags:
 
## X-ray beam profile and fluence
 
%% Cell type:markdown id: tags:
 
### Knife-edge measurement
 
%% Cell type:markdown id: tags:
 
#### Load runs with horizontal and vertical scans, normalize the transmitted signal by XGM
 
%% Cell type:code id: tags:
 
``` python
proposal = 900094
fields = ["MCP2apd", "scannerX", "SCS_SA3"]
runNB = 687
runX, dsX = tb.load(proposal, runNB, fields)
#normalization by XGM fast signal
dsX['Tr'] = -dsX['MCP2peaks'] / dsX['SCS_SA3']
 
fields = ["MCP2apd", "scannerY", "SCS_SA3"]
runNB = 688
runY, dsY = tb.load(proposal, runNB, fields)
#normalization by XGM fast signal
dsY['Tr'] = -dsY['MCP2peaks'] / dsY['SCS_SA3']
 
dsY
```
 
%% Output
 
<xarray.Dataset>
Dimensions: (trainId: 808, sa3_pId: 25, pulse_slot: 2700)
Coordinates:
* trainId (trainId) uint64 566591561 566591562 ... 566592369
* sa3_pId (sa3_pId) int64 322 338 354 370 386 ... 658 674 690 706
Dimensions without coordinates: pulse_slot
Data variables:
scannerY (trainId) float32 14.13 14.13 14.13 ... 12.41 12.41 12.6
MCP2peaks (trainId, sa3_pId) float64 -3.976e+03 ... -13.49
bunchPatternTable (trainId, pulse_slot) uint32 2146089 0 2097193 ... 0 0 0
SCS_SA3 (trainId, sa3_pId) float32 2.784e+03 ... 4.811e+03
Tr (trainId, sa3_pId) float64 1.428 1.226 ... 0.002804
Attributes:
runFolder: /gpfs/exfel/exp/SCS/201931/p900094/raw/r0688
 
%% Cell type:markdown id: tags:
 
#### Fit scan to erfc function and plot
 
%% Cell type:code id: tags:
 
``` python
w0_x = tb.knife_edge(dsX, axisKey='scannerX', signalKey='Tr', plot=True)[0]*1e-3
w0_y = tb.knife_edge(dsY, axisKey='scannerY', signalKey='Tr', plot=True)[0]*1e-3
print(w0_x, w0_y)
 
print('X-FWHM [um]:', w0_x * np.sqrt(2*np.log(2))*1e6)
print('Y-FWHM [um]:', w0_y * np.sqrt(2*np.log(2))*1e6)
```
 
%% Output
 
 
 
fitting function: a*erfc(np.sqrt(2)*(x-x0)/w0) + b
w0 = (177.9 +/- 0.6) um
x0 = (17.953 +/- 0.000) mm
a = 6.956484e-01 +/- 5.093209e-04
b = 3.562529e-03 +/- 7.279907e-04
 
 
 
fitting function: a*erfc(np.sqrt(2)*(x-x0)/w0) + b
w0 = (114.6 +/- 0.5) um
x0 = (13.328 +/- 0.000) mm
a = -6.919485e-01 +/- 4.776717e-04
b = 1.385036e+00 +/- 7.320520e-04
0.00017787613683544284 0.00011464177548822842
X-FWHM [um]: 209.4331462763844
Y-FWHM [um]: 134.98037545880902
 
%% Cell type:markdown id: tags:
 
### Fluence calculation
 
%% Cell type:markdown id: tags:
 
#### Load XGM data
 
%% Cell type:code id: tags:
 
``` python
proposal = 900094
runNB = 647
run, ds = tb.load(proposal, runNB, 'SCS_SA3')
ds
```
 
%% Output
 
<xarray.Dataset>
Dimensions: (trainId: 3095, pulse_slot: 2700, sa3_pId: 25)
Coordinates:
* trainId (trainId) uint64 566392087 566392088 ... 566395181
* sa3_pId (sa3_pId) int64 322 338 354 370 386 ... 658 674 690 706
Dimensions without coordinates: pulse_slot
Data variables:
bunchPatternTable (trainId, pulse_slot) uint32 2113321 0 2097193 ... 0 0 0
SCS_SA3 (trainId, sa3_pId) float32 1.165e+04 ... 7.357e+03
Attributes:
runFolder: /gpfs/exfel/exp/SCS/201931/p900094/raw/r0647
 
%% Cell type:markdown id: tags:
 
#### Calibrate XGM fast data using photon flux and plot
 
%% Cell type:code id: tags:
 
``` python
f_scs = tb.calibrate_xgm(run, ds, plot=True)
f_xtd10 = tb.calibrate_xgm(run, ds, xgm='SCS')
 
pulse_energy = ds['SCS_SA3'].mean().values*f_scs*1e-6 #average energy in J
ds['pulse_energy'] = ds['SCS_SA3']*f_scs*1e-6
ds['fluence'] = 2*ds['pulse_energy']/(np.pi*w0_x*w0_y)
print('Pulse energy [J]:',pulse_energy)
F0 = 2*pulse_energy/(np.pi*w0_x*w0_y)
print('Fluence [mJ/cm^2]:', F0*1e-1)
 
plt.figure()
plt.hist(ds['pulse_energy'].values.flatten()*1e6,
         bins=50, rwidth=0.7, color='orange', label='Run 645')
plt.axvline(pulse_energy*1e6, color='r', lw=2, ls='--', label='average')
plt.xlabel('Pulse energy [$\mu$J]', size=14)
plt.ylabel('Number of pulses', size=14)
plt.legend()
ax = plt.gca()
plt.twiny()
plt.xlim(np.array(ax.get_xlim())*2e-6/(np.pi*w0_x*w0_y)*1e-1)
plt.xlabel('Fluence [mJ/cm$^2$]', labelpad=10, size=14)
plt.tight_layout()
```
 
%% Output
 
 
 
Pulse energy [J]: 1.8343257904052732e-05
Fluence [mJ/cm^2]: 57.26588845276785
 
 
 
%% Cell type:code id: tags:
 
``` python
```
SLURM, sbatch, partition, reservation
-------------------------------------
Scripts launched by the ``sbatch`` command can employ magic cookies with
``#SBATCH`` to pass options to SLURM, such as which partition to run on.
To work, the magic cookie has to be at the beginning of the line.
This means that:
* to comment out a magic cookie, adding another "#" in front of it is sufficient
* to explain what an option does, it is best practice
to put the comment on the line before
Reserved partitions are of the form "upex_003333", where 3333 is the proposal
number. To check which reserved partitions exist, and their start and end
dates, one can ``ssh`` to ``max-display`` and use the command ``sview``.
.. image:: sview.png
To use a reserved partition with ``sbatch``, one can use the magic cookie
``#SBATCH --reservation=upex_003333``
instead of the usual
``#SBATCH --partition=upex``
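As a sketch, the header of a job script using such a reservation could look
like this (the reservation name, job name and resources are placeholders):

.. code:: bash

    #!/bin/bash
    # name of the job as it appears in the queue
    #SBATCH --job-name=boz-correction
    # run on the beamtime reservation rather than the general partition
    #SBATCH --reservation=upex_003333
    ##SBATCH --partition=upex    # commented out by the extra leading "#"
    #SBATCH --time=02:00:00

    echo "running on $(hostname)"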
Reading the bunch pattern
=========================
The bunch pattern table is an array of 2700 values per train (the maximum number of pulses at 4.5 MHz provided by the machine) and contains information on how the pulses are distributed among SASE 1, 2, 3, and the various lasers at European XFEL.
The data stored in the bunch pattern table (mnemonic *bunchPatternTable*) can be extracted using the wrappers to the `euxfel_bunch_pattern <https://pypi.org/project/euxfel-bunch-pattern/>`_ package as follows:
......
......@@ -29,6 +29,7 @@ unreleased
- Add BAM 2955_S3 and update BAM mnemonics :mr:`315`
- improve peak-finding algorithm for digitizer trace peak extraction :mr:`312`
- update documentation on digitizer peak extraction :mr:`312`
- split documentation howtos and change theme :mr:`328`
- **New Features**
......
......@@ -19,7 +19,7 @@ sys.path.insert(0, os.path.abspath('..'))
# -- Project information -----------------------------------------------------
project = 'SCS Toolbox'
copyright = '2021, SCS'
copyright = '2025, SCS'
author = 'SCS'
# The short X.Y version
......@@ -96,7 +96,7 @@ pygments_style = None
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'classic'
html_theme = 'pydata_sphinx_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
......
Digitizers
==========
.. toctree::
Extracting digitizer peaks <How to extract peaks from digitizer traces>
Point detectors
---------------
Detectors that produce one point per pulse, or 0D detectors, are all handled in
a similar way. Such detectors are, for instance, the X-ray Gas Monitor (XGM),
the Transmitted Intensity Monitor (TIM), the electron Bunch Arrival
Monitor (BAM) or the photo diodes monitoring the PP laser.
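All of these can be loaded via ``toolbox_scs.load`` with the corresponding
mnemonic; a minimal sketch, with placeholder proposal and run numbers:

.. code:: python

    import toolbox_scs as tb

    proposal = 1234
    runNB = 56
    # the XGM mnemonic 'SCS_SA3' yields one value per SASE 3 pulse
    run, ds = tb.load(proposal, runNB, ['SCS_SA3'])
    print(ds['SCS_SA3'])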
......@@ -3,237 +3,19 @@
How to's
~~~~~~~~
Loading run data
----------------
.. toctree::
:maxdepth: 1
Loading data <load>
Reading bunch pattern <bunch_pattern_decoding>
Knife-edge and fluence calculation <Knife edge scan and fluence calculation>
Extracting digitizers traces <digitizers>
Fine timing <transient reflectivity>
Processing DSSC data <DSSC>
BOZ analysis <BOZ>
PES analysis <PES_spectra_extraction>
HRIXS analysis <HRIXS>
Viking analysis <Analysis_of_Viking_spectrometer_data>
SLURM
* :doc:`load run and data <load>`.
* :doc:`load data in memory <Loading_data_in_memory>`.
Reading the bunch pattern
-------------------------
* :doc:`bunch pattern decoding <bunch_pattern_decoding>`.
Extracting peaks from digitizers
--------------------------------
* :doc:`How to extract peaks from digitizer traces <How to extract peaks from digitizer traces>`.
Determining the FEL or OL beam size and the fluence
---------------------------------------------------
* :doc:`Knife-edge scan and fluence calculation <Knife edge scan and fluence calculation>`.
Finding time overlap by transient reflectivity
----------------------------------------------
Transient reflectivity of the optical laser measured on a large bandgap material pumped by the FEL is often used at SCS to find the time overlap between the two beams. The example notebook
* :doc:`Transient reflectivity measurement <Transient reflectivity measurement>`
shows how to analyze such data, including correcting the delay by the bunch arrival monitor (BAM).
Photo-Electron Spectrometer (PES)
---------------------------------
* :doc:`Basic analysis of PES spectra <PES_spectra_extraction>`.
HRIXS
-----
* :doc:`Analyzing HRIXS data <HRIXS>`
Viking spectrometer
-------------------
* :doc:`Analysis of Viking spectrometer data <Analysis_of_Viking_spectrometer_data>`
Loading run data
================
.. toctree::
Loading_data_in_memory
Short version
-------------
Loading data in memory is performed as follows:
**Option 1**:
......
......@@ -15,7 +15,7 @@ interactive_reqs = ['ipykernel', 'matplotlib', 'tqdm',]
maxwell_reqs = ['joblib', 'papermill', 'dask[diagnostics]',
'extra_data', 'extra_geom', 'euxfel_bunch_pattern>=0.6',
'pyFAI',]
docs_reqs = ['sphinx', 'nbsphinx']
docs_reqs = ['sphinx', 'nbsphinx', 'sphinx-autoapi', 'pydata-sphinx-theme']
setup(name='toolbox_scs',
version=_version,
......