Add a pytest to run a dict of CALLAB test runs before releases

Merged Karim Ahmed requested to merge test/automated_notebooks_tests_pre_deployments into master
1 file  +11  −25
"""
1- RUN AGIPD xfel-calibrate (Run multiple threads/asyncio to run for different detectors/calibration-types)
2- AWAIT the slurm jobs to finish
3- DO healthy checks:
a- Check available report.
b- Check no errors in the rst files/slurm status.
c- Check number of corrected files? local constant files?
d- check expected time performance from time-summary files??
e- Copy latest out-data for comparison and validity check after ending the test???
"""
# 04.01.2022
"""[summary]
1. Read a dict of all detectors and calibration types to test.
2. Run xfel-calibrate for each detector and watch the SLURM jobs until it ends (TODO Asyncio/Background threads??).
2. Run xfel-calibrate for each detector and watch the SLURM jobs until it ends.
3. Validate that all SLURM jobs were COMPLETED.
4. Validate the presence of the report within the last 10 seconds
(TODO: Slurm folder and calibration metadata as well?)
5. Validate number of generated h5files.
6. Validate that the h5files are exactly the same.
4. Validate the presence of the PDF report.
5. Validate number of generated h5files against reference data.
6. Validate that the h5files are exactly the same against reference data.
7. TODO Validate expected time performace from the time-summary files?
8. Write/Output a report for each test result? [HTML pytest output?]
8. Write/Output a report for each test result, by running pytest using --html=<report_name>.html
"""
import hashlib
import io
import multiprocessing
import pathlib
import time
from contextlib import redirect_stdout
from subprocess import PIPE, run
from typing import Any, Dict, List

import pytest
import yaml

import xfel_calibrate.calibrate as calibrate

with open("./callab_tests.yaml", "r") as f:
    callab_test_dict = yaml.safe_load(f)
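# How this test watches the submitted SLURM jobs is elided from the hunks shown
# here. As a minimal sketch only, assuming the job IDs are known, completion can
# be awaited by polling squeue until the jobs leave the queue (the helper name
# and polling interval are assumptions, not the MR's actual implementation):
def _wait_for_slurm_jobs_sketch(job_ids: List[str], poll_interval: int = 30) -> None:
    """Block until none of the given SLURM job IDs are queued or running."""
    while True:
        res = run(["squeue", "-h", "-j", ",".join(job_ids)], stdout=PIPE, stderr=PIPE)
        # squeue -h prints one line per job still pending/running; empty
        # output means every job has left the queue (COMPLETED, FAILED, ...).
        if not res.stdout.decode().strip():
            return
        time.sleep(poll_interval)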
def validate_h5files(f, reference):
    def filemd5(filename, block_size=2**20):
@@ -56,6 +42,7 @@ def validate_h5files(f, reference):
    return filemd5(f) == filemd5(reference / f.name), f
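# The body of filemd5 lies outside the hunk above. Given its signature, it
# presumably computes the file's MD5 digest in block_size chunks; a minimal
# sketch of such a helper (an assumption, not the repository's actual code):
def _filemd5_sketch(filename, block_size=2**20):
    md5 = hashlib.md5()
    with open(filename, "rb") as fh:
        # Read in fixed-size chunks so large h5 files are never fully in memory.
        for chunk in iter(lambda: fh.read(block_size), b""):
            md5.update(chunk)
    return md5.hexdigest()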
def parse_config(cmd: List[str], config: Dict[str, Any]) -> List[str]:
    """Convert a dictionary to a list of arguments.
@@ -84,7 +71,6 @@ def parse_config(cmd: List[str], config: Dict[str, Any]) -> List[str]:
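# parse_config's body is also outside the hunks shown. Based on its docstring
# ("Convert a dictionary to a list of arguments"), a plausible sketch of the
# conversion (an assumption, not the repository's actual implementation):
def _parse_config_sketch(cmd: List[str], config: Dict[str, Any]) -> List[str]:
    for key, value in config.items():
        if isinstance(value, bool):
            if value:
                cmd.append(f"--{key}")  # booleans become bare flags
        elif isinstance(value, (list, tuple)):
            cmd.append(f"--{key}")
            cmd += [str(v) for v in value]  # one token per element
        else:
            cmd += [f"--{key}", str(value)]
    return cmd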
@pytest.mark.manual_run
@pytest.mark.parametrize("calibration_test", list(callab_test_dict.items()))
def test_xfel_calibrate(calibration_test):
    test_key, val_dict = calibration_test
@@ -148,4 +134,4 @@ def test_xfel_calibrate(calibration_test):
        if not i[0]:
            non_valid_files.append(i[1])
    assert all(all_files_validated), f"{test_key} failure, while validating {non_valid_files}"  # noqa
    print(f"{test_key}'s calibration h5files are validated successfully.")