diff --git a/docs/development/testing_pipeline.md b/docs/development/testing_pipeline.md
index 174d2d7a1db486637dc318e7687f4ee22ab83aee..9fe8a8ce3bea8ffc1f658bc8d6c2553822732743 100644
--- a/docs/development/testing_pipeline.md
+++ b/docs/development/testing_pipeline.md
@@ -40,7 +40,38 @@ These tests are meant to run on all detector calibrations before any release. As
 }
 ```
 
-- Tests are triggered using CLI:
+- Tests are triggered using GitLab's manual CI pipeline trigger.
+
+After opening a merge request on GitLab, the CI runs the unit tests pipeline automatically. Once these tests finish, you get the option to trigger the automated tests manually.
+
+![The manual test trigger in the merge request pipeline](../static/tests/manual_action.png)
+
+Using this GitLab feature, you can run the tests with no configuration, which runs the automated tests over all reference runs. This is usually the right choice if the change affects all detectors and all calibration notebooks, or before deploying a new release.
+
+Otherwise, you can restrict the test to a specific `CALIBRATION` (DARK or CORRECT) and/or configure the list of detectors to run the tests for.
+
+![Passing arguments before triggering the test](../static/tests/given_argument_example.png)
+
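+A new pipeline with the same variables can also be started from the command line through GitLab's generic pipeline trigger API. The sketch below is illustrative, not part of the documented workflow: it assumes a pipeline trigger token exists for this project, `<project-id>`, `<trigger-token>`, and `<branch-name>` are placeholders, and `DETECTORS` is a hypothetical variable name (only `CALIBRATION` is documented above; check `.gitlab-ci.yml` for the names the jobs actually read). Note that this starts a fresh pipeline rather than playing the manual job of an existing one.
+
+```bash
+# Illustrative sketch: trigger the test pipeline on a branch with
+# pre-set variables (placeholders marked with <...>).
+curl --request POST \
+  --form "token=<trigger-token>" \
+  --form "ref=<branch-name>" \
+  --form "variables[CALIBRATION]=DARK" \
+  --form "variables[DETECTORS]=jungfrau" \
+  "https://git.xfel.eu/api/v4/projects/<project-id>/trigger/pipeline"
+```
+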
+!!! warning
+
+    It is not recommended to run the automated tests for more than one merge request at the same time, as these tests are memory-consuming and all of them run on the same test node.
+
+The progress report of the GitLab test pipeline jobs collects useful information about the test results and the reason for failure of any of the tested calibration runs.
+
+- Tests are triggered using the CLI locally:
 
 ```bash
 pytest tests/test_reference_runs/test_pre_deployment.py \
@@ -61,7 +92,6 @@ These tests are meant to run on all detector calibrations before any release. As
  - detector: this can be used to pick detectors to run the test for and skip the rest.
  - no-numerical-validation: as the numerical validation is done by default. This argument can be used to skip it and stop at executing the calibration and making sure that the SLURM jobs were COMPLETED.
  - validation-only: in case the test output and reference files were already available and only a validation check is needed. this argument can be used to only run the validation checks without executing any calibration jobs.
- - find-difference: by default files are compared to reference data. If more information is needed to know the difference between reference and test files, this argument can be helpful. It should be expected that more time is consumed to check the files that fails the validation, to get a report for the different datasets/attributes.
  - Below are the steps taken to fully test the calibration files:
    1. Run `xfel-calibrate DET CAL ...`, this will result in Slurm calibration jobs.
@@ -72,24 +102,3 @@ These tests are meant to run on all detector calibrations before any release. As
    2. Find the datatsets/attributes that are different in both files. In case a test fails the whole test fails and the next test starts.
-
-## Challenge for manually triggering the automated tests.
-
-Currently these tests run by only one person before each release. Unfortunately, it is not use on individual branches and MRs as it is not exposed to all contributors.
-
-Automating this test can solve the dependence on only one user to run the tests manually. Additionally, it can help in identifying bugs/errors faster, instead of delaying the tests to one duration before the release. This can as well help in not delaying the releases by some hours as the full test can take a lot of hours.
-
-It is preferred to run this automated test in a common place that is accessed by the calibration team to have more operators/monitors. This would result in a more active approach in solving any problem related to the testing pipeline or the tested detector calibrations.
-Moreover exposing the running test would help in collecting more ideas and efforts in improving the testing pipeline.
-
-## Automating triggering the testing pipeline
-
-To automate the triggering:
-
-- Decide on a max-node to run the tests.
-
-  - Currently the tests are manually triggered on a selected node and after the calibration execution is done all checks and validations are done on this selected node.
-
-  - Create a cron job to schedule the automatic triggering for the tests.
-
-  - Keep a logging format as a reporting tool for the test results.
diff --git a/docs/static/tests/given_argument_example.png b/docs/static/tests/given_argument_example.png
new file mode 100644
index 0000000000000000000000000000000000000000..cfcf7ea865ac313bfcad27e5ce5610821dd56d1f
Binary files /dev/null and b/docs/static/tests/given_argument_example.png differ
diff --git a/docs/static/tests/manual_action.png b/docs/static/tests/manual_action.png
new file mode 100644
index 0000000000000000000000000000000000000000..7c37a56f012ed5a0f267407fa1657f5605f180e8
Binary files /dev/null and b/docs/static/tests/manual_action.png differ
diff --git a/mkdocs.yml b/mkdocs.yml
index 5b08073c2798d551768bae899135f3b0f0fe69b1..697b683b87864186bd6d9c4b0eea435d8891c115 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -65,20 +65,20 @@ plugins:
   - glightbox
   - search
   - autorefs
-  - gen-files:
-      scripts:
-        - docs/gen_ref_pages.py
+  # - gen-files:
+  #     scripts:
+  #       - docs/gen_ref_pages.py
   - literate-nav:
       nav_file: SUMMARY.md
   - section-index
-  - mkdocstrings:
-      handlers:
-        python:
-          import:
-            - https://docs.python-requests.org/en/master/objects.inv
-          paths: [src]
-          docstring_style: "sphinx"
-          docstring_section_style: "list"
+  # - mkdocstrings:
+  #     handlers:
+  #       python:
+  #         import:
+  #           - https://docs.python-requests.org/en/master/objects.inv
+  #         paths: [src]
+  #         docstring_style: "sphinx"
+  #         docstring_section_style: "list"
 
 repo_url: https://git.xfel.eu/calibration/pycalibration