Commit ca89c24c authored by David Hammer

Refactor: improve abstractions between device and kernels

- Move shared memory buffer from kernel runner / buffer device to the correction device itself
  - Wrap a ring buffer around the shmem segment for convenience (see the sketch after this list)
  - Reuse the shared memory segment; only the shape of the ndarray view of it changes
- Rename the kernel runner (was PyCudaPipeline) to DsscGpuRunner
- Move all GPU interaction to DsscGpuRunner
  - Explicit load methods (callers should control it in a more FSM-like fashion)
- Pulse filter removed temporarily
  - Part of the "splitter" section, which gets in the way of the next refactoring step
  - Will be added back after switching to a correct-first, reshape-second order of operations
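A minimal sketch of the shmem ring buffer idea described above, not the actual implementation: one shared memory segment stays allocated, a ring buffer hands out numpy-view slots into it, and changing the data shape only replaces the ndarray view. Class and method names (ShmemRingBuffer, reshape, next_slot) are hypothetical.

```python
import numpy as np
from multiprocessing import shared_memory


class ShmemRingBuffer:
    """Ring buffer of numpy views over a single shared memory segment."""

    def __init__(self, n_slots, slot_shape, dtype=np.float32):
        self.n_slots = n_slots
        self.dtype = np.dtype(dtype)
        # Allocate once; later reshapes must fit within this segment.
        self._shmem = shared_memory.SharedMemory(
            create=True,
            size=n_slots * int(np.prod(slot_shape)) * self.dtype.itemsize,
        )
        self._cursor = 0
        self.reshape(slot_shape)

    def reshape(self, slot_shape):
        # Reuse the same segment; only the ndarray view of it changes.
        needed = self.n_slots * int(np.prod(slot_shape)) * self.dtype.itemsize
        assert needed <= self._shmem.size, "new shape exceeds the segment"
        self._view = np.ndarray(
            (self.n_slots, *slot_shape), dtype=self.dtype, buffer=self._shmem.buf
        )

    def next_slot(self):
        # Hand out slots in order, wrapping around (ring semantics).
        slot = self._view[self._cursor]
        self._cursor = (self._cursor + 1) % self.n_slots
        return slot
```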
parent 5bc4774c
2 merge requests: !12 Snapshot: field test deployed version as of end of run 202201; !3 Base correction device, CalCat interaction, DSSC and AGIPD devices
  • David Hammer (Author, Owner)

    Data flow within DsscGpuRunner: [diagram: dssc-gpu-runner-data] Blue outlines mean data on the host (numpy ndarrays); the red arrow indicates the computation triggered there. In drawing this diagram, I realized there was actually a redundant only_cast, so the diagram is more applicable to the latest commit.

    Edited by David Hammer
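    To make the diagram's convention concrete, here is a minimal PyCUDA round trip matching that flow (the runner wraps PyCUDA, formerly as PyCudaPipeline). This is an illustrative sketch, not the runner's actual code: the plain cast stands in for the correction kernels (cf. the only_cast step), and the shapes are arbitrary.

    ```python
    import numpy as np
    import pycuda.autoinit  # noqa: F401  (sets up a CUDA context)
    import pycuda.gpuarray as gpuarray

    # Host side (blue outline): raw detector data as a numpy ndarray.
    raw = np.random.randint(0, 2**9, size=(400, 128, 512)).astype(np.uint16)

    # Explicit load step: host -> GPU transfer.
    raw_gpu = gpuarray.to_gpu(raw)

    # Computation triggered on the GPU (red arrow); a plain cast stands in
    # for the correction kernels here.
    corrected_gpu = raw_gpu.astype(np.float32)

    # Back to host (blue outline) as a numpy ndarray.
    corrected = corrected_gpu.get()
    ```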
  • David Hammer (Author, Owner)

    WIP FSM-like diagram illustrating DsscGpuRunner: [diagram: state-diagram-kernel-runner] There's no checking of the workflow within the class itself, so the user (that would be the correction device) should just call things in the correct order. (Not very close to an FSM in that transitions are missing: a ton of back edges are left out because they get messy and repetitive. I realized I like looking at data flow better.)

    Edited by David Hammer
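    Since the class does no internal checking, the ordering is purely a convention between the correction device and the runner. Below is a minimal runnable sketch of that calling convention; the class and method names are hypothetical stand-ins for the real interface, and plain numpy stands in for the GPU work.

    ```python
    import numpy as np


    class RunnerSketch:
        """Hypothetical stand-in for DsscGpuRunner's calling convention.

        Nothing enforces the order; callers must load before correcting.
        """

        def load_constants(self, offset):
            self._offset = offset

        def load_data(self, raw):
            self._raw = raw.astype(np.float32)

        def correct(self):
            # The real runner triggers GPU kernels here.
            self._result = self._raw - self._offset

        def get_results(self):
            return self._result


    # The expected call order, FSM-like but unenforced:
    runner = RunnerSketch()
    runner.load_constants(np.zeros((400, 128, 512), dtype=np.float32))
    runner.load_data(np.ones((400, 128, 512), dtype=np.uint16))
    runner.correct()
    corrected = runner.get_results()
    ```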
  • David Hammer @hammerd

    mentioned in commit 3566342d