Look at the '--suite' options. You can tell teuthology to use a modified
local directory for '/qa'.

On Tue, Apr 21, 2020 at 1:49 AM Rishabh Dave <ridave@xxxxxxxxxx> wrote:
>
> Hi all,
>
> I am testing my "multifs auth caps" PR [1] with teuthology. The issue
> is that just to discover mistakes in my patch for files like
> kernel_mount.py [2], mount.py [3], and fuse_mount.py [4], I need to
> push the branch to ceph-ci, wait several hours for the build to
> complete, trigger the teuthology tests, and then wait again for them
> to run, repeating all of these steps until every issue in my patch is
> fixed.
>
> Since [2][3][4] are Python files, they play no role in the build
> process AFAIK. If that is really the case, is there a way to skip the
> "wait for the build to complete" part and trigger the tests directly
> using the binaries from a previous build? This would save me several
> hours and take the boredom out of testing these changes. If previous
> builds are wiped out when I update my copy of the PR branch on
> ceph-ci, I could maintain two branches on ceph-ci instead: one for
> builds and the other for the Python changes.
>
> I did run my tests locally with vstart_runner.py to reduce the number
> of round trips to pulpito.ceph.com, but the changes in [2][3][4] are
> not exercised by vstart_runner.py, since vstart_runner.py uses its
> own classes for handling CephFS mounts.
>
> Thanks,
> - Rishabh
>
> [1] https://github.com/ceph/ceph/pull/32581
> [2] https://github.com/rishabh-d-dave/ceph/blob/wip-djf-15070/qa/tasks/cephfs/kernel_mount.py
> [3] https://github.com/rishabh-d-dave/ceph/blob/wip-djf-15070/qa/tasks/cephfs/mount.py
> [4] https://github.com/rishabh-d-dave/ceph/blob/wip-djf-15070/qa/tasks/cephfs/fuse_mount.py

--
Cheers,
Brad

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
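
For something concrete to start from, here is a rough sketch of the kind
of invocation Brad is pointing at. It is not a verified command line: the
flags (--ceph, --suite, --suite-branch, --machine-type) are assumed from
teuthology-suite's help, and the suite name, machine type, and the second
branch name (wip-djf-15070-builds) are placeholders for the two-branch
setup Rishabh describes. The idea is that --ceph selects the branch whose
packages were already built, while --suite-branch (or --suite-dir with a
local checkout) supplies the suite yaml and the qa/tasks Python, so a
change limited to kernel_mount.py, mount.py, or fuse_mount.py should not
require a fresh build.

    # --ceph: branch whose packages were already built (placeholder name)
    # --suite-branch: branch carrying only the qa/ Python changes
    teuthology-suite --ceph wip-djf-15070-builds \
                     --suite fs \
                     --suite-branch wip-djf-15070 \
                     --machine-type smithi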