On Tue, Jul 24, 2018 at 8:33 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Tue, Jul 24, 2018 at 1:15 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>>
>> On Tue, Jul 24, 2018 at 7:46 AM, Laura Paduano <lpaduano@xxxxxxxx> wrote:
>> > Hi,
>> >
>> > On Monday, 23 July 2018 16:10:53 CEST, Gregory Farnum wrote:
>> >> On Mon, Jul 23, 2018 at 6:13 AM Laura Paduano <lpaduano@xxxxxxxx> wrote:
>> >> > Hi all,
>> >> >
>> >> > I've created a pull request which is supposed to run our Ceph mgr dashboard frontend (e2e) tests. [1]
>> >> > In order to run those tests, we need a compiled Ceph available on the system, so I was wondering how to integrate this into the Jenkins job. I guess one option would be to include the run-make-check script the way it's used in the ceph-pull-requests Jenkins job [2]? Or is there any other way?
>> >>
>> >> This isn't my expertise, but there definitely isn't an established pattern to follow here. You may have noticed that so far the Jenkins-based tests predominantly either
>> >> 1) build Ceph, as in the build-and-make-check jobs that run on PRs, or
>> >> 2) are used for ceph-ansible, which itself deploys Ceph from packages.
>> >>
>> >> There are also the ceph-volume tests, which probably do turn on a cluster, so you could look to see if they're doing something else?
>> >
>> > I'll take a look at those jobs and try to figure out how it's done there. Thanks!
>> >
>> >> Otherwise, there are two basic paths:
>> >> 1) You can install from packages, either full releases or dev packages built off the master branch etc., or
>> >> 2) You can build Ceph again locally, based on the git repo you're testing the dashboard for.
>> >
>> > I think #2 would be the preferred way (or at least that's how we did it with pull requests in openATTIC).
>> >
>> >> Those will have different tradeoffs and I'm not sure what dominates for the dashboard. (For instance, how often do dashboard PRs include required changes to other Ceph systems?) Hopefully, if you do decide to do your own builds, you can somehow integrate with the existing per-PR tests, or turn those off in favor of your larger job. It's not critical, but those builds do take up some time, so it'd be nice not to re-do them unless necessary.
>> >
>> > IIRC we thought about integrating the tests into the existing per-PR tests, but there were also some disadvantages (for example, the dependency on google-chrome; also, it might not be desirable to have the e2e tests executed on *every* pull request, since running them will become very time consuming as the number of tests grows).
>> > I was wondering if it's possible to have something like an "if the PR being tested has the 'dashboard' label, go ahead, install Chrome and execute the e2e tests, otherwise skip" condition in a Jenkins job configuration (a sketch of what such a check could look like follows below).
>>
>> This is a similar situation for ceph-volume. We have highly functional tests that require binaries and repos for the various scenarios we support, and we don't get them for every pull request. So at the moment it is handled on a case-by-case basis, where we manually create these repos and then trigger the jobs to test against them.
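
Going back to Laura's label question for a moment, here is a minimal sketch of what such a label-gated shell build step could look like. The PR number variable ($ghprbPullId, as set by the GitHub Pull Request Builder plugin), the presence of curl and jq on the builder, and the install/test commands at the end are all assumptions, not something the existing ceph-build jobs are known to provide:

    #!/bin/bash
    # Label-gated builder step (sketch). Assumes the job trigger exposes the
    # PR number as $ghprbPullId (the GitHub Pull Request Builder plugin does
    # this) and that curl and jq are installed on the builder node.
    set -e

    labels=$(curl -s "https://api.github.com/repos/ceph/ceph/issues/${ghprbPullId}/labels" | jq -r '.[].name')

    if ! echo "${labels}" | grep -qx "dashboard"; then
        echo "PR ${ghprbPullId} has no 'dashboard' label, skipping e2e tests"
        exit 0
    fi

    # Only now pay for the browser dependency and the test run; the exact
    # install and test commands below are placeholders.
    sudo yum install -y google-chrome-stable
    cd src/pybind/mgr/dashboard/frontend
    npm ci && npm run e2e

Note that unauthenticated GitHub API calls like this are rate-limited, so a real job would probably want to authenticate with a token.
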
>> The lack of automation is somewhat annoying, and the wait times to get repos are now about 1.5 hours (the unfortunate effect of the ever-growing set of packages and dependencies in the tree), but we went this way to avoid building repos unnecessarily for every PR and running ceph-volume tests that were not really needed.

> Is the dependency specifically on having package repos, or just having some binaries?

Repos, because we test against both CentOS 7 and Xenial (and are soon to add a few more), multiplied by our supported OSD scenarios (e.g. dmcrypt). Since we test with ceph-ansible, we want to be sure that the whole deployment works as if this were a release.

> One thing I'd love would be for the "make check" process to stash its built tree, so that follow-on jobs (tests that work with a vstart cluster, like qa/mgr, qa/cephfs) could run without rebuilding binaries or waiting for packages.

This wouldn't work well for our use case, because we deploy a cluster. I'm not sure how we would do that with just the binaries, unless you have some ideas here. (A rough sketch of the stash-and-reuse idea is at the end of this message.)

> John

>> > Or another way could be to trigger the dashboard tests after the existing per-PR tests have passed, by accessing the system on which the PR tests were already run. But I imagine this could be difficult, since those systems will be destroyed after the tests, won't they?
>> >
>> > Thanks a lot for your feedback, Greg!
>> >
>> > Laura
>> >
>> >> -Greg
>> >>
>> >> > [1] https://github.com/ceph/ceph-build/pull/1086
>> >> > [2] https://github.com/ceph/ceph-build/blob/master/ceph-pull-requests/config/definitions/ceph-pull-requests.yml#L56
>> >> >
>> >> > Feedback/ideas/reviews are much appreciated! :)
>> >> >
>> >> > Thanks in advance,
>> >> > Laura
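
As a footnote to John's "stash the built tree" suggestion above: for vstart-based consumers (qa/mgr, or the dashboard e2e tests) a very rough sketch could look like the pair of shell steps below. It assumes the build job and the follow-on job can share artifact storage and run on ABI-compatible machines, and it glosses over the fact that vstart.sh needs more of the tree than just the binaries (Python bindings, helper scripts, configuration). As Alfredo points out, it also wouldn't help the package-based ceph-ansible deployments.

    # In the build job, after run-make-check.sh has produced ./build.
    # ${GIT_COMMIT} is the usual Jenkins git-plugin variable; the archive
    # name and the file list are illustrative only.
    tar czf "ceph-${GIT_COMMIT}.tar.gz" build src

    # In the follow-on job, on a matching OS image:
    tar xzf "ceph-${GIT_COMMIT}.tar.gz"
    cd build
    MON=1 MGR=1 OSD=3 MDS=0 ../src/vstart.sh -n -d   # throwaway local cluster
    # ...run the vstart-based tests (qa/mgr, dashboard e2e) here...
    ../src/stop.sh   # tear the cluster down again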