Abhishek Lekshmanan <abhishek@xxxxxxxx> writes:

> kefu chai <tchaikov@xxxxxxxxx> writes:
>
>> On Sun, Nov 5, 2017 at 8:17 PM, Abhishek L
>> <abhishek.lekshmanan@xxxxxxxxx> wrote:
>>> On Sat, Nov 4, 2017 at 8:51 AM, Nathan Cutler <ncutler@xxxxxxx> wrote:
>>>>> all or nothing. if we want to backport the fix of
>>>>> http://tracker.ceph.com/issues/21762, we will have to backport the
>>>>> "debian: fix package relationships after" fixes. so i think pull/18589
>>>>> is self-contained, and no additional step is needed.
>>>>>
>>>>> and per http://tracker.ceph.com/issues/21762, we need to have it in
>>>>> luminous. but i have no strong opinion on including it in 12.2.2 or
>>>>> post 12.2.2. Vikhyat and Nathan, what do you think?
>>>>
>>>> Yes, I tried to make 18589 self-contained by pulling in Kefu's and
>>>> Abhishek's follow-on fixes. I think it would be nice to have it in 12.2.2.
>>>
>>> Rebuilt the integration branch with the newer PR and it still fails
>>> for the same reason, e.g.
>>> http://pulpito.ceph.com/abhi-2017-11-05_11:53:02-ceph-disk-wip-abhi-testing-2017-11-05-0933-distro-basic-vps/
>>>
>>> Kefu, are there any more commits we need to solve the dependency issue?
>>
>> well, yes and no.
>>
>> we need to "dch" the changelog to bump the version number up to 12.2.2
>> to address this issue, but i thought this was a step of our release
>> process: see
>> https://github.com/ceph/ceph-build/blob/2cb4f4069c7a0fac1abecd76e6014272f22cf139/ansible/roles/ceph-release/tasks/release/stable.yml#L3.
>>
>> that's why i believed that PR#18589 was self-contained. anyway, we can
>> "fix" this issue in two ways:
>>
>> 1. "dch" the changelog in a commit used only for testing, which will
>> not be merged into luminous. run the upgrade suites using the dch'ed
>> version, and wait for the jenkins builder to tag and dch automatically
>> for the official release of 12.2.2.
>> 2. just drop PR#18589 and include it post 12.2.2, so the problem will
>> be resolved w/o any extra effort by then.
>
> I'm inclined to do #2. Either way, I've scheduled suites (which have
> completed now) without this PR, since I couldn't get moving with the
> PR included. I'll post the results soon & hopefully we can hand over
> to QE today itself.

Update: refer to http://tracker.ceph.com/issues/21830 for the latest
information.

RADOS and RBD suites were green; we ran into a couple of possible
environmental errors which didn't reproduce in the rerun, plus a
possible timing issue (also didn't reproduce):
http://tracker.ceph.com/issues/22047

CephFS runs seem to hit http://tracker.ceph.com/issues/22039
consistently; the other failures look environmental. Run info:
http://tracker.ceph.com/issues/21830#note-23. @Patrick, can you take a
look?

RGW initially hit some failures due to the s3-tests branch, which was
fixed. The tests are still running, though the failures so far have
been environmental or known: http://tracker.ceph.com/issues/21830#note-18
We also saw a valgrind leak on the mons in one of the RGW runs,
reported at http://tracker.ceph.com/issues/22052

ceph-disk consistently seems to run into a nearfull failure (it was
present in the first integration branch run as well), possibly related
to dmcrypt (a python backtrace is thrown with "unknown key
management-mode"): http://tracker.ceph.com/issues/21830#note-19
Unfortunately, I don't see any recent patches that went into
master/luminous that could explain this failure.

Upgrade luminous-x seems to consistently fail with being unable to
find ceph-common packages, e.g.
run: http://pulpito.ceph.com/abhi-2017-11-06_15:35:39-upgrade:luminous-x-wip-abhi-testing-2017-11-05-1320-distro-basic-smithi/

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
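
For reference, a minimal sketch of what the "dch" step in Kefu's option
#1 above might look like when done by hand (the jenkins release job in
the stable.yml task linked above normally does this; the exact version
string, distribution, and changelog message below are assumptions, not
what that job actually runs):

    # run from the top of the ceph source tree, where debian/changelog lives
    # version "12.2.2-1" and the message are placeholders for illustration
    dch -v 12.2.2-1 -D stable --force-distribution "12.2.2 test build, do not merge"
    git commit -am "debian: bump changelog to 12.2.2 (testing only)"

Built that way, the test packages would carry the 12.2.2 version the
upgrade suites expect, without the commit ever landing in the luminous
branch.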