Re: [Ceph-qa] 12.2.10 QE Luminous validation status

Vasu, is it this one? https://tracker.ceph.com/issues/22463
On Thu, Nov 8, 2018 at 2:48 PM Sage Weil <sweil@xxxxxxxxxx> wrote:
>
> On Thu, 8 Nov 2018, Vasu Kulkarni wrote:
> > On Thu, Nov 8, 2018 at 10:42 AM Vasu Kulkarni <vakulkar@xxxxxxxxxx> wrote:
> >
> > > On Thu, Nov 8, 2018 at 8:13 AM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
> > > >
> > > > This release is to address regressions in this ticket
> > > > http://tracker.ceph.com/issues/36686
> > > >
> > > > Distro runs are still in progress
> > > >
> > > > Details of this release summarized here:
> > > > http://tracker.ceph.com/issues/36711#note-2
> > > >
> > > > rados - PASSED
> > > > rgw - Casey, Matt approve? see known bugs
> > > > rbd - Jason approved
> > > > krbd - PASSED
> > > > fs - Patrick approve?
> > > > kcephfs - Patrick approve?
> > > > multimds - Patrick approve?
> > > > knfs - deprecated
> > > > ceph-deploy - Sage, Vasu approve? see bugs
> > > Per Sage's earlier analysis, it looks like the disks are not cleaning
> > > up their partitions, and the failures are mostly bluestore tests
> > > (some have passed as well). We have been running *zap* before every
> > > osd create, so probably zap is not doing its job; cm-ansible also
> > > does the osd partition cleanup. I will run this today in OVH; if it
> > > passes there, that confirms it is a disk cleanup issue or something
> > > outside ceph-deploy.
> > >
> > > All failures are due to OSDs not coming up; we have an old tracker for this.
> > >
> > > http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-06_17:15:26-ceph-deploy-luminous-distro-basic-mira/3230597/teuthology.log
> > > osd: 2 osds: 0 up, 0 in
> > >
> > Ran most of the tests that failed above in OVH and they pass, which
> > confirms the issue is with zap/disk cleanup:
> > http://pulpito.ceph.com/vasu-2018-11-08_21:05:08-ceph-deploy-luminous-distro-basic-ovh/
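The zap-before-create step Vasu describes could be sketched as the dry-run script below. To be clear, this is an illustrative sketch, not the suite's actual teardown code: the hostname (`mira-host`), device path, and the exact `ceph-deploy disk zap` argument form are assumptions, and the extra `wipefs`/`sgdisk`/`dd` passes are the kind of belt-and-braces cleanup one might try when zap alone leaves stale bluestore partitions behind.

```shell
#!/bin/sh
# Dry-run sketch of reused-OSD-disk cleanup (hypothetical host/device names).
# With DRY_RUN=1 (the default) commands are printed, not executed, since
# every one of these is destructive to the target disk.
DEV=${DEV:-/dev/sdb}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ceph-deploy disk zap mira-host "$DEV"    # the zap the suite already performs
run wipefs --all "$DEV"                      # clear filesystem/partition signatures
run sgdisk --zap-all "$DEV"                  # destroy GPT and protective MBR
run dd if=/dev/zero of="$DEV" bs=1M count=10 # zero the first few MB of the disk
```

If zap alone were sufficient, the extra wiping would be redundant; the mira failures above suggest it sometimes is not.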
>
> Thanks for checking this, Vasu!
>
> +1 on ceph-deploy
>
> sage
>
>
> >
> > >
> > >
> > >
> > > > ceph-disk - PASSED
> > > > upgrade/client-upgrade-hammer (luminous) PASSED
> > > > upgrade/client-upgrade-jewel (luminous) PASSED
> > > > upgrade/luminous-p2p PASSED
> > > > upgrade/jewel-x (luminous) PASSED
> > > > upgrade/kraken-x (luminous) - Josh approve? (EOL)
> > > > powercycle - Neha approve?
> > > > ceph-ansible - Sebastian
> > > > https://github.com/ceph/ceph-ansible/issues/3306 needs fixing.
> > > > ceph-volume - PASSED, Alfredo approve?
> > > > (please speak up if something is missing)
> > > >
> > > > Thx
> > > > YuriW
> > > >
> > >
> >


