Re: 12.2.3 QE Luminous validation status

On Tue, 2018-02-13 at 16:41 -0500, Casey Bodley wrote:
> 
> On 02/13/2018 04:36 PM, Yuri Weinstein wrote:
> > Details of this release summarized here
> > http://tracker.ceph.com/issues/22665#note-4
> > 
> > The following suites included:
> > 
> > rados
> > rgw
> > rbd
> > krbd
> > fs
> > kcephfs
> > multimds
> > knfs
> > hadoop - EXCLUDED
> > samba - EXCLUDED
> > ceph-deploy
> > ceph-disk
> > upgrade/client-upgrade-hammer (luminous)
> > upgrade/client-upgrade-kraken (luminous)
> > upgrade/client-upgrade-jewel (luminous)
> > upgrade/jewel-x (luminous)
> > upgrade/kraken-x (luminous)
> > upgrade/luminous-x (master) - EXCLUDED
> > powercycle
> > ceph-ansible
> > ceph-volume
> > (please speak up if something is missing)
> > 
> > Please see all details in the ticket and add comments in the tracker.
> > 
> > Seeking approval from the dev leads.
> > 
> > Issues:
> > 
> > rados - passed; Josh, pls confirm approval.
> > 
> > rgw - Casey, Abhishek - do we want to add/merge
> > https://github.com/ceph/ceph/pull/20407 ?
> 
> Yes, please - that one was identified as a qa suite fix for a multisite
> test failure in the last round of testing.
> 
> > 
> > rbd, krbd - approved by Jason
> > 
> > fs, kcephfs, multimds - approved by Patrick
> > 
> > knfs - pending review/approval from Jeff (http://tracker.ceph.com/issues/22995)
> > 

So, we have two recent test runs, both with two failures:

The first run had one failure due to OSD_DOWN appearing in the logs, and
another due to what looks like a soft lockup in the kernel on one of the
nodes. There was no stack trace to go with the soft lockup, so I have no
idea what went wrong there. The second run just shows two OSD_DOWN
failures. So far I don't see anything on either run that looks directly
related to knfsd.
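
For context, OSD_DOWN failures like these usually mean the log scraper
tripped on a transient health warning rather than anything NFS-related:
teuthology fails a run when an unwhitelisted warning shows up in the
cluster log. Suites that expect OSDs to flap whitelist it in their yaml,
roughly like this (a sketch from memory, not necessarily the exact
fragment in the knfs suite):

    overrides:
      ceph:
        log-whitelist:
          - \(OSD_DOWN\)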

Yuri looked and saw that we have a lot of failing runs on this suite, so
it may be that the test itself is just broken. I'll look more closely
tomorrow and try to figure out whether that's the case or whether there
is a real bug here.
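
To get a rough signal on that, I'll probably start by tallying
failure_reason across the archived jobs; teuthology writes a
summary.yaml per job with "success" and "failure_reason" fields.
Something like this quick script would do it (the archive path below is
a placeholder, not the real one):

    #!/usr/bin/env python
    # Tally teuthology failure reasons across archived jobs so recurring
    # noise (e.g. OSD_DOWN) separates out from one-off failures.
    import os
    import yaml
    from collections import Counter

    ARCHIVE = "/teuthology/archive"  # placeholder; point at the real archive root

    reasons = Counter()
    for root, dirs, files in os.walk(ARCHIVE):
        if "summary.yaml" not in files:
            continue
        with open(os.path.join(root, "summary.yaml")) as f:
            summary = yaml.safe_load(f) or {}
        if not summary.get("success", True):
            reason = str(summary.get("failure_reason") or "unknown")
            # Keep only the first line so similar errors group together.
            reasons[reason.splitlines()[0]] += 1

    for reason, count in reasons.most_common():
        print("%4d  %s" % (count, reason))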

Cheers,
 
> > ceph-deploy - approved by Vasu
> > 
> > upgrade/client-upgrade-kraken - Nathan, can we include your fix
> > in this release?
> > 
> > upgrade/kraken-x (luminous) - some jobs still rerunning
> > 
> > upgrade/luminous-x (master) - Sage, do you want to exclude this suite for now?
> > 
> > ceph-volume - pending approval from Alfredo and Andrew
> > 
> > Pls reply
> > 
> > Thx
> > YuriW

-- 
Jeff Layton <jlayton@xxxxxxxxxx>