Re: Status of luminous v12.2.1 QE validation

On 2017-09-25 19:33, Patrick Donnelly wrote:
On Mon, Sep 25, 2017 at 10:28 AM, Sage Weil <sweil@xxxxxxxxxx> wrote:
On Mon, 25 Sep 2017, Abhishek wrote:
On 2017-09-25 11:43, Abhishek wrote:
> On 2017-09-25 03:08, Yan, Zheng wrote:
> > On 2017/9/24 22:03, Abhishek wrote:
> > > On 2017-09-19 23:11, Patrick Donnelly wrote:
> > > > On Tue, Sep 19, 2017 at 1:10 PM, Yuri Weinstein <yweinste@xxxxxxxxxx>
> > > > wrote:
> > > > > kcephfs - needs analysis - Patrick, Zheng FYI
> > > >
> > > > http://tracker.ceph.com/issues/21463
> > > > http://tracker.ceph.com/issues/21462
> > > > http://tracker.ceph.com/issues/21466
> > > > http://tracker.ceph.com/issues/21467
> > > > http://tracker.ceph.com/issues/21468
> > > >
> > > > Last 3 look like blockers. Zheng will have more input.
> > >
> > > Here are the results of a rerun with -k testing,
> > > http://pulpito.ceph.com/abhi-2017-09-22_19:35:04-kcephfs-luminous-testing-basic-smithi/
> > > & details http://tracker.ceph.com/issues/21296#note-20
> > > There are a few failures which look environmental, but a few look
> > > like they are related to the changeset (the ones involving the cache).
> > >
> > The "MDS cache is too large" issue should be fixed by
> > https://github.com/ceph/ceph/pull/17922
> Ah, alright, this is a qa suite fix; we can run the suite with a qa
> suite argument just to be sure.

Alternatively, we can just merge this into the luminous branch so it gets
tested in the final QE run anyway; we may have to do that run in any case,
since there are some rados and cephfs changes?
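
For reference, a rerun pointed at the unmerged QA fix would look something
like the following (the --suite-branch value is just an illustrative
placeholder for whichever branch carries
https://github.com/ceph/ceph/pull/17922; the other options mirror the
earlier -k testing run on smithi, and exact flags may vary with the
teuthology version):

    teuthology-suite --suite kcephfs --ceph luminous -k testing -m smithi \
        --suite-branch wip-mds-cache-qa-fix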

Yes, let's do that.

I think all of the other blockers are resolved now?

These whitelist QA fixes would be good to merge to silence spurious
failures for QE:

https://github.com/ceph/ceph/pull/17945
https://github.com/ceph/ceph/pull/17821

Regarding QE validation, my vote is to just go through the cephfs & rados
suites, since a rados PR and a few (mostly QE-related) cephfs PRs went in.

Sage/Yuri/Patrick thoughts?


Otherwise CephFS looks good to go.



