Re: Status of luminous v12.2.1 QE validation

Do we do a 2nd round of QE validation?
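
For reference, kicking off that kind of kcephfs rerun is usually a single
teuthology-suite invocation; the sketch below is only illustrative, with the
suite/branch/machine values inferred from the pulpito link in the thread and
<qa-branch-with-the-fix> as a placeholder for whichever qa branch carries the
suite fix (double-check the flags against teuthology-suite --help):

  # sketch only: values inferred from the pulpito run name, not a verified command
  teuthology-suite -s kcephfs -c luminous -k testing -m smithi \
      --suite-branch <qa-branch-with-the-fix>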

On Mon, Sep 25, 2017 at 10:28 AM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Mon, 25 Sep 2017, Abhishek wrote:
>> On 2017-09-25 11:43, Abhishek wrote:
>> > On 2017-09-25 03:08, Yan, Zheng wrote:
>> > > On 2017/9/24 22:03, Abhishek wrote:
>> > > > On 2017-09-19 23:11, Patrick Donnelly wrote:
>> > > > > On Tue, Sep 19, 2017 at 1:10 PM, Yuri Weinstein <yweinste@xxxxxxxxxx>
>> > > > > wrote:
>> > > > > > kcephfs - needs analysis - Patrick, Zheng FYI
>> > > > >
>> > > > > http://tracker.ceph.com/issues/21463
>> > > > > http://tracker.ceph.com/issues/21462
>> > > > > http://tracker.ceph.com/issues/21466
>> > > > > http://tracker.ceph.com/issues/21467
>> > > > > http://tracker.ceph.com/issues/21468
>> > > > >
>> > > > > Last 3 look like blockers. Zheng will have more input.
>> > > >
>> > > > Here are the results of a rerun with -k testing,
>> > > > http://pulpito.ceph.com/abhi-2017-09-22_19:35:04-kcephfs-luminous-testing-basic-smithi/
>> > > > & details http://tracker.ceph.com/issues/21296#note-20
>> > > > There are a few failures which look environmental, but a few look like
>> > > > they are related to the changeset (the ones with cache).
>> > > >
>> > > The "MDS cache is too large" issue should be fixed by
>> > > https://github.com/ceph/ceph/pull/17922
>> > Ah, alright, this is a qa suite fix; we can run the suite with a qa
>> > suite argument just to be sure.
>>
>> Alternatively, we can just merge this into the luminous branch and it gets
>> tested in the final qe run anyway; we may have to do that since there are
>> some rados and cephfs changes?
>
> Yes, let's do that.
>
> I think all of the other blockers are resolved now?
>
> sage
>
>
>> >
>> > Are there any other cephfs patches that need to go into the luminous
>> > branch as such? I believe we would want to run the qe suites after the
>> > RADOS PR (https://github.com/ceph/ceph/pull/17796) goes in.
>> > Sage, Josh, Yehuda, Patrick:
>> > Is there anything else that needs to go in before we start a second round?
>> >
>> >
>> >
>> > > Regards
>> > > Yan, Zheng
>> > >
>> > > > Best,
>> > > > Abhishek
>> >
>>
>>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


