Re: 13.2.5 QE Mimic validation status

Sage, ready for your final approval

David, Abhishek FYI

On Tue, Mar 12, 2019 at 2:15 AM Sebastien Han <shan@xxxxxxxxxx> wrote:
>
> Thanks Brad! =D
>
> –––––––––
> Sébastien Han
> Principal Software Engineer, Storage Architect
>
> "Always give 100%. Unless you're giving blood."
>
> On Tue, Mar 12, 2019 at 9:19 AM Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:
> >
> > I can't reproduce this, so I can't really take it any further. If I'm
> > the guy who signs off on ceph-ansible, then I'm signing off on
> > ceph-ansible.
> >
> > On Tue, Mar 5, 2019 at 7:59 PM Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:
> > >
> > > The ceph-ansible (CA) failure looks like this.
> > >
> > > 2019-03-04T20:11:45.761 INFO:teuthology.orchestra.run.ovh068.stdout:*******************************************************************************
> > > 2019-03-04T20:11:45.761 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-04T20:11:45.761 INFO:teuthology.orchestra.run.ovh068.stdout:  warnings.warn(DEPRECATION_WARNING)
> > > 2019-03-05T06:27:30.938 INFO:teuthology.orchestra.run.ovh068.stdout:failed: [ovh099.front.sepia.ceph.com] (item=/dev/sdc) => {
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:    "changed": false,
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:    "cmd": [
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:        "ceph-disk",
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:        "activate",
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:        "/dev/sdc1"
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:    ],
> > > 2019-03-05T06:27:30.965 INFO:teuthology.orchestra.run.ovh068.stdout:    "delta": "10:15:44.896287",
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:    "end": "2019-03-05 06:27:30.868619",
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:    "item": "/dev/sdc",
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:    "rc": 1,
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:    "start": "2019-03-04 20:11:45.972332"
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:}
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:STDERR:
> > > 2019-03-05T06:27:30.966 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:/usr/lib/python2.7/dist-packages/ceph_disk/main.py:5689: UserWarning:
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:*******************************************************************************
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:This tool is now deprecated in favor of ceph-volume.
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:It is recommended to use ceph-volume for OSD deployments. For details see:
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout: http://docs.ceph.com/docs/master/ceph-volume/#migrating
> > > 2019-03-05T06:27:30.967 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:*******************************************************************************
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:  warnings.warn(DEPRECATION_WARNING)
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:Removed symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service.
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service to /lib/systemd/system/ceph-osd@.service.
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:Warning! D-Bus connection terminated.
> > > 2019-03-05T06:27:30.968 INFO:teuthology.orchestra.run.ovh068.stdout:Failed to wait for response: Connection reset by peer
> > >
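> > > (Aside: the banner above is produced by a plain warnings.warn() call
> > > in ceph-disk, which is why it surfaces as a UserWarning pointing at
> > > ceph_disk/main.py. A minimal sketch of that pattern, with the banner
> > > text taken from the log but the variable name and layout only
> > > approximated, not copied from the real source:
> > >
> > > import warnings
> > >
> > > # Approximation of ceph-disk's deprecation banner; the real
> > > # ceph_disk/main.py may define and format this differently.
> > > DEPRECATION_WARNING = """
> > > *******************************************************************************
> > > This tool is now deprecated in favor of ceph-volume.
> > > It is recommended to use ceph-volume for OSD deployments. For details see:
> > >
> > >  http://docs.ceph.com/docs/master/ceph-volume/#migrating
> > >
> > > *******************************************************************************
> > > """
> > >
> > > # warnings.warn() with a bare string raises a UserWarning by default,
> > > # matching the "UserWarning" seen in the teuthology output above.
> > > warnings.warn(DEPRECATION_WARNING))
> > >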
> > > Note that the jobs take 7-11 hours to time out (see the time
> > > difference in the log excerpt above) and this only seems to happen on
> > > Ubuntu 16.04. At this time I have no idea whether this has much, if
> > > anything, to do with ceph-ansible or not. I'll try to look further
> > > into it when I get time. Note also that the return code is 1,
> > > /* Operation not permitted */.
> > >
> > > On Tue, Mar 5, 2019 at 7:49 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> > > >
> > > > On Mon, Mar 4, 2019 at 10:27 PM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
> > > > >
> > > > > Details of this release summarized here:
> > > > >
> > > > > http://tracker.ceph.com/issues/38435#note-3
> > > > >
> > > > > rados - PASSED
> > > > > rgw - Casey approved?
> > > > > rbd - Jason approved?
> > > > > fs - Patrick, Venky approved?
> > > > > kcephfs - Patrick, Venky approved?
> > > > > multimds - Patrick, Venky approved? (still re-running on -k testing)
> > > > > krbd - Ilya, Jason approved? (same as in v13.2.3)
> > > >
> > > > Approved.
> > > >
> > > > Thanks,
> > > >
> > > >                 Ilya
> > >
> > >
> > >
> > > --
> > > Cheers,
> > > Brad
> >
> >
> >
> > --
> > Cheers,
> > Brad


