Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases

"How will that go for the next run of upgrade/giant-x ?"

I was thinking that as soon as, for example, this suite passes, #11189 gets resolved, which indicates that it's ready for the hammer release cut.


Thx
YuriW

----- Original Message -----
From: "Loic Dachary" <loic@xxxxxxxxxxx>
To: "Yuri Weinstein" <yweinste@xxxxxxxxxx>
Cc: "Sage Weil" <sweil@xxxxxxxxxx>, "Ceph Development" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Sunday, March 22, 2015 5:35:19 PM
Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases



On 22/03/2015 17:16, Yuri Weinstein wrote:
> Loic, I think the idea was to take a more process-driven approach to releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high-level view of status at any time before the final cut day.
> 
> Do you have any suggestions or objections?

Reading http://tracker.ceph.com/issues/11189 I see it has one run, then a run of the failed tests, and it got resolved because all of them passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x?

I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)

$ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
** *Could not reconnect to ubuntu@xxxxxxxxxxxxxxxxxxxxxxxxxxx*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
** *Could not reconnect to ubuntu@xxxxxxxxxxxxxxxxxxxxxxxxxxx*
*** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
*** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
** *timed out waiting for admin_socket to appear after osd.13 restart*
*** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@xxxxxxxxxxx>
> To: "Sage Weil" <sweil@xxxxxxxxxx>
> Cc: "Ceph Development" <ceph-devel@xxxxxxxxxxxxxxx>
> Sent: Sunday, March 22, 2015 1:54:06 AM
> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Sage,
> 
> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind?
> 
> Cheers
> 

-- 
Loïc Dachary, Artisan Logiciel Libre