Yuri: the ceph-disk suite had 2 passes and 1 failure in OVH. The failure
happened in the ceph-cm-ansible (VM provisioning) phase: "Failure
downloading
http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm,
Request failed: <urlopen error timed out>".
I'm not sure how the OVH hosts are supposed to be able to download stuff
from sepia.ceph.com, but it apparently works some of the time since two
jobs succeeded.
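For what it's worth, a quick way to narrow this down would be to probe the
URL directly from an affected OVH node with a short timeout. Below is a
minimal sketch (a hypothetical standalone script, not part of teuthology or
ceph-cm-ansible) that makes roughly the same urlopen call the provisioning
phase does:

    #!/usr/bin/env python3
    # Hypothetical connectivity probe; run it on an OVH test node to see
    # whether the Satellite server is reachable at all.
    import sys
    from urllib.request import urlopen
    from urllib.error import URLError

    URL = ("http://satellite.front.sepia.ceph.com/pub/"
           "katello-ca-consumer-latest.noarch.rpm")

    try:
        # Short timeout so a blackholed route fails fast instead of hanging.
        resp = urlopen(URL, timeout=10)
        print("reachable: HTTP %s, %s bytes"
              % (resp.getcode(), resp.headers.get("Content-Length", "?")))
    except URLError as exc:
        # Same "<urlopen error timed out>" failure mode the job reported.
        print("unreachable: %s" % exc)
        sys.exit(1)

Running that a handful of times from the same host would at least show
whether the timeouts are intermittent or specific to certain nodes.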
I would suggest re-running this one job, or just re-running the entire
suite on VPS (the ceph-disk suite was always very stable on VPS).
On 07/19/2018 11:47 PM, Yuri Weinstein wrote:
Details of this release are summarized here:
http://tracker.ceph.com/issues/24981#note-2
The following suites were included:
rados
rgw
rbd PASSED
fs
kcephfs
multimds
krbd
knfs PASSED
ceph-deploy
ceph-disk
upgrade/client-upgrade-jewel (mimic) PASSED
upgrade/client-upgrade-luminous (mimic) PASSED
upgrade/luminous-x (mimic)
upgrade/luminous-p2p
powercycle
ceph-ansible
ceph-volume
(please speak up if something is missing)
Please review the results (for suites without the 'PASSED' mark); also
consider resolving the noted tickets, and add more if needed.
Infrastructure related issues:
- The ability to rerun "failed" jobs is limited; Kefu is fixing it, Zack
FYI (https://github.com/ceph/teuthology/pull/1182 is not completely
working)
- p2p upgrades cannot run due to http://tracker.ceph.com/issues/24760
Thx
YuriW
--
Nathan Cutler
Software Engineer Distributed Storage
SUSE LINUX, s.r.o.
Tel.: +420 284 084 037