interpreting rados run failures with the teuthology OpenStack backend

[cc'ing ceph-devel in case someone else tries to run the rados suite on OpenStack]

Hi Kefu,

Your rados run at http://149.202.175.39:8081/ubuntu-2015-07-26_14:28:51-rados-wip-kefu-t3sting---basic-openstack/ has a number of failed / dead jobs. Some of them are environmental failures; others happen because the OpenStack instances are slower than the bare metal machines, have smaller disks, and therefore run into timeouts more easily. A few of them I can't explain right now and maybe they are just bugs ;-)
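
In case it helps with triage, here is a minimal sketch that separates passed, failed and dead jobs in a run archive. It assumes the usual teuthology layout where each job directory contains a summary.yaml with a "success" flag and an optional "failure_reason"; the paths and field names are assumptions, not taken from this particular run:

    #!/usr/bin/env python
    # Rough triage sketch: walk a teuthology run archive and bucket jobs.
    # Assumes each job directory holds a summary.yaml with a boolean
    # "success" field; jobs without one are treated as dead. Adjust the
    # path and field names to match your archive.
    import os
    import sys
    import yaml

    def triage(run_dir):
        passed, failed, dead = [], [], []
        for job_id in sorted(os.listdir(run_dir)):
            job_dir = os.path.join(run_dir, job_id)
            if not os.path.isdir(job_dir):
                continue
            summary = os.path.join(job_dir, 'summary.yaml')
            if not os.path.exists(summary):
                dead.append(job_id)  # never finished, no summary written
                continue
            with open(summary) as f:
                data = yaml.safe_load(f) or {}
            if data.get('success'):
                passed.append(job_id)
            else:
                failed.append((job_id, data.get('failure_reason', 'unknown')))
        return passed, failed, dead

    if __name__ == '__main__':
        passed, failed, dead = triage(sys.argv[1])
        print("passed: %d, failed: %d, dead: %d" % (len(passed), len(failed), len(dead)))
        for job_id, reason in failed:
            print("%s: %s" % (job_id, reason))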

There is an inventory of all the failures found in a rados run against the next branch today, also using OpenStack, at http://tracker.ceph.com/issues/12471 (a rough sketch for bucketing failure reasons follows the list below):

* timeout after 4h: http://tracker.ceph.com/issues/12471#note-8
* mysterious rgw DNSError which never happens in sepia: http://tracker.ceph.com/issues/12471#note-4
* out of disk (OpenStack machines only have 40GB): http://tracker.ceph.com/issues/12471#note-14
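
To sort failure reasons into the categories above, a minimal sketch; the pattern strings are my guesses at how the failure_reason lines are typically worded, not taken from the actual logs:

    # Rough bucketing of teuthology failure_reason strings into the
    # categories above. The patterns are guesses at typical wording
    # (e.g. "timed out", "DNSError", "No space left on device");
    # adjust them to whatever your run actually reports.
    import re

    BUCKETS = [
        ('timeout',      re.compile(r'max job time|timed? ?out|reached maximum tries', re.I)),
        ('rgw DNSError', re.compile(r'DNSError', re.I)),
        ('out of disk',  re.compile(r'No space left on device|disk full', re.I)),
    ]

    def bucket(failure_reason):
        """Return the first matching bucket name, or 'unclassified'."""
        for name, pattern in BUCKETS:
            if pattern.search(failure_reason or ''):
                return name
        return 'unclassified'

    if __name__ == '__main__':
        print(bucket('Command failed: ... No space left on device'))  # -> out of disk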

For the record, here are the archives of the sepia runs: http://pulpito.ceph.com/?suite=rados&branch=next

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre
