Preparing the suites to run with the OpenStack backend

Hi Josh, Yehuda and Greg,

It is my understanding that there is a chance we may need to use the OpenStack teuthology backend as a backup while machines in the sepia lab migrate from one data center to another. Zack has set up a new teuthology cluster that behaves transparently like the cluster in the sepia lab: the only difference is that you would pass --machine-type openstack instead. Alternatively, the new teuthology-openstack command can be used if you feel like learning about it.
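In other words, scheduling a suite against the new cluster should only require changing the machine type, something along these lines (a sketch only; the exact flags may vary with your teuthology checkout):

teuthology-suite --suite rados --ceph hammer --machine-type openstack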

Despite our best efforts, OpenStack provisioning is not 100% transparent. Over the past few months we made various attempts at running suites on OpenStack to verify that they do not massively fail and to identify possible showstoppers. Where possible the OpenStack backend was adapted, but in some cases the suites themselves had to be modified. For instance, a number of jobs in the rados suite run fine with no attached disks, which is the default. But all jobs in rados/thrash need three attached disks per target, and that had to be set in the ceph-qa-suite files as follows [1]:

openstack:
  machine:
    disk: 40 # GB
    ram: 8000 # MB
    cpus: 1
  volumes: # attached to each instance
    count: 3
    size: 30 # GB
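As shown in [1], this stanza lives in a clusters/openstack.yaml fragment of the suite (suites/rados/thrash/clusters/openstack.yaml), so every job generated from rados/thrash picks up the resource hint.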

The rados suite for hammer now runs cleanly on OpenStack [2] and I'll work on making it run on infernalis as well [3]. The rbd suite for hammer runs cleanly (no changes needed :-) on OpenStack [4] but needs work to run on infernalis: an inventory of the problems was made [5].

A similar verification needs to be done for the rgw and fs suites (the upgrade / ceph-deploy / ceph-disk suites are not a concern, as they already run on virtual machines). The first problem that needs attention for the rgw suite is http://tracker.ceph.com/issues/12471#note-4 (which also happens with the infernalis rados suite because it includes some rgw workloads). AFAIK, there are no other outstanding issues.

I will not be able to run and fix all the suites by myself: it's too much work and would divert me for too long from my rados duties. I am, however, available to help as much as you need to make this work :-)

Cheers


[1] resource hint for rados/thrash https://github.com/ceph/ceph-qa-suite/blob/wip-12329-resources-hint-hammer/suites/rados/thrash/clusters/openstack.yaml
[2] running the rados suite on OpenStack virtual machines (hammer) http://tracker.ceph.com/issues/12386
[3] running the rados suite on OpenStack virtual machines (infernalis) http://tracker.ceph.com/issues/12471
[4] running the rbd suite on OpenStack virtual machines (hammer) http://tracker.ceph.com/issues/13265
[5] running the rbd suite on OpenStack virtual machines (infernalis) http://tracker.ceph.com/issues/13270

-- 
Loïc Dachary, Artisan Logiciel Libre

