Ceph block storage and OpenStack Cinder Scheduler issue


 



Hi there,

Can someone possibly shed some light on an issue we are experiencing
with the way Cinder is scheduling Ceph volumes in our environment?

We are running cinder-volume on each of our compute nodes, and they
are all configured to make use of our Ceph cluster.
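
For reference, the RBD-related part of cinder.conf on each node looks
roughly like this (quoting from memory, so the driver path may not be
exactly right for Grizzly, and the pool name, user and secret UUID are
just placeholders for our actual values):

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <our libvirt secret uuid>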

As far as we can tell the Ceph cluster is working as it should;
the problem we are having, however, is that each and every Ceph volume
gets attached to only one of the compute nodes.

This is not ideal, as it will create a bottleneck on that one host.

From what I have read, the default Cinder scheduler should pick the
cinder-volume node with the most available space, but since all
compute nodes should report the same available space (i.e. the space
available in the Ceph volume pool), how is this meant to work?
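
We have not changed any scheduler settings, so as far as I understand
we are running the defaults, which I believe amount to something like
the following in cinder.conf (please correct me if these option names
or defaults are wrong for Grizzly):

    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_weighers = CapacityWeigher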

We have also tried the Cinder chance scheduler, in the hope that
Cinder would randomly pick another storage node, but this did not
make any difference.
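
Specifically, what we set when testing the chance scheduler was
(again from memory, so apologies if the class path is slightly off):

    scheduler_driver = cinder.scheduler.chance.ChanceScheduler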

Has anyone else experienced the same or a similar issue?

Is there perhaps a way that we can round-robin the volume attachments?

OpenStack version: Grizzly, using Ubuntu LTS and the Cloud PPA.

Ceph version: Cuttlefish from Ceph PPA.

Thanks in advance,
Gavin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



