Re: [SOLVED] Re: Ceph block storage and Openstack Cinder Scheduler issue

On 19 September 2013 11:51, Gavin <netmatters@xxxxxxxxx> wrote:

Hi,

Please excuse/disregard my previous email, I just needed a
clarification on my understanding of how this all fits together.

I was kindly pointed in the right direction by a friendly gentleman
from Rackspace. Thanks Darren. :)

The reason for my confusion was due to the way that the volumes are
displayed in the Horizon dashboard.

The dashboard shows that all volumes are attached to one Compute node,
which obviously led to my initial concerns.

Now that I know that the connections come from libvirt on the compute
node where the instances live, I have one less thing to worry about.

Thanks,
Gavin
_______________________________________________

Yes, AFAIK cinder-volume is largely only involved in brokering the initial volume creation in RBD. libvirt on the compute host where the instance lives then connects to RBD directly.
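For anyone else thrown off by what the dashboard shows, you can confirm this on the compute node itself with `virsh dumpxml <instance>`: the attached Cinder volume appears as a network disk pointing straight at the Ceph monitors, not at the cinder-volume host. A rough sketch of what that disk stanza looks like (hostnames, pool/volume names, and the secret UUID below are placeholders, not values from this thread):

```xml
<!-- Illustrative libvirt domain XML for an RBD-backed Cinder volume.
     The <source> element targets the Ceph monitors directly, which is
     why the connection originates from the compute node, not from the
     cinder-volume host. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='cinder'>
    <!-- UUID of the libvirt secret holding the cephx key (placeholder) -->
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='volumes/volume-00000000-0000-0000-0000-000000000000'>
    <host name='mon1.example.com' port='6789'/>
    <host name='mon2.example.com' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Seeing every volume listed under one node in Horizon is just a display quirk; the actual RBD sessions fan out from each compute host's libvirt/QEMU processes.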

Whilst this would suggest that the initial cinder-volume host that brokered the creation is no longer needed after creation, I do vaguely remember there still being some sort of lingering requirement for that host to remain available, in Grizzly at least. That may be fixed now, but I'd be interested to hear your experiences with that.

Darren

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
