Re: Is it possible to have One Ceph-OSD-Daemon managing more than one OSD

On 14/02/14 22:07, Vikrant Verma wrote:
Hi All,

I was trying to define QoS on volumes in the OpenStack setup. The Ceph
cluster is configured as the storage back-end for images and volumes.

As part of my experimentation I thought of grouping a few disks (say HDDs)
under one type of QoS and a few other disks (say SSDs) under another type
of QoS. But that configuration/design does not seem to be efficient, as you
suggested, so now I am trying to put the QoS on the volumes themselves.

Thanks for your suggestions.
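For context, QoS on the volumes themselves is normally expressed as Cinder
QoS specs attached to a volume type. The spec name and limit values below
are illustrative only, a rough sketch assuming the standard cinder CLI:

    # Create a QoS spec enforced at the hypervisor (front-end);
    # the limit values here are placeholders
    cinder qos-create ssd-tier consumer=front-end total_iops_sec=1000 total_bytes_sec=104857600

    # Create a volume type and associate the QoS spec with it
    cinder type-create ssd-tier
    cinder qos-associate <qos-spec-id> <volume-type-id>

    # New volumes of that type inherit the limits
    cinder create --volume-type ssd-tier --display-name test-vol 10

Because the front-end limits are enforced by the hypervisor on the attached
RBD volume, they are independent of how the underlying disks are grouped.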


If you are using Ceph as a storage backend for OpenStack, then I would suggest that redundancy is just as important as QoS (ahem - zero QoS if the storage is completely unavailable!). You really want to have 2 (or, even better, 3) replica copies of your data, which means at least 2 (or 3) OSDs on different machines.
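To make that concrete, the replica count is set per pool, and the default
CRUSH rule already places replicas on different hosts. A minimal sketch,
assuming the conventional "volumes" and "images" pool names from the
Ceph/OpenStack integration guide (values are examples only):

    # ceph.conf defaults for newly created pools
    # osd pool default size = 3
    # osd pool default min size = 2

    # Or per pool, for the pools backing Cinder and Glance
    ceph osd pool set volumes size 3
    ceph osd pool set images size 3

The default CRUSH rule uses the host as the failure domain, so with size 3
the replicas land on OSDs on three different machines.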

Regards

Mark

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



