A Few More Ceph Questions (Can an OSD be in multiple places? How do I resize an OSD?)

So I just have a few more questions that have come to mind.  Firstly, I have OSDi whose underlying filesystems can be...  dun dun dun...  resized!  If I choose to expand my allocation to Ceph, I can in theory do so by expanding the quota on the OSDi (I'm using ZFS).  Similarly, if an OSD is underutilized enough, I can in theory shrink it.  The question: how will Ceph respond to its OSDi spontaneously changing in size?  Do I have to make some modification to the config before restarting the OSDi, or shut them down, resize, and restart them?  Or are resized OSDi permanently lost, leaving my only option to replace them with larger OSDi and slowly migrate the data off the old ones?
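
For concreteness, here is roughly the sequence I had in mind for growing OSD.A (assuming it is osd.0 backed by a ZFS dataset I'll call tank/ceph-osd0; the dataset name, quota size, and CRUSH weight are made-up placeholders):

    zfs set quota=2T tank/ceph-osd0        # grow the allocation backing osd.0
    systemctl restart ceph-osd@0           # restart so the daemon re-reads the available space
    ceph osd df                            # verify the reported SIZE/AVAIL has changed
    ceph osd crush reweight osd.0 2.0      # bump the CRUSH weight to match the new size, if needed

(Shrinking would presumably be the same in reverse, after making sure the OSD's utilization fits under the new quota.)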

My other question is this:  I'm trying to limit my systems to effectively one OSD per pool.  I have one pool per VM host, and three VM hosts.  The hosts have an asymmetric link path between them: A can communicate with B or C at low latency and high throughput, but B<==>C is very slow and high latency for obscure, probably stupid, technical reasons.  Therefore, I want to place the OSDs (one per host) such that VMs on host B are mirrored only to OSD.B and OSD.A, and VMs on host C are mirrored only to OSD.C and OSD.A.  VMs on host A should be mirrored across all three OSDi, so as not to unduly overburden either of the two lower-powered hosts.  The tree should look something like this:

OSD.A
OSD.B
OSD.C

Host A
    OSD.A
    OSD.B
    OSD.C

Host B
    OSD.B
    OSD.A

Host C
    OSD.C
    OSD.A

Will this work?  Or will I be forced to run additional OSDi to distribute data like this?  I can figure the config out myself, but before testing it I want to be fairly certain that it will actually work as I intend, and not throw some error about OSDi appearing in multiple places in the config.
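
For what it's worth, here is a rough sketch of the kind of custom CRUSH map I was picturing; the bucket IDs, weights, and rule name are placeholders, and I haven't verified that an OSD may be listed under more than one host bucket:

    host hostA {
            id -2
            alg straw2
            hash 0
            item osd.0 weight 1.000    # OSD.A
            item osd.1 weight 1.000    # OSD.B
            item osd.2 weight 1.000    # OSD.C
    }
    host hostB {
            id -3
            alg straw2
            hash 0
            item osd.1 weight 1.000    # OSD.B
            item osd.0 weight 1.000    # OSD.A
    }
    host hostC {
            id -4
            alg straw2
            hash 0
            item osd.2 weight 1.000    # OSD.C
            item osd.0 weight 1.000    # OSD.A
    }

    rule pool_b_rule {
            id 1
            type replicated
            min_size 1
            max_size 2
            step take hostB
            step choose firstn 0 type osd
            step emit
    }

Pool B would then use pool_b_rule, and pools A and C would get analogous rules taking hostA and hostC, so each pool's replicas land only on the OSDi listed under its own host bucket.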

Thank you for taking the time to read and respond to my confounding questions, everyone, and thanks for making Ceph as awesome as it is today :)
