Re: pros/cons of multiple OSDs per host

On Mon, Aug 21, 2017 at 3:58 PM, Ronny Aasen <ronny+ceph-users@xxxxxxxx> wrote:
On 21. aug. 2017 07:40, Nick Tan wrote:
Hi all,

I'm in the process of building a Ceph cluster, primarily to use CephFS.  At this stage I'm in the planning phase and doing a lot of reading on best practices for building the cluster; however, there's one question that I haven't been able to find an answer to.

Is it better to use many hosts with single OSDs, or fewer hosts with multiple OSDs?  I'm looking at using 8 or 10 TB HDDs as OSDs, and hosts with up to 12 HDDs.  If a host with 12 x 10 TB HDDs dies, up to 120 TB of data will need to be recovered, whereas if smaller hosts with single HDDs are used, a host failure means at most 10 TB to recover.  So if the failure domain is the host, it looks better to use smaller hosts with single OSDs.
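
As a rough back-of-envelope sketch of that trade-off (Python; the 70% average OSD fullness is just an assumed figure, and this is not any official Ceph calculation):

# Rough sketch: how much data must be re-replicated when a whole host
# (the failure domain) dies, for different OSD counts per host.
# Drive size matches the 10 TB disks discussed above; the utilisation
# figure is an assumption, not a measurement.
DRIVE_TB = 10
UTILISATION = 0.7  # assumed average OSD fullness

def recovery_tb_on_host_failure(osds_per_host):
    """Data to re-replicate if one host with this many OSDs is lost."""
    return osds_per_host * DRIVE_TB * UTILISATION

for osds in (1, 4, 12):
    tb = recovery_tb_on_host_failure(osds)
    print(f"{osds:2d} OSDs/host -> ~{tb:.0f} TB to re-replicate")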

Are there other benefits or drawbacks of using many small servers with single OSDs vs fewer large servers with lots of OSDs?


One thing I did not see mentioned in previous emails is that 10 TB disks are often SMR disks. Those are not suited for Ceph unless your data is of the write-once, archive-forever type. This is not exactly a Ceph problem; it's more about how SMR disks deal with lots of random writes.

http://ceph.com/planet/do-not-use-smr-disks-with-ceph/
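
If you want to double-check a drive, the Linux kernel exposes a "zoned" attribute you can read. A rough Linux-only sketch in Python (note that drive-managed SMR disks usually still report "none" here, so this only catches host-aware/host-managed models; check the vendor datasheet to be sure):

# Rough sketch: print the kernel's zoned model for each sd* block device.
# Values are "none", "host-aware" or "host-managed"; drive-managed SMR
# is typically NOT reported, so "none" is not proof a disk is conventional.
from pathlib import Path

def zoned_model(dev):
    p = Path(f"/sys/block/{dev}/queue/zoned")
    return p.read_text().strip() if p.exists() else "unknown (no zoned attribute)"

for dev in sorted(d.name for d in Path("/sys/block").iterdir() if d.name.startswith("sd")):
    print(dev, zoned_model(dev))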


Generally more nodes are better (but more expensive), due to how Ceph spreads the load over all the nodes. Depending on your needs, you should not have too much of your data in a single node (eggs vs. baskets).
Large nodes are not "wrong" if your need is tons of archival data, but most people have more varied needs. Try to avoid having more than 10% of your data in a single node, and always have enough free space to deal with the loss of a whole node. If you can have a cold standby node, you could just as well plug it into the cluster; it would improve performance since you'd have more nodes to spread the load on.
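
As a quick sanity check of those two rules of thumb (purely a sketch with made-up example numbers, not an official Ceph calculation):

# Rough sketch: check that no node holds more than ~10% of raw capacity
# and that the cluster can re-replicate a whole node's data after a node
# failure without exceeding a chosen full ratio. All numbers are examples.
NODE_RAW_TB = [120] * 10   # per-host raw capacity in TB (example values)
USED_FRACTION = 0.60       # assumed current cluster fullness
FULL_RATIO = 0.85          # don't push surviving OSDs past this after recovery

total = sum(NODE_RAW_TB)
largest = max(NODE_RAW_TB)
print(f"largest node holds {largest / total:.0%} of raw capacity (aim for <= ~10%)")

# After losing the largest node, the data it held must be re-replicated
# onto the remaining nodes without pushing them past the full ratio.
used = total * USED_FRACTION
remaining_raw = total - largest
print("can absorb loss of largest node:", used <= remaining_raw * FULL_RATIO)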

kind regards
Ronny Aasen

Thanks, Ronny, for your advice.  I've used SMR disks before and will definitely be avoiding them for this project.

Thanks,
Nick
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
