Re: pros/cons of multiple OSD's per host

It is not recommended to fill your cluster beyond about 70% of raw capacity, to leave headroom for rebalancing and recovery. At that level, a host with 12 x 10TB disks effectively provides only 84TB of usable space. I still think the most important aspect of what is best for you hasn't been provided: none of us know what type of CephFS usage you are planning. Are you writing once and reading forever? Using this for home directories? Doing processing of files in it? Each workload is different and would have different hardware and configuration requirements.
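The capacity math above can be sketched quickly (the drive count, drive size, and 70% fill ceiling are the assumed values from this thread, not properties of any real cluster):

```python
# Usable capacity per host under a 70% fill ceiling.
# All figures are the hypothetical ones discussed in this thread.
drives_per_host = 12
drive_tb = 10
max_fill = 0.70  # recommended maximum fill level before rebalancing becomes risky

raw_tb = drives_per_host * drive_tb   # 120 TB raw per host
usable_tb = raw_tb * max_fill         # 84 TB effectively usable at 70%
print(f"raw: {raw_tb} TB, usable at {max_fill:.0%}: {usable_tb:.0f} TB")
```

Note this is before replication or erasure-coding overhead, which reduces usable space further.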

On Mon, Aug 21, 2017 at 6:31 AM John Spray <jspray@xxxxxxxxxx> wrote:
On Mon, Aug 21, 2017 at 6:40 AM, Nick Tan <nick.tan@xxxxxxxxx> wrote:
> Hi all,
>
> I'm in the process of building a ceph cluster, primarily to use cephFS.  At
> this stage I'm in the planning phase and doing a lot of reading on best
> practices for building the cluster, however there's one question that I
> haven't been able to find an answer to.
>
> Is it better to use many hosts with single OSD's, or fewer hosts with
> multiple OSD's?  I'm looking at using 8 or 10TB HDD's as OSD's and hosts
> with up to 12 HDD's.  If a host dies, that means up to 120TB of data will
> need to be recovered if the host has 12 x 10TB HDD's.  But if smaller hosts
> with single HDD's are used then a single host failure will result in only a
> maximum of 10TB to be recovered, so in this case it looks better to use
> smaller hosts with single OSD's if the failure domain is the host.
>
> Are there other benefits or drawbacks of using many small servers with
> single OSD's vs fewer large servers with lots of OSD's?

Think of it in percentage terms: what percentage of my system am I
comfortable with having offline in the event that a host dies?

At one extreme, you might have a three node cluster in which losing
even one host is a 33% loss of capacity which will probably severely
impact your workloads.

At the other extreme, if you have 1000 servers, then even if each
server has 32 drives, losing one server is still only losing 0.1% of
your capacity.

So as usual, the answer is "it depends".
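The percentage argument above can be made concrete with a minimal sketch (assuming hosts of uniform capacity; the 3-host and 1000-host figures are the examples from this reply):

```python
def capacity_offline_fraction(num_hosts: int) -> float:
    """Fraction of cluster capacity offline when one host dies,
    assuming all hosts contribute equal capacity."""
    return 1.0 / num_hosts

# Three-node cluster: one host failure takes out ~33% of capacity.
print(f"{capacity_offline_fraction(3):.1%}")
# 1000 servers: one host failure is only 0.1%, regardless of
# how many drives each server holds.
print(f"{capacity_offline_fraction(1000):.1%}")
```

The fraction depends only on the host count, not the drives per host, which is why the host failure domain dominates this decision.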

John

> Thanks,
> Nick
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
