Re: OSD host count affecting available pool size?

Hi,

I'm not sure I understand what your interpretation is.
If you have 30 OSDs with 1 TB each, you end up with 30 TB of available (raw) space, no matter whether those OSDs are spread across 3 or 10 hosts. The CRUSH rules you define determine how the replicas are distributed across your OSDs. A default replicated rule with "size = 3" results in 10 TB of usable space (given your example); for simplicity I'm not taking RocksDB sizes etc. into account.
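
Just to make the arithmetic explicit, here is a quick Python sketch (the numbers are from your example, the replica size is just the default of 3, and all overhead is ignored):

# Rough usable-capacity arithmetic for a replicated pool.
# Ignores RocksDB/WAL overhead, full ratios and uneven PG balancing.
osd_count = 30        # total OSDs, regardless of how many hosts they sit on
osd_size_tb = 1.0     # capacity per OSD in TB
replica_size = 3      # "size" of the replicated pool

raw_tb = osd_count * osd_size_tb      # 30 TB raw
usable_tb = raw_tb / replica_size     # ~10 TB usable with size=3

print(f"raw: {raw_tb:.0f} TB, usable with size={replica_size}: {usable_tb:.1f} TB")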

Is Ceph limiting the max available pool size because all of my OSDs are
being hosted on just three nodes? If I had 30 OSDs running across ten nodes
instead, a node failure would result in just three OSDs dropping out
instead of ten.

So the answer would be no, the pool size is defined by the available OSD capacity (of a device class) divided by the replica count. What you gain from having more servers with fewer OSDs each is higher failure resiliency, and you're more flexible in terms of data placement (e.g. you can use erasure coding to save space). The load during recovery of one failed node with 10 OSDs is much higher than having to recover only 3 OSDs on one node; in the latter case the clients probably wouldn't even notice. Having more nodes also improves performance if you have many clients, since there are more OSD nodes to talk to; Ceph scales out quite well.
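
To put some numbers on the resiliency and erasure-coding point (again only a sketch; the EC profile k=4, m=2 is just an example and would itself need at least k+m=6 hosts with a host failure domain):

# How much of the cluster has to be recovered when one host fails,
# and usable space for replication vs. an example erasure-coding profile.
osd_count = 30
raw_tb = osd_count * 1.0

for hosts in (3, 10):
    osds_per_host = osd_count // hosts
    print(f"{hosts} hosts: one host failure takes out {osds_per_host} OSDs "
          f"({osds_per_host / osd_count:.0%} of the cluster)")

# Replication vs. erasure coding (example profile k=4, m=2):
k, m = 4, 2
print(f"replicated size=3: {raw_tb / 3:.1f} TB usable")
print(f"EC {k}+{m}: {raw_tb * k / (k + m):.1f} TB usable")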

If this doesn't answer your question, could you please clarify?

Regards,
Eugen


Quoting Dallas Jones <djones@xxxxxxxxxxxxxxxxx>:

Ah, this is the sentence in the docs I'd overlooked before:

When you deploy OSDs they are automatically placed within the CRUSH map
under a host node named with the hostname for the host they are running on.
This, combined with the default CRUSH failure domain, ensures that replicas
or erasure code shards are separated across hosts and a single host failure
will not affect availability.

I think this means what I thought it would mean - having the OSDs
concentrated onto fewer hosts is limiting the volume size...

On Mon, Oct 19, 2020 at 9:08 AM Dallas Jones <djones@xxxxxxxxxxxxxxxxx>
wrote:

Hi, Ceph brain trust:

I'm still trying to wrap my head around some capacity planning for Ceph,
and I can't find a definitive answer to this question in the docs (at least
one that penetrates my mental haze)...

Does the OSD host count affect the total available pool size? My cluster
consists of three 12-bay Dell PowerEdge machines running reflashed PERCs to
make each SAS drive individually addressable. Each node is running 10 OSDs.

Is Ceph limiting the max available pool size because all of my OSDs are
being hosted on just three nodes? If I had 30 OSDs running across ten nodes
instead, a node failure would result in just three OSDs dropping out
instead of ten.

Is there any rationale to this thinking, or am I trying to manufacture a
solution to a problem I still don't understand?

Thanks,

Dallas


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


