Re: OSD host count affecting available pool size?

Ah, I'd overlooked this sentence in the docs before:

When you deploy OSDs they are automatically placed within the CRUSH map
under a host node named with the hostname for the host they are running on.
This, combined with the default CRUSH failure domain, ensures that replicas
or erasure code shards are separated across hosts and a single host failure
will not affect availability.

I think this means what I suspected it would mean - having the OSDs
concentrated onto fewer hosts is limiting the available pool size...
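
If it helps anyone else, here is a rough back-of-the-envelope sketch I put
together (plain Python, made-up capacities, NOT Ceph's actual MAX AVAIL
calculation, which also accounts for the fullest OSD, the full ratio, and
PG imbalance) of how a host failure domain bounds a replicated pool's
usable capacity:

# Rough upper bound on user data for a replicated pool whose CRUSH
# failure domain is "host". Assumes perfectly even data distribution;
# real MAX AVAIL will be lower.
def usable_capacity(host_raw_tb, replica_size=3):
    if len(host_raw_tb) < replica_size:
        return 0.0  # CRUSH cannot place all replicas on distinct hosts
    # A host holds at most one replica of any object, so with D TB of
    # user data it stores at most min(its_capacity, D). Find the largest
    # D satisfying sum(min(c, D)) >= replica_size * D by bisection.
    lo, hi = 0.0, sum(host_raw_tb) / replica_size
    for _ in range(60):
        mid = (lo + hi) / 2
        if sum(min(c, mid) for c in host_raw_tb) >= replica_size * mid:
            lo = mid
        else:
            hi = mid
    return lo

print(usable_capacity([40, 40, 40]))  # ~40 TB: three equal hosts, total/3
print(usable_capacity([40, 40, 8]))   # ~8 TB: the smallest host caps it
print(usable_capacity([40, 40]))      # 0.0: fewer hosts than replicas

So with exactly three hosts and size=3, every host ends up carrying a full
copy of the data, and the smallest (or fullest) host is what caps the pool,
which matches the failure-domain behaviour described in that doc passage.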

On Mon, Oct 19, 2020 at 9:08 AM Dallas Jones <djones@xxxxxxxxxxxxxxxxx>
wrote:

> Hi, Ceph brain trust:
>
> I'm still trying to wrap my head around some capacity planning for Ceph,
> and I can't find a definitive answer to this question in the docs (at least
> one that penetrates my mental haze)...
>
> Does the OSD host count affect the total available pool size? My cluster
> consists of three 12-bay Dell PowerEdge machines running reflashed PERCs to
> make each SAS drive individually addressable. Each node is running 10 OSDs.
>
> Is Ceph limiting the max available pool size because all of my OSDs are
> being hosted on just three nodes? If I had 30 OSDs running across ten nodes
> instead, a node failure would result in just three OSDs dropping out
> instead of ten.
>
> Is there any rationale to this thinking, or am I trying to manufacture a
> solution to a problem I still don't understand?
>
> Thanks,
>
> Dallas
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


