OSD host count affecting available pool size?

Hi, Ceph brain trust:

I'm still trying to wrap my head around capacity planning for Ceph, and I
can't find a definitive answer to this question in the docs (at least not
one that penetrates my mental haze)...

Does the OSD host count affect the total available pool size? My cluster
consists of three 12-bay Dell PowerEdge machines with reflashed PERCs so
that each SAS drive is individually addressable. Each node runs 10 OSDs.

Is Ceph limiting the maximum available pool size because all of my OSDs
are hosted on just three nodes? If those same 30 OSDs were spread across
ten nodes instead, a node failure would take out only three OSDs rather
than ten.
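
To make my own reasoning concrete, here is a rough back-of-envelope sketch
in Python. The numbers are mine, not anything Ceph reports: I'm assuming a
replicated pool with size=3, a CRUSH failure domain of host, and a made-up
4 TB per drive purely for illustration.

    def usable_capacity_tb(hosts, osds_per_host, tb_per_osd, replica_size=3):
        # Rough usable capacity: raw space divided by the replica count.
        # With failure domain = host, each of the replica_size copies must
        # land on a different host, so the pool needs at least that many hosts.
        raw_tb = hosts * osds_per_host * tb_per_osd
        return raw_tb / replica_size

    def failure_impact(hosts, osds_per_host):
        # Fraction of the cluster's OSDs lost when a single host goes down.
        return osds_per_host / (hosts * osds_per_host)

    # Current layout: 3 hosts x 10 OSDs.
    print(usable_capacity_tb(3, 10, 4), failure_impact(3, 10))   # 40.0 TB, ~0.33
    # Hypothetical layout: 10 hosts x 3 OSDs (same 30 OSDs total).
    print(usable_capacity_tb(10, 3, 4), failure_impact(10, 3))   # 40.0 TB, 0.10

If that arithmetic is roughly right, spreading the same OSDs over more hosts
changes how much of the cluster a single node failure takes out, not the
headline capacity, but that's exactly the part I'm not sure about.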

Is there any merit to this line of thinking, or am I trying to manufacture
a solution to a problem I don't yet understand?

Thanks,

Dallas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


