Re: pros/cons of multiple OSD's per host

On Mon, Aug 21, 2017 at 8:57 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
It is not recommended to fill your cluster beyond about 70% due to rebalancing headroom and various other reasons. At 70% full, your 12x 10TB disks per host would yield only 84TB of usable space. I still think the most important aspect of what is best for you hasn't been provided, as none of us know what type of CephFS usage you are planning. Are you writing once and reading forever? Using this for home directories? Doing processing of files in it? Etc. Each use case is different and would have different hardware and configuration requirements.
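[Editorial note: the capacity arithmetic above can be sketched as a quick back-of-envelope calculation. The 70% fill target and the 12x 10TB host layout come from the thread; the function name and structure are illustrative, not from any Ceph tooling.]

```python
# Back-of-envelope: usable capacity of a host after applying a target
# fill ratio. Figures from the thread: 12 x 10 TB disks, 70% fill target.
def usable_capacity_tb(disks: int, disk_tb: float, fill_ratio: float = 0.70) -> float:
    """Raw capacity of one host scaled by the recommended fill ratio."""
    return disks * disk_tb * fill_ratio

# 12 x 10 TB = 120 TB raw; at a 70% target that is ~84 TB usable per host.
```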



Hi David,

The planned usage for this CephFS cluster is scratch space for an image processing cluster with 100+ processing nodes.  My thinking is that we'd be better off with a large number (100+) of storage hosts with 1-2 OSDs each, rather than 10 or so storage nodes with 10+ OSDs each, to get better parallelism, but I don't have enough practical experience with CephFS to judge.  And I don't have enough hardware to set up a test cluster of any significant size to run actual tests.
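[Editorial note: one concrete way to reason about the many-small-hosts vs. few-large-hosts trade-off is the fraction of cluster data that must rebalance when a single host fails. This is a simplification that assumes data is spread evenly across hosts with the host as the failure domain; the function is illustrative only.]

```python
# Back-of-envelope: with the host as the failure domain and data spread
# evenly, losing one host forces roughly 1/num_hosts of the cluster's
# data to rebalance onto the survivors.
def rebalance_fraction(num_hosts: int) -> float:
    """Approximate fraction of data affected by one host failure."""
    return 1.0 / num_hosts

# 100+ small hosts: losing one touches ~1% of the data.
# ~10 large hosts: losing one touches ~10% of the data.
```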

Thanks,
Nick
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
