Re: striping for a small cluster

Hello,

On Wed, 15 Jun 2016 00:22:51 +0000 pixelfairy wrote:

> We have a small cluster: 3 mons, each of which also has 6 x 4 TB OSDs, and
> a 20 Gb link to the cluster (2x10 Gb LACP to a stacked pair of switches).
> We'll have at least one replicated pool (size=3) and one erasure-coded pool.

I'm neither particularly knowledgeable about nor a fan of EC pools, but
keep in mind that the coding profile is constrained by the number of OSD
nodes, so 3 doesn't give you many options, IIRC.
In fact, with 3 hosts you're effectively limited to the equivalent of
RAID5, which only sustains the loss of one OSD/disk, something nobody in
their right mind does these days.
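For reference, a minimal sketch of the only really workable EC layout on
3 hosts; the profile and pool names and PG count are placeholders, not
recommendations:

```shell
# k=2 data chunks + m=1 coding chunk with a host-level failure domain:
# RAID5-like, 1.5x raw space usage, survives exactly one host failure.
# Note: Jewel-era releases use ruleset-failure-domain; later releases
# renamed the option to crush-failure-domain.
ceph osd erasure-code-profile set ec-k2m1 \
    k=2 m=1 ruleset-failure-domain=host

# Create an EC pool using that profile (64 PGs, illustrative value).
ceph osd pool create ecpool 64 64 erasure ec-k2m1
```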

> current plan is to have journals coexist with osds as that seems to be
> the safest and most economical.
> 
You will be thoroughly disappointed by the performance if you do this,
unless your use case is something like a backup server with very few
random I/Os: with filestore, every write hits both the journal and the
data partition on the same spindle, roughly halving sustained throughput.
Any performance optimization discussion will point you at journal SSDs
first.
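If you do evaluate journal SSDs, the usual first step is to test a
candidate device with fio using the journal's write pattern: synchronous
4k writes at queue depth 1. The device path below is a placeholder, and
this writes directly to the raw device:

```shell
# Sync 4k writes, queue depth 1 -- the filestore journal pattern.
# A good journal SSD (with power-loss protection) sustains tens of
# thousands of IOPS here; many consumer SSDs manage only a few hundred.
# WARNING: destroys all data on the target device.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
```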

> what levels of striping would you recommend for this size cluster? any
> other optimization considerations? looking for a starting point to work
> from.
> 
Striping is one of the last things to ponder.
Not only does it depend a LOT on your use case, it also cannot be changed
after creation, so getting it right for the initial size and future
growth is an interesting challenge.
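When you do get there, striping is set per RBD image at creation time.
A hypothetical example, with the pool/image names and all values chosen
purely for illustration:

```shell
# 10 GB image whose 4 MB objects are written in 64 KB stripe units
# across 16 objects at a time. --stripe-unit takes bytes on Jewel-era
# rbd. Once the image exists, these parameters cannot be changed.
rbd create rbdpool/testimage --size 10240 \
    --stripe-unit 65536 --stripe-count 16
```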

> also, any recommendations for testing / benchmarking these
> configurations?
> 
> so far, looking at
> https://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
> BSD rebuilding itself, and maybe Phoronix.
>
Those benchmarks are very much outdated, both in terms of Ceph versions
and capabilities and in the tools used (fio has been the most common
benchmark tool for some time now).
Once BlueStore comes along (in a year or so), there will be another
performance and HW design shift.
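For a current baseline, `rados bench` ships with Ceph and gives a quick
cluster-level number before you move on to fio. A sketch, with the pool
name as a placeholder:

```shell
# 60 seconds of 4 MB object writes, keeping the objects around
# so they can be read back afterwards.
rados bench -p testpool 60 write --no-cleanup

# Sequential read-back of the objects just written.
rados bench -p testpool 60 seq

# Remove the benchmark objects when done.
rados -p testpool cleanup
```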

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


