Re: Crushmap rule for multi-datacenter erasure coding

Mark Nelson's space amp sheet visualizes this really well.  A nuance here is that Ceph always writes a full stripe, so with a 9,6 profile, on conventional media, a minimum of 15x4KB=60KB of underlying storage will be consumed, even for a 1KB object.  A 22KB object would similarly tie up the full ~60KB, since it still fits within a single 36KB data stripe.  As the object size increases, this remainder overhead drops off quite quickly.  It is an important consideration when using, say, QLC SSDs with an 8, 16, or even 64KB IU size, where there are good reasons to set min_alloc_size to match.
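For concreteness, here's a rough back-of-the-envelope sketch of that rounding in Python (a simplified model under the assumptions above; the function name and the single-stripe-unit layout are mine, not Ceph's actual allocator logic):

# Estimate underlying storage consumed by one object in an EC pool.
# Assumes a 4KB stripe unit and that each shard write is rounded up
# to min_alloc_size, per the full-stripe behavior described above.
import math

def allocated_bytes(object_size, k, m, stripe_unit=4096, min_alloc=4096):
    stripe_data = k * stripe_unit                      # data capacity of one full stripe
    stripes = max(1, math.ceil(object_size / stripe_data))
    per_shard = max(stripe_unit, min_alloc)            # each shard write rounds up
    return stripes * (k + m) * per_shard               # every stripe touches all k+m shards

print(allocated_bytes(1024, k=9, m=6) // 1024)        # 1KB object  -> 60 (KB)
print(allocated_bytes(22 * 1024, k=9, m=6) // 1024)   # 22KB object -> 60 (KB)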

If compression is enabled, this can be exacerbated as well, since compressed data is still allocated in whole min_alloc_size units.
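A quick sketch of that quantization (the helper below is hypothetical, not a BlueStore API):

import math

def compressed_allocated(raw_bytes, ratio, min_alloc=65536):
    # Round the compressed size up to whole allocation units.
    return math.ceil(raw_bytes * ratio / min_alloc) * min_alloc

# A 64KB blob compressing 2:1 on a 64KB min_alloc device saves nothing:
print(compressed_allocated(65536, 0.5))  # -> 65536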

Large parity groups can also result in lower overall write performance, since every write fans out to more OSDs.





Bluestore Space Amplification Cheat Sheet:
https://docs.google.com/spreadsheets/d/1rpGfScgG-GLoIGMJWDixEkqs-On9w8nAUToPQjN8bDI/edit#gid=358760253

> 
>> As you can see, the larger N the smaller the overhead. The downside is larger stripes, meaning that larger N only make sense

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



