Re: Impact of a small DB size with Bluestore

Hi,

Tue, 26 Nov 2019 13:57:51 +0000
Simon Ironside <sironside@xxxxxxxxxxxxx> ==> ceph-users@xxxxxxxxxxxxxx :
> Mattia Belluco said back in May:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
> 
> "when RocksDB needs to compact a layer it rewrites it
> *before* deleting the old data; if you'd like to be sure your db does not
> spill over to the spindle you should allocate twice the size of the
> biggest layer to allow for compaction."
> 
> I didn't spot anyone disagreeing so I used 64GiB DB/WAL partitions on 
> the SSDs in my most recent clusters to allow for this and to be certain 
> that I definitely had room for the WAL on top and wouldn't get caught 
> out by people saying GB (x1000^3 bytes) when they mean GiB (x1024^3 
> bytes). I left the rest of the SSD empty to make the most of wear 
> leveling, garbage collection etc.
> 
> Simon


this is something I would like to get a comment on from a developer, too.
So, what about doubling the size of block_db to leave room for compaction?
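
For illustration, my rough reading of that rule as arithmetic (a sketch
only, assuming RocksDB's stock level targets of max_bytes_for_level_base
= 256 MB with a level multiplier of 10, and a hypothetical 2 GiB WAL
allowance; the actual values depend on bluestore_rocksdb_options):

GB = 1000**3   # decimal GB
GiB = 1024**3  # binary GiB

base = 256 * 1024**2   # assumed max_bytes_for_level_base (L1 target)
multiplier = 10        # assumed max_bytes_for_level_multiplier
wal = 2 * GiB          # hypothetical WAL allowance, not a Ceph default

# Target sizes of levels L1..L3; only whole levels fit on the fast device.
levels = [base * multiplier**n for n in range(3)]
biggest = levels[-1]   # ~26.8 GB at L3

# Mattia's rule: double the biggest level so its compaction rewrite still
# fits on the DB device instead of spilling to the spindle, then add WAL.
needed = 2 * biggest + wal

print([f"{l / GB:.2f} GB" for l in levels])  # ['0.27 GB', '2.68 GB', '26.84 GB']
print(f"needed: {needed / GiB:.1f} GiB")     # needed: 52.0 GiB

Under those assumptions a 64 GiB partition covers the doubled L3 plus the
WAL with some slack, which matches Simon's sizing above. Is that the right
way to read it?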

Thanks
Lars

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


