Re: Impact of a small DB size with Bluestore

Hi Lars,

I've also seen interim space usage bursts during my experiments: up to 2x the maximum level size when the topmost RocksDB level is L3 (i.e. 25 GB max). So I think 2x (which results in 60-64 GB for the DB) is a good rule of thumb when your DB is expected to be small or medium sized. I'm not sure this multiplier is appropriate for large systems where L4 (250 GB max) is expected, since it results in a pretty large spare volume, but I don't have any real experience with that case.
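For illustration, here is a minimal Python sketch of that arithmetic, assuming BlueStore's default RocksDB tuning (a 256 MB level base and a 10x per-level multiplier); the level sizes and the 2x headroom factor are the ones mentioned above, not an official sizing formula:

    # Back-of-the-envelope sizing sketch for the 2x rule discussed above.
    # Assumes max_bytes_for_level_base = 256 MB and a 10x level multiplier;
    # real deployments may be tuned differently.

    BASE_GB = 0.256     # L1 target size in GB (256 MB)
    MULTIPLIER = 10     # each level is 10x the previous one

    def db_partition_estimate(topmost_level: int, headroom: float = 2.0) -> float:
        """Suggested DB partition size in GB for a given topmost RocksDB level."""
        level_sizes = [BASE_GB * MULTIPLIER ** i for i in range(topmost_level)]
        return headroom * sum(level_sizes)

    if __name__ == "__main__":
        for top in (3, 4):
            print(f"topmost level L{top}: ~{db_partition_estimate(top):.0f} GB with 2x headroom")
        # topmost level L3: ~57 GB   (in line with the 60-64 GB figure above)
        # topmost level L4: ~569 GB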

FYI: one can learn the per-device maximum BlueFS space allocated since the OSD restart from the following BlueFS performance counters:

    l_bluefs_max_bytes_wal,
    l_bluefs_max_bytes_db,
    l_bluefs_max_bytes_slow,

which might give some insight into your system's real needs.
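For example, you could poll them via the OSD admin socket with "ceph daemon osd.<id> perf dump". A minimal sketch follows; the counter names ("max_bytes_wal", "max_bytes_db", "max_bytes_slow") and the "bluefs" section of the dump are assumed from the list above, so verify them against your release's output:

    # Hedged sketch: read the BlueFS high-water marks from a running OSD's
    # admin socket. Counter names under the "bluefs" section are assumptions
    # based on the l_bluefs_* counters listed above.
    import json
    import subprocess

    def bluefs_max_bytes(osd_id: int) -> dict:
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
        bluefs = json.loads(out).get("bluefs", {})
        return {k: bluefs.get(k)
                for k in ("max_bytes_wal", "max_bytes_db", "max_bytes_slow")}

    if __name__ == "__main__":
        # run on the host where osd.0 lives
        print(bluefs_max_bytes(0))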


Thanks,

Igor


On 12/2/2019 10:55 AM, Lars Täuber wrote:
Hi,

Tue, 26 Nov 2019 13:57:51 +0000
Simon Ironside <sironside@xxxxxxxxxxxxx> ==> ceph-users@xxxxxxxxxxxxxx :
Mattia Belluco said back in May:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html

"when RocksDB needs to compact a layer it rewrites it
*before* deleting the old data; if you'd like to be sure your db does not
spill over to the spindle you should allocate twice the size of the
biggest layer to allow for compaction."

I didn't spot anyone disagreeing so I used 64GiB DB/WAL partitions on
the SSDs in my most recent clusters to allow for this and to be certain
that I definitely had room for the WAL on top and wouldn't get caught
out by people saying GB (x1000^3 bytes) when they mean GiB (x1024^3
bytes). I left the rest of the SSD empty to make the most of wear
leveling, garbage collection etc.

Simon

this is something I would also like to get a developer's comment on.
So what about the doubled size for block_db?

Thanks
Lars

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com