Re: Impact of a small DB size with Bluestore

It's mentioned here, among other places:
https://books.google.se/books?id=vuiLDwAAQBAJ&pg=PA79&lpg=PA79&dq=rocksdb+sizes+3+30+300+g&source=bl&ots=TlH4GR0E8P&sig=ACfU3U0QOJQZ05POZL9DQFBVwTapML81Ew&hl=en&sa=X&ved=2ahUKEwiPscq57YfmAhVkwosKHY1bB1YQ6AEwAnoECAoQAQ#v=onepage&q=rocksdb%20sizes%203%2030%20300%20g&f=false

The 4% was a quick ballpark figure someone came up with to give early adopters a decent start, but later analysis of the RocksDB levels (L0, L1, L2, ...) showed that 3, 30 and 300 GB are the "optimal" sizes if you don't want SSD space that will never be used.
You can set 240 GB, but it will not be better than 30. It will be better than 24, so "not super bad, but not optimal".
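The "not better than 30" effect can be sketched as follows. This is a simplification, not Ceph code: the 3/30/300 GB thresholds come from the thread and from RocksDB's default level-size multiplier; exact behaviour depends on the RocksDB options and the Ceph release.

```python
# Rough sketch of why a 240 GB block.db behaves like ~30 GB.
# A RocksDB level only lives on the fast (SSD) partition if the
# whole level fits; otherwise it spills to the slow (HDD) device.
LEVEL_SIZES_GB = [3, 30, 300]  # assumed default level targets

def effective_db_gb(partition_gb):
    """Return the largest level target that fully fits on the partition."""
    usable = 0
    for level in LEVEL_SIZES_GB:
        if level <= partition_gb:
            usable = level
        else:
            break
    return usable

for size in (24, 30, 75, 240, 300):
    print(f"{size:>4} GB partition -> ~{effective_db_gb(size)} GB effectively used")
```

Under this model a 24 GB partition effectively gives you 3 GB, while anything from 30 up to 299 GB effectively gives you 30 GB, which matches the "240 is not better than 30" claim above.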


On Tue, 26 Nov 2019 at 12:18, Vincent Godin <vince.mlist@xxxxxxxxx> wrote:
The documentation tells you to size the DB at 4% of the disk's data, i.e. 240 GB
for a 6 TB disk. Please give more explanation when your answer disagrees
with the documentation!

On Mon, 25 Nov 2019 at 11:00, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
>
> I have a Ceph cluster which was designed for FileStore. Each host
> has 5 write-intensive SSDs of 400 GB and 20 HDDs of 6 TB, so each HDD
> has a 5 GB WAL on SSD.
> If I want to move this cluster to BlueStore, I can only allocate ~75 GB
> of WAL and DB on SSD for each HDD, which is far below the 4% figure of
> 240 GB (for 6 TB).
> In the doc, I read "It is recommended that the block.db size isn’t
> smaller than 4% of block. For example, if the block size is 1TB, then
> block.db shouldn’t be less than 40GB."
> Is the 4% mandatory? What should I expect? Only relatively slow
> performance, or real problems with such a configuration?
>
> You should use no more than 1 GB for WAL and 30 GB for RocksDB. Sizes other than 3, 30, 300 (GB) for block.db are useless.
>
>
>
> k
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
May the most significant bit of your life be positive.