Re: Advice on sizing WAL/DB cluster for Optane and SATA SSD disks.

On 3/16/2020 3:25 PM, vitalif@xxxxxxxxxx wrote:
Hi Victor,

1) RocksDB doesn't put L4 on the fast device if the DB partition is smaller than ~286 GB, so no. But anyway, there's usually no L4, so 30 GB is usually sufficient. I had ~17 GB block.db volumes even for 8 TB hard drives used for RBD... RGW probably uses slightly more if the stored objects are small, but you're still unlikely to overflow a 30 GB partition with 2 TB OSDs.
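
(For what it's worth, here is roughly where the ~30 GB and ~286 GB figures seem to come from, assuming the default RocksDB settings of a 256 MB level base and a 10x level multiplier; treat the exact numbers as approximate:

  L1 ~ 0.256 GB
  L2 ~  2.56 GB
  L3 ~  25.6 GB   -> L1+L2+L3 ~ 28.4 GB, hence the ~30 GB recommendation
  L4 ~   256 GB   -> L1+L2+L3+L4 plus ~1-2 GB of WAL ~ 286 GB

As I understand it, BlueFS only places a whole level on the fast device if it fits, so partition sizes between those steps mostly just buy headroom rather than an extra level.)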

As Janne already mentioned, and given the plenty of available space on the NVMes, I would recommend keeping some spare space above 30 GB for the DB. In my lab I have observed up to 100% transient overshoot in DB usage under peak load when running some (pretty artificial!) benchmarks.

IMO 64 GB is the perfect size for a combined WAL/DB volume for almost any OSD, except ones that have a huge main device behind them and handle heavy RGW load; 300+ GB is required in that case.



2) WAL is the on-disk mirror of RocksDB memtables, its size is defined by bluestore_rocksdb_options. Default is max_write_buffer_number=4, write_buffer_size=256MB, thus 1GB. You don't even need to split wal and db partitions if they're on the same device. Calculations are here: https://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing
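
(In case it helps, the effective settings and the actual WAL/DB usage can be checked on a running cluster along these lines; osd.0 is just a placeholder, and counter/option names may differ slightly between Ceph releases:

  # effective RocksDB options (write_buffer_size, max_write_buffer_number, ...)
  # run on the host that carries osd.0, via the admin socket
  ceph daemon osd.0 config get bluestore_rocksdb_options

  # actual BlueFS usage; look at the "bluefs" section for
  # db_total_bytes / db_used_bytes / wal_used_bytes
  ceph daemon osd.0 perf dump

  # which partitions/LVs back block, block.db and block.wal of each OSD
  ceph-volume lvm list
)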

Hi,

Vitaliy - Sure, I can use those absolute values (30GB for DB, 2GB for
WAL) you suggested.

Currently - Proxmox is defaulting to a 178.85 GB partition for the
DB/WAL. (It seems to put the DB and WAL on the same partition).

Using your calculations, with 6 x OSDs per host, that means 180 GB for the
DB and 12 GB for the WAL, i.e. 192 GB in total per host. (The Optane drive
is 960 GB in capacity.)
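
(For reference, in case I want to set those sizes explicitly instead of
taking the Proxmox default: I believe pveceph accepts explicit DB/WAL sizes
at OSD creation time. The flags below are from my reading of the PVE 6 docs,
so please correct me if they're wrong, or check "pveceph help osd create" first.

  # sizes are in GiB; device names here are just placeholders
  pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 64
  # a separate WAL partition is optional; if omitted, the WAL lives
  # inside the DB volume
  # pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 64 --wal_dev /dev/nvme0n1 --wal_size 2
)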

Question 1 - Are there any advantages to using larger DB partition
than 30GB, or larger WAL than 2GB? (Just thinking how to best use the
entire Optane drive if possible).

Question 2 - How do I check the WAL size in Ceph? (Proxmox seems to be
putting the WAL on the same partition as the DB, but I don't know
where its size is specified).

Thanks,
Victor
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


