Re: Ceph WAL/DB disks - do they really only use 3GB, or 30GB, or 300GB

Hi Victor,

that's true for Ceph releases prior to Octopus; the latter includes improvements in this area.

There is a pending backport PR to fix that in Nautilus as well:

https://github.com/ceph/ceph/pull/33889


AFAIR this topic has been discussed on this mailing list multiple times.


Thanks,

Igor


On 3/27/2020 10:56 PM, victorhooi@xxxxxxxxx wrote:
Hi,

I'm using Intel Optane disks to provide WAL/DB capacity for my Ceph cluster (which is part of Proxmox - for VM hosting).

I've read that WAL/DB partitions only use 3 GB, 30 GB, or 300 GB, due to the way that RocksDB works.

Is this true?

My current partition for WAL/DB is 145 GB - does this mean that 115 GB of that will be permanently wasted?

Is this behaviour documented somewhere, or is there some background, so I can understand a bit more about how it works?

Thanks,
Victor
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


