Re: SSD Sizing for DB/WAL: 4% for large drives?

Hi Jake,

just my 2 cents - I'd suggest using LVM for the DB/WAL so you can seamlessly extend their sizes later if needed.

Once you've set things up this way, and provided you can add more NVMe later, you're almost free to pick any size at the initial stage.
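
For illustration only, here's roughly what I mean, as a minimal sketch (the device, VG and LV names - /dev/nvme0n1, /dev/sda, ceph-db, db-osd0 - and the sizes are just placeholders, not a recommendation):

  # Carve the NVMe into a VG with one DB/WAL LV per OSD:
  pvcreate /dev/nvme0n1
  vgcreate ceph-db /dev/nvme0n1
  lvcreate -L 266G -n db-osd0 ceph-db

  # Create the OSD with that LV as its block.db (the WAL lives there too by default):
  ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db/db-osd0

  # Later, after adding another NVMe to the VG, grow the LV and (with the OSD
  # stopped) let BlueFS pick up the extra space:
  vgextend ceph-db /dev/nvme1n1
  lvextend -L +214G ceph-db/db-osd0
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0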


Thanks,

Igor


On 5/28/2019 4:13 PM, Jake Grimmett wrote:
Dear All,

Quick question regarding SSD sizing for a DB/WAL...

I understand 4% is generally recommended for a DB/WAL.

Does this 4% still apply to "large" 12TB drives, or can we economise and
use a smaller DB/WAL?

Ideally I'd fit a smaller drive, providing a 266GB DB/WAL per 12TB OSD
rather than 480GB, i.e. 2.2% rather than 4%.
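
(To spell out the arithmetic: 4% of a 12TB OSD is 0.04 x 12,000GB = 480GB,
whereas 266GB / 12,000GB is roughly 2.2%.)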

Will "bad things" happen as the OSD fills with a smaller DB/WAL?

By the way, the cluster will mainly be serving CephFS with fairly large
files, and will use erasure coding.

many thanks for any advice,

Jake


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


