Hi Jake,
just my 2 cents - I'd suggest using LVM for the DB/WAL so you can
seamlessly extend their sizes later if needed.
Once you've configured it this way, and if you're able to add more NVMe
later, you're almost free to select any size at the initial stage.
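For illustration, a minimal sketch of the kind of workflow I mean. The
device paths, VG/LV names, and sizes below (vg_nvme, db_osd0, /dev/nvme0n1,
/dev/sda, 266G, etc.) are just examples - adjust them for your own setup:

  # carve one LV per OSD out of the NVMe (names and sizes are examples)
  pvcreate /dev/nvme0n1
  vgcreate vg_nvme /dev/nvme0n1
  lvcreate -L 266G -n db_osd0 vg_nvme

  # point the OSD's DB (and implicitly its WAL) at that LV at creation time
  ceph-volume lvm create --data /dev/sda --block.db vg_nvme/db_osd0

  # later, if you add another NVMe, grow the VG and then the LV
  vgextend vg_nvme /dev/nvme1n1
  lvextend -L +200G vg_nvme/db_osd0

  # finally, with the OSD stopped, let BlueStore pick up the extra space
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0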
Thanks,
Igor
On 5/28/2019 4:13 PM, Jake Grimmett wrote:
Dear All,
Quick question regarding SSD sizing for a DB/WAL...
I understand 4% is generally recommended for a DB/WAL.
Does this 4% continue for "large" 12TB drives, or can we economise and
use a smaller DB/WAL?
Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD,
rather than 480GB, i.e. 2.2% rather than 4%.
Will "bad things" happen as the OSD fills with a smaller DB/WAL?
By the way, the cluster will mainly be providing CephFS with fairly large
files, and will use erasure coding.
many thanks for any advice,
Jake
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com