Re: SSD Sizing for DB/WAL: 4% for large drives?

The ~4% recommendation in the docs is misleading.

How much you really need depends on how you use it; for CephFS that means: are you going to put lots of small files on it, or mainly big files?
If you expect lots of small files, go for a DB that's > ~300 GB. For mostly large files you are probably fine with a 60 GB DB.

As pointed out by others: 266 GB is effectively the same as 60 GB, because BlueStore's DB usage jumps between RocksDB level boundaries (roughly 3, 30, and 300 GB), and anything between two boundaries only uses the lower one.

I expect the new Nautilus spillover warning to bite a lot of people who didn't know about the undocumented magic numbers for sizes ;)
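To illustrate the point above, here's a minimal sketch of why partition sizes between the level boundaries buy you nothing. It assumes the commonly cited ~3/30/300 GB RocksDB level boundaries (these are the "undocumented magic numbers"; the exact values depend on your RocksDB settings):

```python
# Effective DB capacity snaps down to the nearest RocksDB level boundary;
# space between boundaries sits unused (or spills over to the slow device).
# Boundaries below are an assumption based on common default settings.
LEVEL_BOUNDARIES_GB = [3, 30, 300]

def usable_db_gb(partition_gb: float) -> int:
    """Return the largest level boundary that fits on the partition."""
    usable = 0
    for boundary in LEVEL_BOUNDARIES_GB:
        if partition_gb >= boundary:
            usable = boundary
    return usable

for size in (60, 266, 480):
    print(f"{size} GB partition -> ~{usable_db_gb(size)} GB effectively used")
```

Under these assumptions a 60 GB and a 266 GB partition both give you ~30 GB of usable DB, while only a 300+ GB partition unlocks the next level.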

Paul


--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Tue, May 28, 2019 at 3:13 PM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
Dear All,

Quick question regarding SSD sizing for a DB/WAL...

I understand 4% is generally recommended for a DB/WAL.

Does this 4% rule still apply for "large" 12 TB drives, or can we economise and
use a smaller DB/WAL?

Ideally I'd fit a smaller drive providing a 266 GB DB/WAL per 12 TB OSD,
rather than 480 GB, i.e. 2.2% rather than 4%.

Will "bad things" happen as the OSD fills with a smaller DB/WAL?

By the way, the cluster will mainly be providing CephFS with fairly large
files, and will use erasure coding.

many thanks for any advice,

Jake


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
