Hello Jake,
you can use 2.2% as well, and performance will most of the time be better than without a DB/WAL at all. However, if the DB/WAL fills up, it spills over to the regular drive, and performance drops back to roughly what you would see without a DB/WAL drive.
I believe you could use "ceph daemon osd.X perf dump" and look for "db_used_bytes" and "wal_used_bytes" to check the current usage, but no guarantee from my side.
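Untested from my side, but on the OSD node something like the following should print the relevant BlueFS counters (replace osd.0 with your actual OSD id):

    ceph daemon osd.0 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, wal_used_bytes, slow_used_bytes}'

    # or without jq, just grep the pretty-printed JSON:
    ceph daemon osd.0 perf dump | grep -E 'db_(total|used)_bytes|wal_used_bytes|slow_used_bytes'

If I remember correctly, a non-zero "slow_used_bytes" would mean BlueFS has already spilled over onto the slow device.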
As far as I know, it is fine to choose a value in the 2-4% range, depending on your usage and configuration.
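To put that into numbers for your 12TB drives: 2% is roughly 240GB and 4% roughly 480GB per OSD, so your planned 266GB sits at the lower end of that range.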
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Tue, 28 May 2019 at 18:28, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
Hi Martin,
thanks for your reply :)
We already have a separate NVMe SSD pool for cephfs metadata.
I agree it's much simpler & more robust not to use a separate DB/WAL, but
as we have enough money for a 1.6TB SSD for every 6 HDDs, it's
tempting to go down that route. If people think a 2.2% DB/WAL is a bad
idea, we will definitely have a re-think.
Perhaps I'm being greedy for more performance; we have a 250-node HPC
cluster, and it would be nice to see how cephfs compares to our beegfs
scratch.
best regards,
Jake
On 5/28/19 3:14 PM, Martin Verges wrote:
> Hello Jake,
>
> do you have latency requirements that would require a DB/WAL at all?
> If not, CephFS with EC on SATA HDD works quite well as long as you have
> the metadata on a separate ssd pool.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges@xxxxxxxx <mailto:martin.verges@xxxxxxxx>
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
> On Tue, 28 May 2019 at 15:13, Jake Grimmett
> <jog@xxxxxxxxxxxxxxxxx <mailto:jog@xxxxxxxxxxxxxxxxx>> wrote:
>
> Dear All,
>
> Quick question regarding SSD sizing for a DB/WAL...
>
> I understand 4% is generally recommended for a DB/WAL.
>
> Does this 4% continue for "large" 12TB drives, or can we economise and
> use a smaller DB/WAL?
>
> Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD,
> rather than 480GB. i.e. 2.2% rather than 4%.
>
> Will "bad things" happen as the OSD fills with a smaller DB/WAL?
>
> By the way the cluster will mainly be providing CephFS, fairly large
> files, and will use erasure coding.
>
> many thanks for any advice,
>
> Jake
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx <mailto:ceph-users@xxxxxxxxxxxxxx>
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com