Re: cephadm automatic sizing of WAL/DB on SSD

Hi Patrick,

On 7/28/22 16:22, Calhoun, Patrick wrote:
> In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like to have "ceph orch" allocate WAL and DB on the ssd devices.
> 
> I use the following service spec:
> spec:
>   data_devices:
>     rotational: 1
>     size: '14T:'
>   db_devices:
>     rotational: 0
>     size: '1T:'
>   db_slots: 12
> 
> This results in each OSD having a 60GB volume for WAL/DB, which equates to 50% total usage in the VG on each ssd, and 50% free.
> I honestly don't know what size to expect, but exactly 50% of capacity makes me suspect this is due to a bug:
> https://tracker.ceph.com/issues/54541
> (In fact, I had run into this bug when specifying block_db_size rather than db_slots)
> 
> Questions:
>   Am I being bitten by that bug?
>   Is there a better approach, in general, to my situation?
>   Are DB sizes still governed by the rocksdb tiering? (I thought that this was mostly resolved by https://github.com/ceph/ceph/pull/29687 )
>   If I provision a 61GB DB/WAL logical volume, is that effectively a 30GB database plus 30GB of extra room for compaction?

I don't use cephadm, but it may be related to this regression:
https://tracker.ceph.com/issues/56031. At least the symptoms look very
similar...
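
If it helps while that regression is open: you could double-check what
would actually be allocated before (re)deploying. "ceph orch apply
--dry-run" previews the OSDs a spec would create, and ceph-volume's batch
report shows the planned LV sizes. A rough sketch only (untested by me;
the file name and device paths below are placeholders):

  # Preview the OSDs cephadm would create from this spec, without
  # deploying anything.
  ceph orch apply -i osd-spec.yaml --dry-run

  # On an OSD host, ask ceph-volume to report its planned sizing for the
  # given devices (placeholders for the 24 HDDs and 2 SSDs).
  ceph-volume lvm batch --report /dev/sd[a-x] --db-devices /dev/sdy /dev/sdz

That at least lets you see whether the 50% sizing shows up in the report
before any OSDs are created.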

Cheers,

-- 
Arthur Outhenin-Chalandre
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


