Thanks, Arthur, I think you are right about that bug looking very similar to what I've observed. I'll try to remember to update the list once the fix is merged and released and I get a chance to test it.

I'm still hoping somebody can comment on Ceph's current best practices for sizing WAL/DB volumes, considering RocksDB levels and compaction.

-Patrick

________________________________
From: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@xxxxxxx>
Sent: Friday, July 29, 2022 2:11 AM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: cephadm automatic sizing of WAL/DB on SSD

Hi Patrick,

On 7/28/22 16:22, Calhoun, Patrick wrote:
> In a new OSD node with 24 HDDs (16 TB each) and 2 SSDs (1.44 TB each), I'd like to have "ceph orch" allocate WAL and DB on the SSD devices.
>
> I use the following service spec:
> spec:
>   data_devices:
>     rotational: 1
>     size: '14T:'
>   db_devices:
>     rotational: 0
>     size: '1T:'
>   db_slots: 12
>
> This results in each OSD having a 60 GB volume for WAL/DB, which equates to 50% total usage in the VG on each SSD, and 50% free.
> I honestly don't know what size to expect, but exactly 50% of capacity makes me suspect this is due to a bug (with 12 DB slots on a 1.44 TB SSD, I'd expect each slot to be roughly 120 GB, not 60 GB):
> https://tracker.ceph.com/issues/54541
> (In fact, I had run into this bug when specifying block_db_size rather than db_slots.)
>
> Questions:
> Am I being bitten by that bug?
> Is there a better approach, in general, to my situation?
> Are DB sizes still governed by the RocksDB tiering? (I thought that this was mostly resolved by https://github.com/ceph/ceph/pull/29687)
> If I provision a DB/WAL logical volume sized at 61 GB, is that effectively a 30 GB database, plus 30 GB of extra room for compaction?

I don't use cephadm, but it's maybe related to this regression:
https://tracker.ceph.com/issues/56031. At least, the symptoms look very
similar...

Cheers,

--
Arthur Outhenin-Chalandre
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
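
For comparison with the spec quoted above, below is a sketch of the same drive group expressed with an explicit block_db_size instead of db_slots, which is the variant Patrick mentions having tried. The service_id, placement, and the '120G' figure are illustrative assumptions, not values from the thread; 120G simply reflects a 1.44 TB SSD divided across 12 DB volumes, and per the tracker issues linked above this path could hit the same sizing bug until the fix is released. It also assumes a cephadm version that accepts a suffixed size string for block_db_size.

    service_type: osd
    service_id: hdd_osds_with_ssd_db      # illustrative name
    placement:
      host_pattern: '*'                   # illustrative placement
    spec:
      data_devices:
        rotational: 1
        size: '14T:'
      db_devices:
        rotational: 0
        size: '1T:'
      block_db_size: '120G'               # illustrative: ~1.44 TB SSD / 12 DBs per SSD

One practical advantage of an explicit size is that it is easy to check against the logical volumes ceph-volume actually creates (for example with "ceph-volume lvm list" on the host), whereas db_slots leaves the division of the VG up to ceph-volume and makes a sizing regression like the ones tracked above harder to spot.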