cephadm trouble with OSD db- and wal-device placement (quincy)

Hi,

we are using Ceph version 17.2.0 (Quincy) on Ubuntu 22.04.1 LTS.

We've got several servers with the same setup and are facing a problem with OSD deployment and db-/wal-device placement.

Each server has ten rotational disks (10 TB each) and two NVMe devices (3 TB each).

We would like to deploy each rotational disk as an OSD with its db and wal device on NVMe.

We want to place the db and wal devices of an OSD together on the same NVMe, so that if one NVMe fails, only half of the OSDs are affected.

We tried several OSD service specifications to achieve this deployment goal.

Our best approach is:

service_type: osd
service_id: osd_spec_10x10tb-dsk_db_and_wal_on_2x3tb-nvme
service_name: osd.osd_spec_10x10tb-dsk_db_and_wal_on_2x3tb-nvme
placement:
  host_pattern: '*'
unmanaged: true
spec:
  data_devices:
    model: MG[redacted]
    rotational: 1
  db_devices:
    limit: 1
    model: MZ[redacted]
    rotational: 0
  filter_logic: OR
  objectstore: bluestore
  wal_devices:
    limit: 1
    model: MZ[redacted]
    rotational: 0
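
As an aside, and as far as we understand it, the placement resulting from such a spec can be previewed before anything is deployed with a dry run (the filename is just a placeholder):

ceph orch apply -i osd_spec.yml --dry-run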

This service spec deploys ten OSDs with all db devices on one NVMe and all wal devices on the second NVMe.

If we omit "limit: 1", cephadm deploys ten OSDs with the db devices distributed evenly across both NVMes and no wal devices at all, even though half of the NVMes' capacity remains unused.
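
One variant we are considering but have not verified (a minimal sketch; the service_id is a placeholder we made up): drop the wal_devices section and specify only db_devices. Our assumption, based on our reading of the BlueStore docs, is that without a dedicated wal device the WAL is colocated with the DB, so each OSD's db and wal would end up together on the same NVMe, and, given the even distribution we saw without "limit: 1", a failing NVMe would only take out the five OSDs whose db/wal live on it:

service_type: osd
service_id: osd_spec_10x10tb-dsk_db_on_2x3tb-nvme   # placeholder name
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: MG[redacted]   # the rotational 10 TB disks
    rotational: 1
  db_devices:
    model: MZ[redacted]   # both 3 TB NVMes; no wal_devices section
    rotational: 0
  filter_logic: OR
  objectstore: bluestore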

What is the best way to do this?

Does that even make sense?

Thank you very much and with kind regards
Uli
