Re: cephadm trouble with OSD db- and wal-device placement (quincy)

I haven't done it myself, but I had to read through the documentation a couple of months ago, and what I gathered was:
1. If you have a db device specified but no wal device, it will put the wal on the same volume as the db.
2. The recommendation seems to be not to create a separate volume for the db and the wal when they sit on the same physical device.

So that should give you the failure mode you want, I think.
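
If that's right, a spec along these lines should do it: drop the wal_devices section entirely so the wal stays with the db. This is only a sketch based on my reading of the docs, not something I've run, and the service_id is just a placeholder:

service_type: osd
service_id: osd_spec_hdd_db_on_nvme   # placeholder name
placement:
  host_pattern: '*'
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1                     # the ten 10TB spinners
  db_devices:
    rotational: 0                     # the two NVMEs
  # no wal_devices section: with only a db device specified,
  # the wal ends up on the same volume as the db

With no "limit" and no wal_devices, ceph-volume should then spread the db volumes evenly across the two NVMEs, five per device, and each OSD's wal lives inside its db, so losing one NVME should only take out half of the OSDs.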

Can anyone else confirm this, or does anyone know it to be incorrect?

Thanks,
Kevin

________________________________________
From: Ulrich Pralle <Ulrich.Pralle@xxxxxxxxxxxx>
Sent: Tuesday, November 1, 2022 7:25 AM
To: ceph-users@xxxxxxx
Subject:  cephadm trouble with OSD db- and wal-device placement (quincy)

Hej,

We are using ceph version 17.2.0 on Ubuntu 22.04.1 LTS.

We've got several servers with the same setup and are facing a problem
with OSD deployment and db-/wal-device placement.

Each server consists of ten rotational disks (10TB each) and two NVME
devices (3TB each).

We would like to deploy each rotational disk with a db and wal device.

We want to place the db and wal devices of an OSD together on the same
NVME, so that only half of the OSDs are affected if one NVME fails.

We tried several osd service type specifications to achieve our
deployment goal.

Our best approach is:

service_type: osd
service_id: osd_spec_10x10tb-dsk_db_and_wal_on_2x3tb-nvme
service_name: osd.osd_spec_10x10tb-dsk_db_and_wal_on_2x3tb-nvme
placement:
   host_pattern: '*'
unmanaged: true
spec:
   data_devices:
     model: MG[redacted]
     rotational: 1
   db_devices:
     limit: 1
     model: MZ[redacted]
     rotational: 0
   filter_logic: OR
   objectstore: bluestore
   wal_devices:
     limit: 1
     model: MZ[redacted]
     rotational: 0

This service spec deploys ten OSDs with all db-devices on one NVME and
all wal-devices on the second NVME.

If we omit "limit: 1", cephadm deploys ten OSDs with the db-devices
distributed equally across both NVMEs and no wal-devices at all, even
though half of the NVMEs' capacity remains unused.

What's the best way to do this?

Does that even make sense?

Thank you very much and with kind regards
Uli
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


