Re: Is replacing OSD whose data is on HDD and DB is on SSD supported?

To update and close this thread: what I am looking for is not supported yet.
"ceph-volume lvm batch" requires clean devices; it does not work to reuse the
old DB LV or to create a new DB LV on the partially used SSD. I followed
https://tracker.ceph.com/issues/46691 and used "ceph-volume lvm prepare"
to make this work.

Thanks!
Tony
________________________________________
From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
Sent: February 14, 2021 02:01 PM
To: ceph-users@xxxxxxx; dev
Subject:  Is replacing OSD whose data is on HDD and DB is on SSD supported?

Hi,

I've been trying with v15.2 and v15.2.8 with no luck.
I'm wondering whether this is actually supported or has ever worked for anyone.

Here is what I've done.
1) Create a cluster with 1 controller (mon and mgr) and 3 OSD nodes,
   each of which has 1 SSD for DB and 8 HDDs for data.
2) OSD service spec.
service_type: osd
service_id: osd-spec
placement:
  hosts:
  - ceph-osd-1
  - ceph-osd-2
  - ceph-osd-3
spec:
  block_db_size: 92341796864
  data_devices:
    model: ST16000NM010G
  db_devices:
    model: KPM5XRUG960G
3) Add the OSD hosts and apply the OSD service spec. 8 OSDs (data on HDD
   and DB on SSD) are created properly on each host. (The commands behind
   steps 3 to 5 are sketched after the cephadm log in step 6.)
4) Run "orch osd rm 1 --replace --force". OSD is marked "destroyed" and
   reweight is set to 0 in "osd tree". "pg dump" shows no PG on that OSD.
   "orch ps" shows no daemon running for that OSD.
5) Run "orch device zap <host> <device>". VG and LV for HDD are removed.
   LV for DB stays. "orch device ls" shows HDD device is available.
6) Cephadm finds the OSD claims and applies the OSD spec on the host.
   Here is the message:
   ============================
   cephadm [INF] Found osd claims -> {'ceph-osd-1': ['1']}
   cephadm [INF] Found osd claims for drivegroup osd-spec -> {'ceph-osd-1': ['1']}
   cephadm [INF] Applying osd-spec on host ceph-osd-1...
   cephadm [INF] Applying osd-spec on host ceph-osd-2...
   cephadm [INF] Applying osd-spec on host ceph-osd-3...
   cephadm [INF] ceph-osd-1: lvm batch --no-auto /dev/sdc /dev/sdd
     /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
     --db-devices /dev/sdb --block-db-size 92341796864
     --osd-ids 1 --yes --no-systemd
   code: 0
   out: ['']
   err: ['/bin/docker:stderr --> passed data devices: 8 physical, 0 LVM',
   '/bin/docker:stderr --> relative data size: 1.0',
   '/bin/docker:stderr --> passed block_db devices: 1 physical, 0 LVM',
   '/bin/docker:stderr --> 1 fast devices were passed, but none are available']
   ============================
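
For reference, here is roughly the command sequence behind steps 3 to 5
above (host and device names match the example, and the spec file name is
arbitrary):

   ceph orch host add ceph-osd-1                     # repeat per OSD host
   ceph orch apply osd -i osd-spec.yaml              # step 3
   ceph orch osd rm 1 --replace --force              # step 4
   ceph orch device zap ceph-osd-1 /dev/sdc --force  # step 5
   ceph orch device ls                               # HDD shows as available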

Q1. Is the DB LV on the SSD supposed to be deleted or not, when replacing an
    OSD whose data is on HDD and DB is on SSD?
Q2. If yes to Q1, is a new DB LV supposed to be created on the SSD, as long
    as there is sufficient free space, when building the new OSD?
Q3. If no to Q1, since this is a replacement, is the old DB LV going to be
    reused for the new OSD?

Again, is this actually supposed to work? Am I missing anything, or am I just
trying an unsupported feature?


Thanks!
Tony

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



