Re: Correct procedure to replace RAID0 OSD

Den tis 21 juni 2022 kl 09:23 skrev Oliver Weinmann <oliver.weinmann@xxxxxx>:
>
> Hi, we have a small 3-node all-flash Ceph Pacific 16.2.7 cluster installed by cephadm.
> The RAID controller is an LSI MegaRAID SAS 2208 and it can't be run in IT mode
> (passthrough) by simply changing a BIOS setting. The IBM x3650 servers we use are
> pretty old, but they are just fine for this test cluster. By disassembling the RAID
> controller battery cache, the controller switches to IT mode; I have successfully done
> this in the past. Flashing to a different firmware seems possible, but I would like to
> do this only as a last resort. Because of this, the disks were created as single-disk
> RAID0 volumes. We would like to change this and would like to know the correct
> procedure. Since the OS disks are configured as RAID1 and also reside on this
> controller, I assume we need to do a complete reinstall and create a software RAID.
> That is not a big deal. But how should we proceed with regard to Ceph? Do we have to
> remove the complete node using cephadm, or just the OSDs? I tried to find an answer
> in the docs, but I guess our approach is pretty rare.
>
> Best Regards,
> Oliver

I don't see your operation as "rare" at all from the Ceph perspective.
Ceph doesn't care about the "why", only about the operation of removing
one or more OSDs on a host and later adding one or more new OSD drives.
From that viewpoint, it is all very routine: you could simply
crush-reweight the OSDs down to 0.0 and let the cluster move the data
over to the other hosts. Once the OSDs on this RAID are completely
empty, redo the drives in whatever fashion you like, add them back
again, and let the cluster move data back onto them.

https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#take-the-osd-out-of-the-cluster
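As a minimal sketch of that drain-and-replace flow on a cephadm-managed
cluster (the OSD IDs osd.3, osd.4 and osd.5 below are hypothetical
placeholders for the OSDs on the host being rebuilt):

  # Drain: move data off the OSDs that sit on the RAID0 volumes
  ceph osd crush reweight osd.3 0.0
  ceph osd crush reweight osd.4 0.0
  ceph osd crush reweight osd.5 0.0

  # Wait until the cluster is healthy and the OSDs hold no data
  ceph -s
  ceph osd df tree

  # Once empty, take them out, stop the daemons and purge them
  ceph osd out osd.3 osd.4 osd.5
  ceph orch daemon stop osd.3
  ceph osd purge osd.3 --yes-i-really-mean-it
  # (repeat stop/purge for the remaining OSDs)

  # After the controller and drives are redone, let cephadm
  # pick up the clean disks and recreate OSDs on them
  ceph orch apply osd --all-available-devices

Adjust to taste; the important part is simply draining before you touch
the controller, and re-adding afterwards.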




-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


