Re: Replacing OSD with DB on shared NVMe

That did it, thanks!

It seems like something that should be better documented and/or handled automatically when replacing drives.

And yeah, I know I don’t have to reapply my OSD spec, but doing so can be faster than waiting for the cluster to get around to it.
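(For anyone finding this in the archives later: "re-apply" here just means re-running the same apply against the unchanged spec file from the message below, e.g.

    ceph orch apply -i osdspec.yml

which prompts the orchestrator to act right away instead of waiting for its next device refresh.)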

Thanks again.

From: David Orman <ormandj@xxxxxxxxxxxx>
Sent: Wednesday, May 25, 2022 5:03 PM
To: Edward R Huyer <erhvks@xxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Replacing OSD with DB on shared NVMe

In your example, you can log in to the server that hosts the OSD in question and run "ceph-volume lvm zap --osd-id <osdid> --destroy"; that will purge the DB/WAL LV. You don't need to reapply your OSD spec: the orchestrator will detect the available space on the NVMe and redeploy that OSD.
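For reference, the replacement sequence discussed here looks roughly like this (the OSD id is a placeholder, and on a containerized cephadm deployment the ceph-volume call may need to run inside "cephadm shell" on that host):

    # on the host that owned the removed OSD: free the stale DB/WAL LV
    # left behind on the shared NVMe
    ceph-volume lvm zap --osd-id <osdid> --destroy

    # optional - the orchestrator should redeploy on its own, but
    # re-applying the spec can speed things up
    ceph orch apply -i osdspec.yml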

On Wed, May 25, 2022 at 3:37 PM Edward R Huyer <erhvks@xxxxxxx> wrote:
Ok, I'm not sure if I'm missing something or if this is a gap in ceph orch functionality, or what:

On a given host all the OSDs share a single large NVMe drive for DB/WAL storage and were set up using a simple ceph orch spec file.  I'm replacing some of the OSDs.  After they've been removed with the dashboard equivalent of "ceph orch osd rm # --replace" and a new drive has been swapped in, how do I get the OSD recreated using the chunk of NVMe for DB/WAL storage?  Because the NVMe has data and is still in use by other OSDs, the orchestrator doesn't seem to recognize it as a valid storage location, so it won't create the OSDs when I do "ceph orch apply -i osdspec.yml".
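For concreteness, the spec is something like the following (the service_id and device filters here are illustrative, not the actual file; it expresses "rotational drives as data devices, the shared NVMe as DB device"):

    service_type: osd
    service_id: osd_hdd_with_nvme_db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0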

Thoughts?

-----
Edward Huyer
Golisano College of Computing and Information Sciences
Rochester Institute of Technology
Golisano 70-2373
152 Lomb Memorial Drive
Rochester, NY 14623
585-475-6651
erhvks@xxxxxxx

Obligatory Legalese:
The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



