db/wal pvmoved OK, but GUI shows old metadata

Hello,

we have a Ceph 17.2.5 cluster with a total of 26 nodes, 15 of which have faulty NVMe drives
holding the OSDs' db/wal (one NVMe for the first 6 OSDs on each node and another for the remaining 6).

We replaced them with new drives and migrated the data with pvmove to avoid losing the OSDs.
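
For context, the migration on each node looked roughly like this; the VG name ceph-db-vg and the device names are only placeholders, not our real layout:

root@node02:/# pvcreate /dev/nvme0n1              # initialize the replacement NVMe
root@node02:/# vgextend ceph-db-vg /dev/nvme0n1   # add it to the db/wal volume group
root@node02:/# pvmove /dev/nvme1n1 /dev/nvme0n1   # move all extents off the failing drive
root@node02:/# vgreduce ceph-db-vg /dev/nvme1n1   # drop the old PV from the VG
root@node02:/# pvremove /dev/nvme1n1              # clear its LVM label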

So far, there are no issues, and the OSDs are functioning properly. 

Ceph itself sees the correct new disks:
root@node02:/# ceph daemon osd.26 list_devices
[
    {
        "device": "/dev/nvme0n1",
        "device_id": "INTEL_SSDPEDME016T4S_CVMD516500851P6KGN"
    },
    {
        "device": "/dev/sdc",
        "device_id": "SEAGATE_ST18000NM004J_ZR52TT830000C148JFSJ"
    }
]

However, the Cephadm GUI still shows the old NVMe drives and hasn't recognized the device change.
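
As far as I understand, the dashboard shows the orchestrator's cached device inventory rather than querying the OSDs directly; the same stale entries can be checked from the CLI:

root@node02:/# ceph orch device ls    # cephadm's cached per-host device inventory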

How can we make the GUI and Cephadm recognize the new devices? 

I tried restarting the managers, thinking they would rescan the OSDs during startup, but that didn't work.
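
For reference, "restarting the managers" here means failing the active mgr over to a standby:

root@node02:/# ceph mgr fail    # hand the active role over to a standby mgr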

If you have any ideas, I would appreciate it. 

Should I run something like this: ceph orch daemon reconfig osd.*
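
Or would forcing a fresh inventory scan be enough? For example (just a guess on my part):

root@node02:/# ceph orch device ls --refresh    # ask cephadm to re-scan devices on every host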

Thank you for your help.


