Re: [cephadm] Found duplicate OSDs

IIRC cephadm refreshes its daemon state roughly every 15 minutes, at least that was my impression the last time I looked. So sometimes you just have to be patient. :-)
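If you don't want to wait for the next cycle, you should be able to force a refresh of the cached daemon state, something along these lines (untested here, assuming a reasonably recent cephadm):

  # ask the orchestrator to refresh its daemon inventory instead of waiting
  ceph orch ps --refresh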


Quoting Satish Patel <satish.txt@xxxxxxxxx>:

Hi Eugen,

My error cleared up by itself. It looks like it just took some time, but now I am not
seeing any errors and the output is clean. Thank you so much.




On Fri, Oct 21, 2022 at 1:46 PM Eugen Block <eblock@xxxxxx> wrote:

Do you still see it with 'cephadm ls' on that node? If yes, you could
try 'cephadm rm-daemon --name osd.3'. Or you can try it with the
orchestrator: ceph orch daemon rm…
I don't have the exact command at the moment, you should check the docs.
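Something along these lines should work, but please verify against the docs for your release (the fsid is the cluster fsid, e.g. from 'ceph fsid'):

  # via the orchestrator, from a node with the admin keyring
  ceph orch daemon rm osd.3 --force

  # or directly on the affected host
  cephadm rm-daemon --name osd.3 --fsid <cluster fsid>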

Quoting Satish Patel <satish.txt@xxxxxxxxx>:

> Hi Eugen,
>
> I have deleted the osd.3 directory from the datastorn4 node as you mentioned, but
> I am still seeing that duplicate osd in the ps output.
>
> root@datastorn1:~# ceph orch ps | grep osd.3
> osd.3    datastorn4    stopped       5m ago   3w       -   42.6G  <unknown>  <unknown>     <unknown>
> osd.3    datastorn5    running (3w)  5m ago   3w   2587M   42.6G  17.2.3     0912465dcea5  d139f8a1234b
>
> How do I clean this up permanently?
>
>
> On Fri, Oct 21, 2022 at 6:24 AM Eugen Block <eblock@xxxxxx> wrote:
>
>> Hi,
>>
>> It looks like the OSDs haven't been cleaned up after removing them. Do
>> you see the osd directory in /var/lib/ceph/<UUID>/osd.3 on datastorn4?
>> Just remove the osd.3 directory, then cephadm won't try to activate it.
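>> On datastorn4 that would look something like this (a sketch, assuming the default cephadm layout, where <UUID> is the cluster fsid):
>>
>>   # check what cephadm still knows about on this host
>>   cephadm ls | grep osd.3
>>
>>   # remove the leftover daemon directory so cephadm stops trying to activate it
>>   rm -rf /var/lib/ceph/<UUID>/osd.3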
>>
>>
>> Quoting Satish Patel <satish.txt@xxxxxxxxx>:
>>
>> > Folks,
>> >
>> > I have deployed a 15-OSD-node cluster using cephadm and encountered a duplicate
>> > OSD on one of the nodes, and I am not sure how to clean that up.
>> >
>> > root@datastorn1:~# ceph health
>> > HEALTH_WARN 1 failed cephadm daemon(s); 1 pool(s) have no replicas configured
>> >
>> > osd.3 is duplicated on two nodes. I would like to remove it from
>> > datastorn4 but I'm not sure how to remove it. In the ceph osd tree I am not
>> > seeing any duplicate.
>> >
>> > root@datastorn1:~# ceph orch ps | grep osd.3
>> > osd.3    datastorn4    stopped       7m ago   3w       -   42.6G  <unknown>  <unknown>     <unknown>
>> > osd.3    datastorn5    running (3w)  7m ago   3w   2584M   42.6G  17.2.3     0912465dcea5  d139f8a1234b
>> >
>> >
>> > I am getting the following messages in the logs:
>> >
>> > 2022-10-21T09:10:45.226872+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098186 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
>> > 2022-10-21T09:11:46.254979+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098221 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
>> > 2022-10-21T09:12:53.009252+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098256 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
>> > 2022-10-21T09:13:59.283251+0000 mgr.datastorn1.nciiiu (mgr.14188) 1098293 : cephadm [INF] Found duplicate OSDs: osd.3 in status stopped on datastorn4, osd.3 in status running on datastorn5
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@xxxxxxx
>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx