Re: All new osds are made orphans [SOLVED]

Hi,
thanks to the advice received (thank you Joachim, Anthony) I managed to solve this ... layer-8 problem.
Not a Ceph problem; I'm not proud of it, but it may help someone searching the archives.

My Ceph network is fully routed and I had a typo in the routing table, so many hosts couldn't reach each other.
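
For anyone hitting the same symptoms, here is roughly the set of checks that points at the network rather than at Ceph (just a sketch; the peer address below is a placeholder, not one of my real hosts):

# Which networks does the cluster expect? (Joachim's question)
ceph config dump | grep -E 'public_network|cluster_network'

# On each OSD host: does the routing table actually cover those subnets,
# and can the host reach its peers over them?
ip route show
ping -c 3 <peer-cluster-network-ip>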

The cluster is rebuilding and, so far, there is no data loss.
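
For the archives: the orphan OSDs are the ones whose address vectors in the OSD map are empty, so a one-liner like this (a sketch built on the jq query quoted further down) lists them all at once:

ceph osd dump --format json | jq '.osds[] | select(.public_addrs.addrvec | length == 0) | .osd'

As far as I understand, those address fields only get filled in once an OSD has successfully reported its boot to the mons, which the new OSDs could never do over the broken route.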

Ceph is rock solid, even against layer-8 issues.

Thanks to all
Best regards


Philippe


On 23 Sept 2024 at 10:13, joachim.kraftmayer@xxxxxxxxx wrote:

>
> Hi Phil,
>
> are the Ceph public and cluster networks set in your ceph config dump, or is there another ceph.conf on the local servers?
> Joachim
>
> joachim.kraftmayer@xxxxxxxxx
> www.clyso.com <http://www.clyso.com/>
> Hohenzollernstr. 27, 80801 Munich
> Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE2754306
>
> On Sat, 21 Sept 2024 at 23:31, Phil <infolist@xxxxxxxxxxxxxx> wrote:
>
>> Dug further.
>>
>> ceph osd dump --format json | jq '.osds[] | select(.osd==4)'
>>
>> {
>>   "osd": 4,
>>   "uuid": "OSD UID",
>>   "up": 0,
>>   "in": 0,
>>   "weight": 0,
>>   "primary_affinity": 1,
>>   "last_clean_begin": 0,
>>   "last_clean_end": 0,
>>   "up_from": 0,
>>   "up_thru": 0,
>>   "down_at": 0,
>>   "lost_at": 0,
>>   "public_addrs": {
>>     "addrvec": []
>>   },
>>   "cluster_addrs": {
>>     "addrvec": []
>>   },
>>   "heartbeat_back_addrs": {
>>     "addrvec": []
>>   },
>>   "heartbeat_front_addrs": {
>>     "addrvec": []
>>   },
>>   "public_addr": "(unrecognized address family 0)/0",
>>   "cluster_addr": "(unrecognized address family 0)/0",
>>   "heartbeat_back_addr": "(unrecognized address family 0)/0",
>>   "heartbeat_front_addr": "(unrecognized address family 0)/0",
>>   "state": [
>>     "autoout",
>>     "exists",
>>     "new"
>>   ]
>> }
>>
>> On every other healthy OSD, these fields are filled with proper values.
>>
>> The clues match, but I keep looking for the root cause.
>>
>> Best regards
>>
>>
>> Philippe
>>
>>
>>
>> On 21 Sept 2024 at 21:29, infolist@xxxxxxxxxxxxxx wrote:
>>
>> > Hi, on a healthy cluster, every OSD creation produces orphan OSDs.
>> >
>> >
>> > # ceph osd tree
>> > ID   CLASS  WEIGHT    TYPE NAME           STATUS  REWEIGHT  PRI-AFF
>> > -1         26.38051  root default                                 
>> > -7          3.63869      host A
>> >   1    hdd   3.63869          osd.1           up   1.00000  1.00000
>> > -11          1.81940      host F
>> >   2    hdd   1.81940          osd.2           up   1.00000  1.00000
>> > -5          7.27737      host J
>> >   3    hdd   3.63869          osd.3           up   1.00000  1.00000
>> >   5    hdd   3.63869          osd.5           up   1.00000  1.00000
>> > -3          7.27737      host K
>> >   0    hdd   3.63869          osd.0           up   1.00000  1.00000
>> >   8    hdd   3.63869          osd.8           up   1.00000  1.00000
>> > -9          6.36768      host S
>> >   6    hdd   3.63869          osd.6           up   1.00000  1.00000
>> >   7    hdd   2.72899          osd.7           up   1.00000  1.00000
>> >   4                0  osd.4                 down         0  1.00000
>> >   9                0  osd.9                 down         0  1.00000
>> >
>> > Neither osd.4 nor osd.9 appears in the (decompiled) CRUSH map.
>> > Destroying and recreating the OSDs just recreates orphan OSDs.
>> >
>> >
>> > Any hints?
>> > Thanks for all
>> > Best regards
>> > Philippe 
>> >
>>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



