Re: fixing unrepairable inconsistent PG

Can you try the following?

$ ceph --debug_ms 5 --debug_auth 20 pg 18.2 query

On Fri, Jun 22, 2018 at 7:54 PM, Andrei Mikhailovsky <andrei@xxxxxxxxxx> wrote:
> Hi Brad,
>
> Here is the output of the command (I have replaced the real auth key with [KEY]):
>
>
> ----------------
>
> 2018-06-22 10:47:27.659895 7f70ef9e6700 10 monclient: build_initial_monmap
> 2018-06-22 10:47:27.661995 7f70ef9e6700 10 monclient: init
> 2018-06-22 10:47:27.662002 7f70ef9e6700  5 adding auth protocol: cephx
> 2018-06-22 10:47:27.662004 7f70ef9e6700 10 monclient: auth_supported 2 method cephx
> 2018-06-22 10:47:27.662221 7f70ef9e6700  2 auth: KeyRing::load: loaded key file /etc/ceph/ceph.client.admin.keyring
> 2018-06-22 10:47:27.662338 7f70ef9e6700 10 monclient: _reopen_session rank -1
> 2018-06-22 10:47:27.662425 7f70ef9e6700 10 monclient(hunting): picked mon.noname-b con 0x7f70e8176c80 addr 192.168.168.202:6789/0
> 2018-06-22 10:47:27.662484 7f70ef9e6700 10 monclient(hunting): picked mon.noname-a con 0x7f70e817a2e0 addr 192.168.168.201:6789/0
> 2018-06-22 10:47:27.662534 7f70ef9e6700 10 monclient(hunting): _renew_subs
> 2018-06-22 10:47:27.662544 7f70ef9e6700 10 monclient(hunting): authenticate will time out at 2018-06-22 10:52:27.662543
> 2018-06-22 10:47:27.663831 7f70d77fe700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
> 2018-06-22 10:47:27.663885 7f70d77fe700 10 monclient(hunting):  got monmap 20, mon.noname-b is now rank -1
> 2018-06-22 10:47:27.663889 7f70d77fe700 10 monclient(hunting): dump:
> epoch 20
> fsid 51e9f641-372e-44ec-92a4-b9fe55cbf9fe
> last_changed 2018-06-16 23:14:48.936175
> created 0.000000
> 0: 192.168.168.201:6789/0 mon.arh-ibstorage1-ib
> 1: 192.168.168.202:6789/0 mon.arh-ibstorage2-ib
> 2: 192.168.168.203:6789/0 mon.arh-ibstorage3-ib
>
> 2018-06-22 10:47:27.664005 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664020 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664021 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664025 7f70d77fe700 10 cephx: set_have_need_key no handler for service auth
> 2018-06-22 10:47:27.664026 7f70d77fe700 10 cephx: validate_tickets want 53 have 0 need 53
> 2018-06-22 10:47:27.664032 7f70d77fe700 10 monclient(hunting): my global_id is 411322261
> 2018-06-22 10:47:27.664035 7f70d77fe700 10 cephx client: handle_response ret = 0
> 2018-06-22 10:47:27.664046 7f70d77fe700 10 cephx client:  got initial server challenge d66f2dffc2113d43
> 2018-06-22 10:47:27.664049 7f70d77fe700 10 cephx client: validate_tickets: want=53 need=53 have=0
>
> 2018-06-22 10:47:27.664052 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664053 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664054 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664055 7f70d77fe700 10 cephx: set_have_need_key no handler for service auth
> 2018-06-22 10:47:27.664056 7f70d77fe700 10 cephx: validate_tickets want 53 have 0 need 53
> 2018-06-22 10:47:27.664057 7f70d77fe700 10 cephx client: want=53 need=53 have=0
> 2018-06-22 10:47:27.664061 7f70d77fe700 10 cephx client: build_request
> 2018-06-22 10:47:27.664145 7f70d77fe700 10 cephx client: get auth session key: client_challenge d4c95f637e641b55
> 2018-06-22 10:47:27.664175 7f70d77fe700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
> 2018-06-22 10:47:27.664208 7f70d77fe700 10 monclient(hunting):  got monmap 20, mon.arh-ibstorage1-ib is now rank 0
> 2018-06-22 10:47:27.664211 7f70d77fe700 10 monclient(hunting): dump:
> epoch 20
> fsid 51e9f641-372e-44ec-92a4-b9fe55cbf9fe
> last_changed 2018-06-16 23:14:48.936175
> created 0.000000
> 0: 192.168.168.201:6789/0 mon.arh-ibstorage1-ib
> 1: 192.168.168.202:6789/0 mon.arh-ibstorage2-ib
> 2: 192.168.168.203:6789/0 mon.arh-ibstorage3-ib
>
> 2018-06-22 10:47:27.664241 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664244 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664245 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664246 7f70d77fe700 10 cephx: set_have_need_key no handler for service auth
> 2018-06-22 10:47:27.664247 7f70d77fe700 10 cephx: validate_tickets want 53 have 0 need 53
> 2018-06-22 10:47:27.664251 7f70d77fe700 10 monclient(hunting): my global_id is 411323061
> 2018-06-22 10:47:27.664253 7f70d77fe700 10 cephx client: handle_response ret = 0
> 2018-06-22 10:47:27.664256 7f70d77fe700 10 cephx client:  got initial server challenge d5d3c1e5bcf3c0b8
> 2018-06-22 10:47:27.664258 7f70d77fe700 10 cephx client: validate_tickets: want=53 need=53 have=0
> 2018-06-22 10:47:27.664260 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664261 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664262 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664263 7f70d77fe700 10 cephx: set_have_need_key no handler for service auth
> 2018-06-22 10:47:27.664264 7f70d77fe700 10 cephx: validate_tickets want 53 have 0 need 53
> 2018-06-22 10:47:27.664265 7f70d77fe700 10 cephx client: want=53 need=53 have=0
> 2018-06-22 10:47:27.664268 7f70d77fe700 10 cephx client: build_request
> 2018-06-22 10:47:27.664328 7f70d77fe700 10 cephx client: get auth session key: client_challenge d31821a6437d4974
> 2018-06-22 10:47:27.664651 7f70d77fe700 10 cephx client: handle_response ret = 0
> 2018-06-22 10:47:27.664667 7f70d77fe700 10 cephx client:  get_auth_session_key
> 2018-06-22 10:47:27.664673 7f70d77fe700 10 cephx: verify_service_ticket_reply got 1 keys
> 2018-06-22 10:47:27.664676 7f70d77fe700 10 cephx: got key for service_id auth
> 2018-06-22 10:47:27.664766 7f70d77fe700 10 cephx:  ticket.secret_id=3681
> 2018-06-22 10:47:27.664774 7f70d77fe700 10 cephx: verify_service_ticket_reply service auth secret_id 3681 session_key [KEY] validity=43200.000000
> 2018-06-22 10:47:27.664806 7f70d77fe700 10 cephx: ticket expires=2018-06-22 22:47:27.664805 renew_after=2018-06-22 19:47:27.664805
> 2018-06-22 10:47:27.664825 7f70d77fe700 10 cephx client:  want=53 need=53 have=0
> 2018-06-22 10:47:27.664827 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664829 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664830 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664832 7f70d77fe700 10 cephx: validate_tickets want 53 have 32 need 21
> 2018-06-22 10:47:27.664836 7f70d77fe700 10 cephx client: validate_tickets: want=53 need=21 have=32
> 2018-06-22 10:47:27.664837 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.664839 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.664840 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.664841 7f70d77fe700 10 cephx: validate_tickets want 53 have 32 need 21
> 2018-06-22 10:47:27.664842 7f70d77fe700 10 cephx client: want=53 need=21 have=32
> 2018-06-22 10:47:27.664844 7f70d77fe700 10 cephx client: build_request
> 2018-06-22 10:47:27.664846 7f70d77fe700 10 cephx client: get service keys: want=53 need=21 have=32
> 2018-06-22 10:47:27.664928 7f70d77fe700 10 cephx client: handle_response ret = 0
> 2018-06-22 10:47:27.664933 7f70d77fe700 10 cephx client:  get_auth_session_key
> 2018-06-22 10:47:27.664935 7f70d77fe700 10 cephx: verify_service_ticket_reply got 1 keys
> 2018-06-22 10:47:27.664937 7f70d77fe700 10 cephx: got key for service_id auth
> 2018-06-22 10:47:27.664985 7f70d77fe700 10 cephx:  ticket.secret_id=3681
> 2018-06-22 10:47:27.664987 7f70d77fe700 10 cephx: verify_service_ticket_reply service auth secret_id 3681 session_key [KEY] validity=43200.000000
> 2018-06-22 10:47:27.665009 7f70d77fe700 10 cephx: ticket expires=2018-06-22 22:47:27.665008 renew_after=2018-06-22 19:47:27.665008
> 2018-06-22 10:47:27.665017 7f70d77fe700 10 cephx client:  want=53 need=53 have=0
> 2018-06-22 10:47:27.665019 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.665020 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.665024 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.665026 7f70d77fe700 10 cephx: validate_tickets want 53 have 32 need 21
> 2018-06-22 10:47:27.665029 7f70d77fe700 10 cephx client: validate_tickets: want=53 need=21 have=32
> 2018-06-22 10:47:27.665031 7f70d77fe700 10 cephx: set_have_need_key no handler for service mon
> 2018-06-22 10:47:27.665032 7f70d77fe700 10 cephx: set_have_need_key no handler for service osd
> 2018-06-22 10:47:27.665033 7f70d77fe700 10 cephx: set_have_need_key no handler for service mgr
> 2018-06-22 10:47:27.665034 7f70d77fe700 10 cephx: validate_tickets want 53 have 32 need 21
> 2018-06-22 10:47:27.665035 7f70d77fe700 10 cephx client: want=53 need=21 have=32
> 2018-06-22 10:47:27.665037 7f70d77fe700 10 cephx client: build_request
> 2018-06-22 10:47:27.665039 7f70d77fe700 10 cephx client: get service keys: want=53 need=21 have=32
> 2018-06-22 10:47:27.665354 7f70d77fe700 10 cephx client: handle_response ret = 0
> 2018-06-22 10:47:27.665365 7f70d77fe700 10 cephx client:  get_principal_session_key session_key [KEY]
> 2018-06-22 10:47:27.665377 7f70d77fe700 10 cephx: verify_service_ticket_reply got 3 keys
> 2018-06-22 10:47:27.665379 7f70d77fe700 10 cephx: got key for service_id mon
> 2018-06-22 10:47:27.665419 7f70d77fe700 10 cephx:  ticket.secret_id=44133
> 2018-06-22 10:47:27.665425 7f70d77fe700 10 cephx: verify_service_ticket_reply service mon secret_id 44133 session_key [KEY] validity=3600.000000
> 2018-06-22 10:47:27.665437 7f70d77fe700 10 cephx: ticket expires=2018-06-22 11:47:27.665436 renew_after=2018-06-22 11:32:27.665436
> 2018-06-22 10:47:27.665443 7f70d77fe700 10 cephx: got key for service_id osd
> 2018-06-22 10:47:27.665476 7f70d77fe700 10 cephx:  ticket.secret_id=44133
> 2018-06-22 10:47:27.665478 7f70d77fe700 10 cephx: verify_service_ticket_reply service osd secret_id 44133 session_key [KEY] validity=3600.000000
> 2018-06-22 10:47:27.665497 7f70d77fe700 10 cephx: ticket expires=2018-06-22 11:47:27.665496 renew_after=2018-06-22 11:32:27.665496
> 2018-06-22 10:47:27.665506 7f70d77fe700 10 cephx: got key for service_id mgr
> 2018-06-22 10:47:27.665539 7f70d77fe700 10 cephx:  ticket.secret_id=132
> 2018-06-22 10:47:27.665546 7f70d77fe700 10 cephx: verify_service_ticket_reply service mgr secret_id 132 session_key [KEY] validity=3600.000000
> 2018-06-22 10:47:27.665564 7f70d77fe700 10 cephx: ticket expires=2018-06-22 11:47:27.665564 renew_after=2018-06-22 11:32:27.665564
> 2018-06-22 10:47:27.665573 7f70d77fe700 10 cephx: validate_tickets want 53 have 53 need 0
> 2018-06-22 10:47:27.665602 7f70d77fe700  1 monclient: found mon.arh-ibstorage2-ib
> 2018-06-22 10:47:27.665617 7f70d77fe700 20 monclient: _un_backoff reopen_interval_multipler now 1
> 2018-06-22 10:47:27.665636 7f70d77fe700 10 monclient: _send_mon_message to mon.arh-ibstorage2-ib at 192.168.168.202:6789/0
> 2018-06-22 10:47:27.665656 7f70d77fe700 10 cephx: validate_tickets want 53 have 53 need 0
> 2018-06-22 10:47:27.665658 7f70d77fe700 20 cephx client: need_tickets: want=53 have=53 need=0
> 2018-06-22 10:47:27.665661 7f70d77fe700 20 monclient: _check_auth_rotating not needed by client.admin
> 2018-06-22 10:47:27.665678 7f70ef9e6700  5 monclient: authenticate success, global_id 411322261
> 2018-06-22 10:47:27.665694 7f70ef9e6700 10 monclient: _renew_subs
> 2018-06-22 10:47:27.665698 7f70ef9e6700 10 monclient: _send_mon_message to mon.arh-ibstorage2-ib at 192.168.168.202:6789/0
> 2018-06-22 10:47:27.665817 7f70ef9e6700 10 monclient: _renew_subs
> 2018-06-22 10:47:27.665828 7f70ef9e6700 10 monclient: _send_mon_message to mon.arh-ibstorage2-ib at 192.168.168.202:6789/0
> 2018-06-22 10:47:27.666069 7f70d77fe700 10 monclient: handle_monmap mon_map magic: 0 v1
> 2018-06-22 10:47:27.666102 7f70d77fe700 10 monclient:  got monmap 20, mon.arh-ibstorage2-ib is now rank 1
> 2018-06-22 10:47:27.666110 7f70d77fe700 10 monclient: dump:
>
> epoch 20
> fsid 51e9f641-372e-44ec-92a4-b9fe55cbf9fe
> last_changed 2018-06-16 23:14:48.936175
> created 0.000000
> 0: 192.168.168.201:6789/0 mon.arh-ibstorage1-ib
> 1: 192.168.168.202:6789/0 mon.arh-ibstorage2-ib
> 2: 192.168.168.203:6789/0 mon.arh-ibstorage3-ib
>
> 2018-06-22 10:47:27.666617 7f70eca43700 10 cephx client: build_authorizer for service mgr
> 2018-06-22 10:47:27.667043 7f70eca43700 10 In get_auth_session_handler for protocol 2
> 2018-06-22 10:47:27.678417 7f70eda45700 10 cephx client: build_authorizer for service osd
> 2018-06-22 10:47:27.678914 7f70eda45700 10 In get_auth_session_handler for protocol 2
> 2018-06-22 10:47:27.679003 7f70eda45700 10 _calc_signature seq 1 front_crc_ = 2696387361 middle_crc = 0 data_crc = 0 sig = 929021353460216573
> 2018-06-22 10:47:27.679026 7f70eda45700 20 Putting signature in client message(seq # 1): sig = 929021353460216573
> 2018-06-22 10:47:27.679520 7f70eda45700 10 _calc_signature seq 1 front_crc_ = 1943489909 middle_crc = 0 data_crc = 0 sig = 10026640535487722288
> Error EPERM: problem getting command descriptions from pg.18.2
> 2018-06-22 10:47:27.681798 7f70ef9e6700 10 monclient: shutdown
>
>
> -----------------
>
>
> From what I can see, the auth works:
>
> 2018-06-22 10:47:27.665678 7f70ef9e6700  5 monclient: authenticate success, global_id 411322261
>
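> Given that authenticating to the monitors succeeds but the pg query still
> gets EPERM when fetching the command descriptions, I am wondering whether
> the caps on the admin key are worth double-checking as well. A quick way to
> dump them (assuming the standard client.admin user) would be something like:
>
> ceph auth get client.admin
>
> which I would expect to show caps along the lines of mon 'allow *',
> osd 'allow *' and mgr 'allow *'.
>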
>
>
>
> ----- Original Message -----
>> From: "Brad Hubbard" <bhubbard@xxxxxxxxxx>
>> To: "Andrei" <andrei@xxxxxxxxxx>
>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>> Sent: Friday, 22 June, 2018 02:05:51
>> Subject: Re:  fixing unrepairable inconsistent PG
>
>> That seems like an authentication issue?
>>
>> Try running it like so...
>>
>> $ ceph --debug_monc 20 --debug_auth 20 pg 18.2 query
>>
>> On Thu, Jun 21, 2018 at 12:18 AM, Andrei Mikhailovsky <andrei@xxxxxxxxxx> wrote:
>>> Hi Brad,
>>>
>>> Yes, but it doesn't show much:
>>>
>>> ceph pg 18.2 query
>>> Error EPERM: problem getting command descriptions from pg.18.2
>>>
>>> Cheers
>>>
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Brad Hubbard" <bhubbard@xxxxxxxxxx>
>>>> To: "andrei" <andrei@xxxxxxxxxx>
>>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>>> Sent: Wednesday, 20 June, 2018 00:02:07
>>>> Subject: Re:  fixing unrepairable inconsistent PG
>>>
>>>> Can you post the output of a pg query?
>>>>
>>>> On Tue, Jun 19, 2018 at 11:44 PM, Andrei Mikhailovsky <andrei@xxxxxxxxxx> wrote:
>>>>> A quick update on my issue. I have noticed that while I was moving the
>>>>> problem object around on the osds, the file attributes got lost on one of
>>>>> the osds, which I guess is why the error messages complained about the
>>>>> missing attributes.
>>>>>
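>>>>> In case it is useful, this is roughly how the attributes can be inspected
>>>>> on the object files (a rough sketch only; the filestore path and the exact
>>>>> xattr names are from memory, so treat them as assumptions):
>>>>>
>>>>> # list the ceph xattrs on the object file of each replica; the object info
>>>>> # and snapset that the scrub complains about should appear as user.ceph._
>>>>> # and user.ceph.snapset
>>>>> getfattr -d -m '.*' -e base64 /var/lib/ceph/osd/ceph-21/current/18.2_head/<object file>
>>>>>
>>>>> # with the osd stopped, the same attributes can also be read via
>>>>> # ceph-objectstore-tool (add --journal-path if the osd uses a journal)
>>>>> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
>>>>>     --pgid 18.2 '.dir.default.80018061.2' list-attrs
>>>>>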
>>>>> I then copied the attribute metadata back onto the problematic object and
>>>>> restarted the osds in question. Following a pg repair, I got a different
>>>>> error:
>>>>>
>>>>> 2018-06-19 13:51:05.846033 osd.21 osd.21 192.168.168.203:6828/24339 2 :
>>>>> cluster [ERR] 18.2 shard 21: soid 18:45f87722:::.dir.default.80018061.2:head
>>>>> omap_digest 0x25e8a1da != omap_digest 0x21c7f871 from auth oi
>>>>> 18:45f87722:::.dir.default.80018061.2:head(106137'603495 osd.21.0:41403910
>>>>> dirty|omap|data_digest|omap_digest s 0 uv 603494 dd ffffffff od 21c7f871
>>>>> alloc_hint [0 0 0])
>>>>> 2018-06-19 13:51:05.846042 osd.21 osd.21 192.168.168.203:6828/24339 3 :
>>>>> cluster [ERR] 18.2 shard 28: soid 18:45f87722:::.dir.default.80018061.2:head
>>>>> omap_digest 0x25e8a1da != omap_digest 0x21c7f871 from auth oi
>>>>> 18:45f87722:::.dir.default.80018061.2:head(106137'603495 osd.21.0:41403910
>>>>> dirty|omap|data_digest|omap_digest s 0 uv 603494 dd ffffffff od 21c7f871
>>>>> alloc_hint [0 0 0])
>>>>> 2018-06-19 13:51:05.846046 osd.21 osd.21 192.168.168.203:6828/24339 4 :
>>>>> cluster [ERR] 18.2 soid 18:45f87722:::.dir.default.80018061.2:head: failed
>>>>> to pick suitable auth object
>>>>> 2018-06-19 13:51:05.846118 osd.21 osd.21 192.168.168.203:6828/24339 5 :
>>>>> cluster [ERR] repair 18.2 18:45f87722:::.dir.default.80018061.2:head no '_'
>>>>> attr
>>>>> 2018-06-19 13:51:05.846129 osd.21 osd.21 192.168.168.203:6828/24339 6 :
>>>>> cluster [ERR] repair 18.2 18:45f87722:::.dir.default.80018061.2:head no
>>>>> 'snapset' attr
>>>>> 2018-06-19 13:51:09.810878 osd.21 osd.21 192.168.168.203:6828/24339 7 :
>>>>> cluster [ERR] 18.2 repair 4 errors, 0 fixed
>>>>>
>>>>> It mentions that there is an incorrect omap_digest. How do I go about
>>>>> fixing this?
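>>>>>
>>>>> (If it helps the diagnosis, I believe the per-shard scrub findings,
>>>>> including the differing omap digests, can be dumped with something like
>>>>> the following, assuming a reasonably recent rados CLI:)
>>>>>
>>>>> rados list-inconsistent-obj 18.2 --format=json-pretty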
>>>>>
>>>>> Cheers
>>>>>
>>>>> ________________________________
>>>>>
>>>>> From: "andrei" <andrei@xxxxxxxxxx>
>>>>> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>>>> Sent: Tuesday, 19 June, 2018 11:16:22
>>>>> Subject:  fixing unrepairable inconsistent PG
>>>>>
>>>>> Hello everyone
>>>>>
>>>>> I am having trouble repairing one inconsistent and stubborn PG. I get the
>>>>> following error in ceph.log:
>>>>>
>>>>>
>>>>>
>>>>> 2018-06-19 11:00:00.000225 mon.arh-ibstorage1-ib mon.0
>>>>> 192.168.168.201:6789/0 675 : cluster [ERR] overall HEALTH_ERR noout flag(s)
>>>>> set; 4 scrub errors; Possible data damage: 1 pg inconsistent; application
>>>>> not enabled on 4 pool(s)
>>>>> 2018-06-19 11:09:24.586392 mon.arh-ibstorage1-ib mon.0
>>>>> 192.168.168.201:6789/0 841 : cluster [ERR] Health check update: Possible
>>>>> data damage: 1 pg inconsistent, 1 pg repair (PG_DAMAGED)
>>>>> 2018-06-19 11:09:27.139504 osd.21 osd.21 192.168.168.203:6828/4003 2 :
>>>>> cluster [ERR] 18.2 soid 18:45f87722:::.dir.default.80018061.2:head: failed
>>>>> to pick suitable object info
>>>>> 2018-06-19 11:09:27.139545 osd.21 osd.21 192.168.168.203:6828/4003 3 :
>>>>> cluster [ERR] repair 18.2 18:45f87722:::.dir.default.80018061.2:head no '_'
>>>>> attr
>>>>> 2018-06-19 11:09:27.139550 osd.21 osd.21 192.168.168.203:6828/4003 4 :
>>>>> cluster [ERR] repair 18.2 18:45f87722:::.dir.default.80018061.2:head no
>>>>> 'snapset' attr
>>>>>
>>>>> 2018-06-19 11:09:35.484402 osd.21 osd.21 192.168.168.203:6828/4003 5 :
>>>>> cluster [ERR] 18.2 repair 4 errors, 0 fixed
>>>>> 2018-06-19 11:09:40.601657 mon.arh-ibstorage1-ib mon.0
>>>>> 192.168.168.201:6789/0 844 : cluster [ERR] Health check update: Possible
>>>>> data damage: 1 pg inconsistent (PG_DAMAGED)
>>>>>
>>>>>
>>>>> I have tried to follow a few sets of instructions on repairing the PG,
>>>>> including removing the 'broken' object .dir.default.80018061.2 from the
>>>>> primary osd followed by a pg repair. When that didn't work, I did the same
>>>>> for the secondary osd. Still the same issue.
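>>>>>
>>>>> (For reference, in case the removal method matters: one way to remove an
>>>>> object copy is with the osd stopped, via ceph-objectstore-tool, roughly as
>>>>> below; the data/journal paths are examples and the object may need to be
>>>>> given as the JSON printed by --op list.)
>>>>>
>>>>> # find the object inside the PG and print its JSON identifier
>>>>> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
>>>>>     --journal-path /var/lib/ceph/osd/ceph-21/journal \
>>>>>     --pgid 18.2 --op list .dir.default.80018061.2
>>>>>
>>>>> # then remove that copy before re-running the repair
>>>>> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
>>>>>     --journal-path /var/lib/ceph/osd/ceph-21/journal \
>>>>>     --pgid 18.2 '<json from the list above>' remove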
>>>>>
>>>>> Looking at the actual object on the file system, the file size is 0 for
>>>>> both the primary and secondary copies, and the md5sums match too. The
>>>>> broken PG belongs to the radosgw index pool .rgw.buckets.index.
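>>>>>
>>>>> (The zero-length files are probably expected here, if I understand it
>>>>> right: bucket index objects keep their contents in omap rather than in the
>>>>> file data, which would also explain why the scrub errors are about
>>>>> omap_digest. If it is useful, the omap keys of each replica could be
>>>>> compared with something like the following, run against each osd with that
>>>>> osd stopped; the path is an example:)
>>>>>
>>>>> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
>>>>>     --pgid 18.2 '.dir.default.80018061.2' list-omap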
>>>>>
>>>>> What else can I try to get the thing fixed?
>>>>>
>>>>> Cheers
>>>>>
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> ceph-users@xxxxxxxxxxxxxx
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cheers,
>>>> Brad
>>
>>
>>
>> --
>> Cheers,
>> Brad



-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


