debugging radosgw sync errors

Hello again,

as my tests with some fresh clusters answered most of my config questions, I
have now started on our production cluster. The basic setup looks good, but
the sync does not work:

[root@3cecef5afb05 ~]# radosgw-admin sync status
          realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (company)
      zonegroup f6f3f550-89f0-4c0d-b9b0-301a06c52c16 (bc01)
           zone a7edb6fe-737f-4a1c-a333-0ba0566bb3dd (bc01)
  metadata sync preparing for full sync
                full sync: 64/64 shards
                full sync: 0 entries to sync
                failed to fetch master sync status: (5) Input/output error
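If it helps, I guess the next place to look would be the sync error list
(assuming it records these per-shard failures):

[root@3cecef5afb05 ~]# radosgw-admin sync error list

but so far I haven't found anything conclusive in there.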

[root@3cecef5afb05 ~]# radosgw-admin metadata sync run
2021-09-17 16:23:08.346 7f6c83c63840  0 meta sync: ERROR: failed to fetch
metadata sections
ERROR: sync.run() returned ret=-5
2021-09-17 16:23:08.474 7f6c83c63840  0 RGW-SYNC:meta: ERROR: failed to
fetch all metadata keys (r=-5)
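To get more detail I suppose I can rerun the sync with the usual debug
overrides (assuming --debug-rgw/--debug-ms work here like with the other
ceph tools):

[root@3cecef5afb05 ~]# radosgw-admin metadata sync run --debug-rgw=20 --debug-ms=1 2>&1 | tee /tmp/meta-sync.log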

And when I check "radosgw-admin period get", the sync_status is just an
array of empty strings:
[root@3cecef5afb05 ~]# radosgw-admin period get
{
    "id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
    "epoch": 71,
    "predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
    "sync_status": [
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",

How can I debug what is going wrong?
I tried digging into the logs and see a lot of these messages:
2021-09-17 14:06:04.144 7f755b4e7700  1 civetweb: 0x5641a22b33a8:
IPV6_OF_OUR_HAPROXY - - [17/Sep/2021:14:06:04 +0000] "GET
/admin/log/?type=metadata&status&rgwx-zonegroup=da651dc1-2663-4e1b-af2e-ac4454f24c9d
HTTP/1.1" 403 439 - -
2021-09-17 14:06:11.646 7f755f4ef700  1 civetweb: 0x5641a22ae4e8:
IPV6_OF_OUR_HAPROXY - - [17/Sep/2021:14:06:11 +0000] "POST
/admin/realm/period?period=e8fc96f1-ae86-4dc1-b432-470b0772fded&epoch=71&rgwx-zonegroup=da651dc1-2663-4e1b-af2e-ac4454f24c9d
HTTP/1.1" 403 439 - -

The 403 status makes me think I might have an access problem, but pulling
the realm/period from the master was successful, and the period commit from
the new cluster also worked fine.
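In case the exact invocations matter, the pull and commit were done the
standard way, roughly like this (endpoint and keys are placeholders for the
system user's credentials):

[root@3cecef5afb05 ~]# radosgw-admin realm pull --url=http://MASTER_ENDPOINT:8080 --access-key=ACCESS_KEY --secret=SECRET_KEY
[root@3cecef5afb05 ~]# radosgw-admin period pull --url=http://MASTER_ENDPOINT:8080 --access-key=ACCESS_KEY --secret=SECRET_KEY
[root@3cecef5afb05 ~]# radosgw-admin period update --commit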
-- 
The "UTF-8 problems" self-help group will, as an exception, meet in the
large hall this time.