Re: Upgrade from Mimic to Pacific, hidden zone in RGW?

Hi,

do you have the zone information in the ceph.conf? Do they match on all rgw hosts? Do you see any orphans or anything suspicious in 'rados -p .rgw.root ls' output?
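
For example, a minimal sketch of what I'd look for (assuming the usual
rgw_realm / rgw_zonegroup / rgw_zone option names and that the daemons read
/etc/ceph/ceph.conf):

  grep -E 'rgw[ _](realm|zonegroup|zone)' /etc/ceph/ceph.conf   # on every rgw host
  rados -p .rgw.root ls | sort

In a single-zone setup .rgw.root usually holds one zone_info.<uuid> and one
zonegroup_info.<uuid> object plus their name entries; a second zone_info
object without a matching name entry would be a good candidate for the
"hidden" zone.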


Quoting Federico Lazcano <federico.lazcano@xxxxxxxxx>:

Hi everyone! I'm looking for help with an upgrade from Mimic.
I've managed to upgrade MON, MGR and OSD from Mimic to Nautilus, Octopus and
Pacific, in that order.

But I'm having trouble migrating the RGW service. It seems that when I added
two more RGW servers, they were somehow created in a different zone than the
original (Mimic) RGW servers.

------------------------------------------------
root@ceph11-test:/# ceph -s
  cluster:
    id:     dfade847-e28f-4551-99dc-21e3094d9c8f
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim   <--- There are still RGWs in Mimic.

  services:
    mon: 3 daemons, quorum ceph11-test,ceph12-test,ceph13-test (age 8d)
    mgr: ceph11-test(active, since 8d), standbys: ceph12-test, ceph13-test
    osd: 6 osds: 6 up (since 8d), 6 in (since 8d)
    rgw: 4 daemons active (4 hosts, 2 zones)   <---- **** TWO ZONES ????? ****

  data:
    pools:   8 pools, 416 pgs
    objects: 238.31k objects, 42 GiB
    usage:   88 GiB used, 2.8 TiB / 2.9 TiB avail
    pgs:     416 active+clean
------------------------------------------------

But I can't find a way to list the OTHER zone.
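
(One thing that might help cross-checking: as far as I can tell, 'ceph -s'
derives the zone count from the metadata each running radosgw registers with
the mgr, not from .rgw.root, so dumping the service map should show which
zone every daemon thinks it belongs to. The field names below are an
assumption, not output from this cluster.)

------------------------------------------------
ceph service dump -f json-pretty
# look under services -> rgw -> daemons -> <id> -> metadata,
# e.g. zone_name / zone_id / zonegroup_name, for each of the 4 daemons
------------------------------------------------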


------------------------------------------------
root@ceph11-test:/# radosgw-admin realm list
{
    "default_info": "4bd729f3-9e52-43d8-995c-8683d4bf4fbf",
    "realms": [
        "default"
    ]
}
root@ceph11-test:/# radosgw-admin zonegroup list
{
    "default_info": "47f2c5e8-f942-4e68-8cd9-6372a0ee6935",
    "zonegroups": [
        "default"
    ]
}
root@ceph11-test:/# radosgw-admin zone list
{
    "default_info": "c91cebf7-81c7-40e2-b107-2e58036cdb92",
    "zones": [
        "default"
    ]
}
------------------------------------------------
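
(The list commands above only enumerate named entries; dumping the full
configuration objects and the current period might still reveal a second
zone. These are standard radosgw-admin subcommands, shown as a sketch rather
than as output from this cluster.)

------------------------------------------------
radosgw-admin zone get --rgw-zone=default
radosgw-admin zonegroup get --rgw-zonegroup=default
radosgw-admin period get
------------------------------------------------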

There are only the default pools:

------------------------------------------------
root@ceph11-test:/# ceph osd pool ls
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data
.rgw.root
default.rgw.control
default.rgw.buckets.non-ec
device_health_metrics
------------------------------------------------

I'm using HAProxy to publish the RGW servers.
(extract from haproxy.cfg)
        server ceph-rgw3-test ceph-rgw3-test:7480 check fall 3 rise 2   # OLD servers in Mimic
        server ceph-rgw4-test ceph-rgw4-test:7480 check fall 3 rise 2   # OLD servers in Mimic
        server ceph11-test ceph11-test:7480 check fall 3 rise 2         # NEW servers in Pacific
        server ceph12-test ceph12-test:7480 check fall 3 rise 2         # NEW servers in Pacific

When I configure the old (Mimic) RGWs as the backend, everything works OK, but
if I configure the new (Pacific) RGWs I get HTTP 301 errors when I try to use
existing buckets.

*** with old RGW servers - Mimic ***
 s3cmd ls s3://test
2022-10-04 02:19      2097152   s3://test/s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3
2022-10-04 02:18      2097152   s3://test/s3loop-74bef337-92ea-4e83-938c-4865d4ee795a.bin.s3
 s3cmd get s3://test/s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3
download: 's3://test/s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3' -> './s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3'  [1 of 1]
 2097152 of 2097152   100% in    6s   307.83 KB/s  done

*** with new RGW servers - Pacific ***
 s3cmd ls s3://test
 (no results)
 s3cmd get s3://test/s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3
download: 's3://test/s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3' -> './s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3'  [1 of 1]
ERROR: Download of './s3loop-2fc8a8a9-6b93-4680-b0a3-6875efaa6cb4.bin.s3' failed (Reason: 404 (NoSuchBucket))
ERROR: S3 error: 404 (NoSuchBucket)


I suspect this behavior reflects that the old servers and the new servers are
in different zones... but I can't «see» the other zone's configuration.
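
(One way to narrow that down, assuming 'bucket stats' reports the owning
zonegroup the way current releases do: compare the bucket's zonegroup id
with the id of the default zonegroup. If they differ, the 301 from the
Pacific daemons would simply be RGW redirecting to what it believes is the
bucket's home zonegroup.)

------------------------------------------------
radosgw-admin bucket stats --bucket=test              # note the "zonegroup" id
radosgw-admin zonegroup get --rgw-zonegroup=default   # compare with its "id" field
------------------------------------------------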

Thanks in advance.

--
Federico Lazcano



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



