Re: radosgw not working - upgraded from mimic to octopus

Is anyone running octopus (v15)? Could you please share your experience with
radosgw-admin performance?

A simple 'radosgw-admin user list' took 11 minutes with the v15.2.8 binary;
with a v13.2.4 radosgw-admin binary, the same command finishes in a few
seconds.
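
For reference, the comparison was essentially the following; the filename of
the old binary is illustrative, I simply ran a v13.2.4 radosgw-admin against
the same cluster:

  time radosgw-admin user list              # v15.2.8 binary: ~11 minutes
  time ./radosgw-admin-13.2.4 user list     # v13.2.4 binary: a few seconds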

This looks like a performance regression to me. I've already filed a bug
report (https://tracker.ceph.com/issues/48983), but there has been no
feedback so far.

On Mon, Jan 25, 2021 at 10:06 AM Youzhong Yang <youzhong@xxxxxxxxx> wrote:

> I upgraded our Ceph cluster (6 bare-metal nodes, 3 rgw VMs) from v13.2.4
> to v15.2.8. The mon, mgr, mds and osd daemons were all upgraded
> successfully, and everything looked good.
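>
> (A minimal sketch of how the post-upgrade state would typically be checked;
> these commands are for illustration, the actual output isn't reproduced here:)
>
>   ceph versions   # per-daemon breakdown of running versions
>   ceph -s         # overall cluster status and health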
>
> After the radosgw daemons were upgraded, they refused to work; the log
> messages are at the end of this e-mail.
>
> Here are the things I tried:
>
> 1. I moved aside the pools for the rgw service and started from scratch
> (creating the realm, zonegroup, zone, and users; see the command sketch
> after this list). When I tried to run 'radosgw-admin user create ...',
> it appeared to be stuck and never returned; other commands like
> 'radosgw-admin period update --commit' also got stuck.
>
> 2. I rolled back radosgw to the old version v13.2.4, and then everything
> worked great again.
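>
> A rough sketch of step 1 (the realm/zonegroup/zone/user names below are
> placeholders, not the actual values I used):
>
>   radosgw-admin realm create --rgw-realm=myrealm --default
>   radosgw-admin zonegroup create --rgw-zonegroup=myzg --rgw-realm=myrealm --master --default
>   radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --master --default
>   radosgw-admin period update --commit                                  # got stuck on v15.2.8
>   radosgw-admin user create --uid=testuser --display-name="Test User"   # also got stuck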
>
> What am I missing here? Is there anything extra that needs to be done for
> rgw after upgrading from mimic to octopus?
>
> Please kindly help. Thanks.
>
> ---------------------
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 898
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework: civetweb
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
> request_timeout_ms, val: 50000
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:29:10.195-0500 7f638cbcd700 -1 Initialization timeout,
> failed to initialize
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 1541
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework: civetweb
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
> request_timeout_ms, val: 50000
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:29:25.883-0500 7f4c213ba9c0  1 robust_notify: If at first
> you don't succeed: (110) Connection timed out
> 2021-01-24T09:29:25.883-0500 7f4c213ba9c0  0 ERROR: failed to distribute
> cache for coredumps.rgw.log:meta.history
> 2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 978
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework: civetweb
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
> request_timeout_ms, val: 50000
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  1 robust_notify: If at first
> you don't succeed: (110) Connection timed out
> 2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  0 ERROR: failed to distribute
> cache for coredumps.rgw.log:meta.history
>
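> (Side note: the 'framework conf key' lines above correspond to an
> rgw_frontends setting roughly like the one below in ceph.conf; the client
> section name is illustrative:)
>
>   [client.rgw.gateway1]
>   rgw_frontends = civetweb port=80 num_threads=1024 request_timeout_ms=50000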
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


