Re: Radosgw dynamic sharding jewel -> luminous

The default value of this reshard pool is "default.rgw.log:reshard". You can check 'radosgw-admin zone get' for the list of pool names and namespaces in use. Since the cluster started out on Jewel, your log pool may be named ".rgw.log" instead, in which case you could set reshard_pool to ".rgw.log:reshard" to share that pool.
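Something along these lines should do it (untested sketch; I'm assuming a
single zone named "default" here, so adjust the zone name and the pool to
whatever 'zone get' reports on your cluster):

$ radosgw-admin zone get --rgw-zone=default > zone.json
  (edit the "reshard_pool" field in zone.json, e.g. to ".rgw.log:reshard")
$ radosgw-admin zone set --rgw-zone=default --infile zone.json
$ radosgw-admin period update --commit   (only if you use realms/periods)

The radosgw daemons will probably need a restart to pick up the changed
zone config.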

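Regarding the "failed reading realm info ... Operation not permitted"
error in the quoted mail below: the caps that count are the ones stored
by the monitors, so editing the keyring file alone won't change anything.
If the caps do need adjusting, something like this (a sketch, assuming
radosgw-admin really is running as client.admin) would update them on the
cluster side:

$ ceph auth get client.admin
$ ceph auth caps client.admin mds 'allow *' mon 'allow *' osd 'allow *'

Note that "allow *" on the osd cap already covers every pool and
namespace, so the extra namespace= option shouldn't be needed.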

On 3/2/20 6:52 PM, Robert LeBlanc wrote:
I just upgraded a cluster that I inherited from Jewel to Luminous and am
trying to work through the new warnings/errors.

I got the message about 3 OMAP objects being too big, all of them in the
default.rgw.buckets.index pool. I expected dynamic sharding to kick in, but
no luck after several days. I looked at:

$ radosgw-admin reshard list
[2020-03-02 04:27:22.303601 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000000
2020-03-02 04:27:22.305403 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000001
2020-03-02 04:27:22.307038 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000002
2020-03-02 04:27:22.317932 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000003
2020-03-02 04:27:22.348383 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000004
2020-03-02 04:27:22.349212 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000005
2020-03-02 04:27:22.349853 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000006
2020-03-02 04:27:22.350490 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000007
2020-03-02 04:27:22.351256 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000008
2020-03-02 04:27:22.351843 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000009
2020-03-02 04:27:22.353225 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000010
2020-03-02 04:27:22.353910 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000011
2020-03-02 04:27:22.367161 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000012
2020-03-02 04:27:22.367741 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000013
2020-03-02 04:27:22.368498 7f8bb58b8e40 -1 ERROR: failed to list reshard
log entries, oid=reshard.0000000014
]

Searching on the Internet indicates that Luminous added a new "reshard"
namespace that the radosgw user needs access to. I'm not sure which pool
this namespace was added to (coming from Jewel there are a slew of rgw
pools), and I'm not sure which radosgw user it's talking about. I can't
find a keyring for radosgw-admin, but it works, so I assume it is using
the admin keyring. The permissions are open. I even appended the namespace
option to the admin caps as follows:

[client.admin]
         key = SECRET
         caps mds = "allow *"
         caps mon = "allow *"
         caps osd = "allow * namespace=*"

But I get a new error:

$ radosgw-admin reshard list
2020-03-02 05:31:40.875642 7fdb983f7e40  0 failed reading realm info: ret
-1 (1) Operation not permitted

Any nudge in the right direction would be helpful. I manually sharded the
indexes, but I'd really like to have it done automatically from now on so I
don't have to worry about it.

Thank you,
Robert LeBlanc
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


