Hello,
there is a user in our Ceph cluster who is suddenly not able to write
to one of his buckets.
Reading works fine.
All other buckets work fine.
If we copy the bucket to another bucket on the same cluster, the error
stays: writing is not possible in the new bucket either.
Interesting: if we copy the contents of the bucket to a bucket in
another Ceph cluster, the error is gone.
So now we know how to work around this, but we cannot find the root cause.
I checked the policies, lifecycle and versioning. Nothing. The user has
FULL_CONTROL, with the same settings as on his other buckets, which he
can still write to.
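For reference, a minimal boto3 sketch to dump the ACL and any attached
bucket policy for a broken and a working bucket, to diff them; the
endpoint, credentials and bucket names below are placeholders:

import json
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials -- replace with real RGW values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# "broken-bucket" / "working-bucket" are hypothetical names.
for bucket in ("broken-bucket", "working-bucket"):
    print(f"--- {bucket} ---")
    # ACL grants: both should show FULL_CONTROL for the owner.
    acl = s3.get_bucket_acl(Bucket=bucket)
    print(json.dumps(acl["Grants"], indent=2))
    # A policy attached to only one of the buckets would explain the
    # different behaviour.
    try:
        policy = s3.get_bucket_policy(Bucket=bucket)
        print(json.dumps(json.loads(policy["Policy"]), indent=2))
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchBucketPolicy":
            print("no bucket policy attached")
        else:
            raise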
When raising the debug level, all I can see is something like this
while trying to write to the bucket:
s3:put_obj reading permissions
s3:put_obj init op
s3:put_obj verifying op mask
s3:put_obj verifying op permissions
op->ERRORHANDLER: err_no=-13 new_err_no=-13
cache get: name=default.rgw.log++script.postrequest. : hit (negative entry)
s3:put_obj op status=0
s3:put_obj http status=403
1 ====== req done req=0x7fe8bb60a710 op status=0 http_status=403
latency=0.000000000s ======
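If I read the log right, err_no=-13 is EACCES (permission denied), so
RGW rejects the request in its own permission check before the op even
runs; the script.postrequest line looks like just a negative cache hit
for a Lua postrequest script (i.e. none is installed) and is probably
unrelated. To capture the exact S3 error code behind the 403, a small
boto3 sketch (same placeholder endpoint/credentials/bucket name as
above):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    s3.put_object(Bucket="broken-bucket", Key="write-test", Body=b"test")
    print("write succeeded")
except ClientError as e:
    # The error code distinguishes a plain AccessDenied from e.g. a
    # quota-related rejection.
    err = e.response["Error"]
    print(err["Code"], err.get("Message"))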
I still think there is something with a policy or similar. When we copy
the bucket to another bucket in the same cluster, writing to the new
bucket works at first, while the copy is running, but at some point
during the copy it becomes impossible again.
But what is it?
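Since writes only start failing once the copy has progressed, maybe the
failure point correlates with the bucket's object count or total size;
a quick sketch to count what is in the bucket at the moment writes stop
(placeholders as above):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

count = 0
total_bytes = 0
# Paginate; listings come back in pages of up to 1000 keys.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="broken-bucket"):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]
print(f"{count} objects, {total_bytes} bytes")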
Best,
Malte