Hi everyone, I'm seeing something strange with bucket resharding vs.
bucket listing.
I have a bucket with about 1M objects in it. I increased the bucket
quota from 1M to 2M and manually resharded from 11 to 23 shards
(dynamic resharding is disabled).
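For reference, the manual reshard was done with the standard
radosgw-admin command, along these lines (BUCKET standing in for the
real bucket name):

    # confirm the current shard count (num_shards in the stats output)
    radosgw-admin bucket stats --bucket=BUCKET | grep num_shards
    # reshard the bucket index to 23 shards
    radosgw-admin bucket reshard --bucket=BUCKET --num-shards=23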
Since then, the user can't list objects in some paths. The objects are
there, but the client can't list them.
Take this example path: s3://bucket/dir1/dir2/dir3/dir4
s3cmd can't list the objects in dir2 and dir4, but rclone works and
lists all objects. s3cmd doesn't give any errors; it just lists the
path with no objects in it.
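Concretely, the two clients behave like this ("remote" standing in for
whatever the configured rclone remote is called):

    # exits 0, no errors, but shows no objects under the prefix
    s3cmd ls s3://bucket/dir1/dir2/
    # lists every object under the same prefix just fine
    rclone ls remote:bucket/dir1/dir2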
If I reshard to 1, everything is OK: s3cmd can list all objects in all paths.
If I reshard to 11, s3cmd works with dir2 but can't list the objects in dir4.
If I reshard to 13, s3cmd can't list dir2 or dir4.
If I reshard to 7, s3cmd works with all the paths.
s3cmd always works with dir1 and dir3, regardless of the shard count;
the problem is only with dir2 and dir4.
s3cmd, s3browser and "aws s3 ls" are problematic; "aws s3api
list-objects" and rclone always work.
I did a "bucket check --fix --check-objects", scrub/deep-scrub of the
index pgs, "bi list" looks good to me, charset & etags looks good too,
s3cmd in debug mode doesn't report any error, no xml error, no http-4xx
everything is http-200. I can't find anything suspicious in the
haproxy/beast syslog. resharding process didn't give any error,
everything is HEALTH_OK.
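For the record, those checks were run roughly like this (<pgid>
standing in for the actual index PG ids):

    radosgw-admin bucket check --fix --check-objects --bucket=bucket
    radosgw-admin bi list --bucket=bucket > bi_list.json
    ceph pg scrub <pgid>
    ceph pg deep-scrub <pgid>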
Maybe the next step is to look for an s3cmd/python bug, but I'm curious
whether someone here has ever experienced something like this.
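Before blaming s3cmd, one test worth trying is making s3cmd skip the
delimiter itself, since "s3cmd ls --recursive" requests a flat listing:

    s3cmd ls s3://bucket/dir1/dir2/               # delimiter listing (fails here)
    s3cmd ls --recursive s3://bucket/dir1/dir2/   # flat listing
    # if the recursive form works, the delimiter handling on the RGW
    # side after resharding looks suspect rather than s3cmd itself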
Any thoughts are welcome :-)
Thanks!