Re: rgw bucket inaccessible - appears to be using incorrect index pool?

Thanks Robin,

Of the two issues, this seems to me like it must be #22928.

The majority of index entries for this bucket are in the .rgw.buckets pool, but newer entries have been created in .rgw.buckets.index, so rgw is clearly failing to use the explicit placement pool - and with the index data split across two pools I don't see how resharding could correct this.
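For reference, this is roughly how I've been checking the split for this bucket - the marker default.2049236.2 is the real one, but "mybucket" is just a placeholder for the bucket name:

# what does the bucket instance think its index pool is?
radosgw-admin metadata get bucket:mybucket
radosgw-admin metadata get bucket.instance:mybucket:default.2049236.2 | grep -A5 explicit_placement

# where do the .dir.<marker> index shard objects actually live?
# (listing .rgw.buckets is slow - it's a big pool)
for pool in .rgw.buckets .rgw.buckets.index; do
    echo "== $pool =="
    rados -p "$pool" ls | grep "^\.dir\.default\.2049236\.2"
done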

I can get the object names from the "new" (incorrect) indexes with something like:

for i in $(rados -p .rgw.buckets.index ls - | grep "default.2049236.2"); do rados -p .rgw.buckets.index listomapkeys "$i" | grep "^[a-zA-Z0-9]"; done

Fortunately there are only ~20 in this bucket (the grep is just a crude way to skip what I assume are the multipart parts, which have a non-ASCII first character).

... and these files are downloadable, at least using s3cmd (the minio client fails; it seems to try to check the index first).
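To pull those ~20 objects down before changing anything, I'm planning something along these lines (untested, and again "mybucket" is a placeholder):

bucket=mybucket
marker=default.2049236.2
mkdir -p rescue && cd rescue
for i in $(rados -p .rgw.buckets.index ls - | grep "$marker"); do
    rados -p .rgw.buckets.index listomapkeys "$i"
done | grep "^[a-zA-Z0-9]" | while read -r key; do
    s3cmd get "s3://$bucket/$key"
done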

Once I have these newer files downloaded, then to restore access to the older index I like the suggestion in the issue tracker: create a new placement target in the zone and modify the bucket's placement rule to match. That seems safer than copying the index objects from one pool to the other (though the latter certainly sounds faster and easier!).
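I haven't tried it yet, but I imagine it would look something like this - the placement-id, extra pool and bucket name are all guesses/placeholders on my part, the point being that the new target's index pool matches where the old index objects actually are:

radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=old-style-placement
radosgw-admin zone placement add --rgw-zone=default --placement-id=old-style-placement \
    --data-pool=.rgw.buckets --index-pool=.rgw.buckets --data-extra-pool=.rgw.buckets.extra
radosgw-admin period update --commit    # if running with a realm/period

# then point the bucket instance at the new target:
radosgw-admin metadata get bucket.instance:mybucket:default.2049236.2 > bi.json
# edit "placement_rule" in bi.json to "old-style-placement", then
radosgw-admin metadata put bucket.instance:mybucket:default.2049236.2 < bi.json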

From a quick check, I suspect 40 or so other buckets have the same problem... I will need to check them more closely.

Actually it looks like a lot of the affected buckets were created around 10/2016 - I suspect the placement policies were incorrect for a short time due to confusion over the hammer->jewel upgrade (the realm/period/zonegroup/zone conversion didn't really go smoothly!)
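The rough scan I have in mind for finding them is something like this (assumes jq; the JSON field paths are from memory and may differ between versions, and it ignores resharded buckets where marker and bucket_id diverge):

# dump the .dir.* index objects sitting in the default index pool
rados -p .rgw.buckets.index ls | grep '^\.dir\.' > new-index-objects.txt

radosgw-admin bucket list | jq -r '.[]' | while read -r b; do
    entry=$(radosgw-admin metadata get "bucket:$b")
    marker=$(echo "$entry" | jq -r '.data.bucket.marker')
    bid=$(echo "$entry" | jq -r '.data.bucket.bucket_id')
    idxpool=$(radosgw-admin metadata get "bucket.instance:$b:$bid" \
        | jq -r '.data.bucket_info.bucket.explicit_placement.index_pool')
    # explicit placement says .rgw.buckets, but index objects also exist
    # in .rgw.buckets.index -> probably affected
    if [ "$idxpool" = ".rgw.buckets" ] && grep -qF ".dir.$marker" new-index-objects.txt; then
        echo "possibly affected: $b ($marker)"
    fi
done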

On 02/16/2018 11:39 PM, Robin H. Johnson wrote:
On Fri, Feb 16, 2018 at 07:06:21PM -0600, Graham Allan wrote:
[snip great debugging]

This seems similar to two open issues; it could be either of them depending
on how old that bucket is.
http://tracker.ceph.com/issues/22756
http://tracker.ceph.com/issues/22928

- I have a mitigation posted to 22756.
- There's a PR posted for 22928, but it'll probably only be in v12.2.4.


--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


