Re: "Lost" buckets on radosgw

On Mon, Nov 21, 2016 at 3:33 PM, Graham Allan <gta@xxxxxxx> wrote:
>
>
> On 11/21/2016 05:23 PM, Yehuda Sadeh-Weinraub wrote:
>>
>>
>> Seems like the bucket was sharded, but for some reason the bucket
>> instance info does not specify that. I don't know why that would
>> happen, but maybe running mixed versions could be the culprit. You can
>> try modifying the bucket instance info for this bucket, changing the
>> num_shards param to 32.
>>
>> Yehuda
>
>
> Yes! That seems to do it...
>
> And of course in retrospect I can see num_shards = 32 for my working bucket
> but 0 for this one.
>
> so I did:
>>
>> # radosgw-admin metadata get bucket.instance:tcga:default.712449.19 > tcga.json
>
>
> edit "num_shards": 32
>
>> # radosgw-admin metadata put bucket.instance:tcga:default.712449.19 < tcga.json
>
>
> and this bucket is now visible again! Thanks so much!
>
> I wonder how this happened. It looks like about 25 of our ~680 buckets
> are affected (a sketch of a sweep to find them follows below).
>
>
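For anyone hitting the same problem: the num_shards value Graham edited
lives under bucket_info in the metadata dump. A Jewel-era bucket.instance
dump looks roughly like this (heavily abbreviated; exact fields vary by
version):

    {
        "key": "bucket.instance:tcga:default.712449.19",
        "ver": { ... },
        "data": {
            "bucket_info": {
                "bucket": { "name": "tcga", ... },
                "num_shards": 32,
                ...
            },
            "attrs": [ ... ]
        }
    }

Only num_shards changes (from 0 to 32 here); the value has to match the
number of .dir.<marker>.<n> index objects that actually exist for the
bucket in the index pool.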
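And a rough sketch of a sweep to find the other affected buckets,
assuming jq is available and the Jewel-era JSON layout above (this script
is illustrative, not from the thread):

    #!/bin/sh
    # Flag buckets whose instance info records num_shards = 0.
    # Legitimately unsharded buckets will also match, so cross-check
    # against the .dir.<marker>.<n> objects in the index pool before
    # editing any metadata.
    for b in $(radosgw-admin bucket list | jq -r '.[]'); do
        id=$(radosgw-admin bucket stats --bucket="$b" | jq -r '.id')
        n=$(radosgw-admin metadata get "bucket.instance:${b}:${id}" \
              | jq '.data.bucket_info.num_shards')
        [ "$n" = "0" ] && echo "$b (instance $id): num_shards = 0"
    done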

This could be a secondary issue: maybe trying to fix the buckets after
the original problem is what caused it (perhaps radosgw-admin bucket
check broke it?).

Yehuda
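For reference, bucket check can be run read-only first; only --fix
modifies the index. A cautious sequence (flags as in Jewel's
radosgw-admin; the bucket name is from the thread) might look like:

    # report index inconsistencies without changing anything
    radosgw-admin bucket check --bucket=tcga

    # only after reviewing the output, attempt a repair
    radosgw-admin bucket check --bucket=tcga --check-objects --fix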

> Graham
> --
> Graham Allan
> Minnesota Supercomputing Institute - gta@xxxxxxx