Re: Still have orphaned rgw shadow files, ceph 0.94.3

The bucket index objects are most likely in the .rgw.buckets.index pool.

Yehuda

On Mon, Aug 31, 2015 at 3:27 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
> Good call, thanks!
>
> Is there any risk of also deleting parts of the bucket index? I'm not
> sure what the objects for the index itself look like, or if they are
> in the .rgw.buckets pool.
>
>
> On Mon, Aug 31, 2015 at 3:23 PM, Yehuda Sadeh-Weinraub
> <yehuda@xxxxxxxxxx> wrote:
>> Make sure you include the trailing underscore too, e.g., "default.8873277.32_".
>> Otherwise you could erase objects you didn't intend to, such as ones
>> whose names start with "default.8873277.320".
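The point about the trailing underscore can be sketched with a quick grep test. The object names below are invented; only the marker prefix comes from this thread. Anchoring the pattern and keeping the underscore ensures a hypothetical "default.8873277.320" bucket's objects are not matched:

```shell
# Two invented object names: one from the target bucket (marker
# default.8873277.32), one from a hypothetical bucket whose marker merely
# starts with the same digits. Anchoring on "32_" matches only the first.
printf '%s\n' \
    'default.8873277.32__shadow_.abc_1' \
    'default.8873277.320__shadow_.xyz_2' \
  | grep '^default\.8873277\.32_'
```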
>>
>> On Mon, Aug 31, 2015 at 3:20 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
>>> Ok. I'm not too familiar with the inner workings of RGW, but I would
>>> assume that for a bucket with these parameters:
>>>
>>>    "id": "default.8873277.32",
>>>    "marker": "default.8873277.32",
>>>
>>> That it would be the only bucket using the files that start with
>>> "default.8873277.32"
>>>
>>> default.8873277.32__shadow_.OkYjjANx6-qJOrjvdqdaHev-LHSvPhZ_15
>>> default.8873277.32__shadow_.a2qU3qodRf_E5b9pFTsKHHuX2RUC12g_2
>>>
>>>
>>>
>>> On Mon, Aug 31, 2015 at 2:51 PM, Yehuda Sadeh-Weinraub
>>> <yehuda@xxxxxxxxxx> wrote:
>>>> As long as you're 100% sure that the prefix is only being used for the
>>>> specific bucket that was previously removed, then it is safe to remove
>>>> these objects. But please do double check and make sure that there's
>>>> no other bucket that matches this prefix somehow.
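One way to do that double check, as a minimal sketch: scan every bucket's marker for look-alike prefixes. The inline JSON here is a stand-in for real `radosgw-admin bucket stats` output, and "othercache01" is an invented bucket added to show what a collision would look like:

```shell
# Stand-in for: radosgw-admin bucket stats
stats='[{"bucket":"globalcache01","marker":"default.8873277.32"},
        {"bucket":"othercache01","marker":"default.8873277.320"}]'
echo "$stats" | python3 -c '
import json, sys
prefix = "default.8873277.32"
for b in json.load(sys.stdin):
    # the exact marker is the bucket being cleaned; startswith flags look-alikes
    if b["marker"] != prefix and b["marker"].startswith(prefix):
        print("collision: %s (%s)" % (b["bucket"], b["marker"]))
'
```

An empty result would mean no other bucket's marker shares the prefix.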
>>>>
>>>> Yehuda
>>>>
>>>> On Mon, Aug 31, 2015 at 2:42 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
>>>>> No input, eh? (or maybe TL;DR for everyone)
>>>>>
>>>>> Short version: presuming the bucket index shows blank/empty, which it
>>>>> does and is fine, would manually deleting the rados objects whose
>>>>> prefix matches the former bucket's ID cause any problems?
>>>>>
>>>>> thanks,
>>>>>
>>>>> -Ben
>>>>>
>>>>> On Fri, Aug 28, 2015 at 4:22 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
>>>>>> Ceph 0.93->94.2->94.3
>>>>>>
>>>>>> I noticed the pool's used-data amount is about twice the buckets' used-data count.
>>>>>>
>>>>>> This bucket was emptied long ago. It has zero objects:
>>>>>>     "globalcache01",
>>>>>>     {
>>>>>>         "bucket": "globalcache01",
>>>>>>         "pool": ".rgw.buckets",
>>>>>>         "index_pool": ".rgw.buckets.index",
>>>>>>         "id": "default.8873277.32",
>>>>>>         "marker": "default.8873277.32",
>>>>>>         "owner": "...",
>>>>>>         "ver": "0#12348839",
>>>>>>         "master_ver": "0#0",
>>>>>>         "mtime": "2015-03-08 11:44:11.000000",
>>>>>>         "max_marker": "0#",
>>>>>>         "usage": {
>>>>>>             "rgw.none": {
>>>>>>                 "size_kb": 0,
>>>>>>                 "size_kb_actual": 0,
>>>>>>                 "num_objects": 0
>>>>>>             },
>>>>>>             "rgw.main": {
>>>>>>                 "size_kb": 0,
>>>>>>                 "size_kb_actual": 0,
>>>>>>                 "num_objects": 0
>>>>>>             }
>>>>>>         },
>>>>>>         "bucket_quota": {
>>>>>>             "enabled": false,
>>>>>>             "max_size_kb": -1,
>>>>>>             "max_objects": -1
>>>>>>         }
>>>>>>     },
>>>>>>
>>>>>>
>>>>>>
>>>>>> bucket check shows nothing:
>>>>>>
>>>>>> 16:07:09 root@sm-cephrgw4 ~ $ radosgw-admin bucket check
>>>>>> --bucket=globalcache01 --fix
>>>>>> []
>>>>>> 16:07:27 root@sm-cephrgw4 ~ $ radosgw-admin bucket check
>>>>>> --check-head-obj-locator --bucket=globalcache01 --fix
>>>>>> {
>>>>>>     "bucket": "globalcache01",
>>>>>>     "check_objects": [
>>>>>> ]
>>>>>> }
>>>>>>
>>>>>>
>>>>>> However, I see a lot of data for it on an OSD (all shadow files with
>>>>>> escaped underscores):
>>>>>>
>>>>>> [root@sm-cld-mtl-008 current]# find . -name default.8873277.32* -print
>>>>>> ./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/default.8873277.32\u\ushadow\u.Tos2Ms8w2BiEG7YJAZeE6zrrc\uwcHPN\u1__head_D886E961__c
>>>>>> ./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_1/default.8873277.32\u\ushadow\u.Aa86mlEMvpMhRaTDQKHZmcxAReFEo2J\u1__head_4A71E961__c
>>>>>> ./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_5/default.8873277.32\u\ushadow\u.KCiWEa4YPVaYw2FPjqvpd9dKTRBu8BR\u17__head_00B5E961__c
>>>>>> ./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_8/default.8873277.32\u\ushadow\u.A2K\u2H1XKR8weiSwKGmbUlsCmEB9GDF\u32__head_42E8E961__c
>>>>>> <snip>
>>>>>>
>>>>>> -bash-4.1$ rados -p .rgw.buckets ls | egrep '8873277\.32.+'
>>>>>> default.8873277.32__shadow_.pvaIjBfisb7pMABicR9J2Bgh8JUkEfH_47
>>>>>> default.8873277.32__shadow_.Wr_dGMxdSRHpoeu4gsQZXJ8t0I3JI7l_6
>>>>>> default.8873277.32__shadow_.WjijDxYhLFMUYdrMjeH7GvTL1LOwcqo_3
>>>>>> default.8873277.32__shadow_.3lRIhNePLmt1O8VVc2p5X9LtAVfdgUU_1
>>>>>> default.8873277.32__shadow_.VqF8n7PnmIm3T9UEhorD5OsacvuHOOy_16
>>>>>> default.8873277.32__shadow_.Jrh59XT01rIIyOdNPDjCwl5Pe1LDanp_2
>>>>>> <snip>
>>>>>>
>>>>>> Is there still a bug in the fix-obj-locator command, perhaps? I suppose
>>>>>> I can just do something like:
>>>>>>
>>>>>>    rados -p .rgw.buckets cleanup --prefix default.8873277.32
>>>>>>
>>>>>> since I want to destroy the bucket anyway. But if this affects other
>>>>>> buckets, I may want to clean them up a better way.
>>>>>>
>>>>>> -Ben
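A note for the archive: as far as I know, `rados cleanup` is aimed at objects left behind by `rados bench`, so a generic prefix purge is usually done with ls + grep + rm instead. A minimal dry-run sketch, where `list_objects` stands in for `rados -p .rgw.buckets ls` and the second object name is invented:

```shell
# Dry-run sketch: list_objects stands in for `rados -p .rgw.buckets ls`.
# The first name is from this thread; the 320-prefixed one is invented to
# show what the anchored pattern skips.
list_objects() {
  printf '%s\n' \
    'default.8873277.32__shadow_.pvaIjBfisb7pMABicR9J2Bgh8JUkEfH_47' \
    'default.8873277.320__shadow_.not_this_bucket_1'
}
list_objects | grep '^default\.8873277\.32_' | while read -r obj; do
  # real deletion would be: rados -p .rgw.buckets rm "$obj"
  echo "would remove: $obj"
done
```

Reviewing the "would remove" list before swapping in the real `rados rm` line is the safety net here.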
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


