Still have orphaned rgw shadow files, ceph 0.94.3

Upgrade history: Ceph 0.93 -> 0.94.2 -> 0.94.3

I noticed that the space used in my .rgw.buckets pool is about twice
the total usage reported by bucket stats.
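
This is roughly how I compared the two numbers (a sketch; it assumes
"bucket stats" with no --bucket prints a JSON array for all buckets,
and uses the Python 2 that ships alongside these releases):

    # Pool-side usage, as the cluster accounts it:
    ceph df | grep .rgw.buckets
    # RGW-side usage: sum size_kb_actual across all buckets:
    radosgw-admin bucket stats | python -c 'import json,sys; print sum(b["usage"].get("rgw.main",{}).get("size_kb_actual",0) for b in json.load(sys.stdin))'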

This bucket was emptied long ago; bucket stats shows zero objects:
    "globalcache01",
    {
        "bucket": "globalcache01",
        "pool": ".rgw.buckets",
        "index_pool": ".rgw.buckets.index",
        "id": "default.8873277.32",
        "marker": "default.8873277.32",
        "owner": "...",
        "ver": "0#12348839",
        "master_ver": "0#0",
        "mtime": "2015-03-08 11:44:11.000000",
        "max_marker": "0#",
        "usage": {
            "rgw.none": {
                "size_kb": 0,
                "size_kb_actual": 0,
                "num_objects": 0
            },
            "rgw.main": {
                "size_kb": 0,
                "size_kb_actual": 0,
                "num_objects": 0
            }
        },
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
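
Listing the bucket index directly should likewise come back empty; a
quick sanity check:

    radosgw-admin bucket list --bucket=globalcache01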



bucket check shows nothing (perhaps not surprising, if the check only
walks the now-empty bucket index):

16:07:09 root@sm-cephrgw4 ~ $ radosgw-admin bucket check --bucket=globalcache01 --fix
[]
16:07:27 root@sm-cephrgw4 ~ $ radosgw-admin bucket check --check-head-obj-locator --bucket=globalcache01 --fix
{
    "bucket": "globalcache01",
    "check_objects": [
]
}


However, I see a lot of data for it on an OSD (all shadow files, with
underscores escaped as \u in the filestore filenames):

[root@sm-cld-mtl-008 current]# find . -name default.8873277.32* -print
./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/default.8873277.32\u\ushadow\u.Tos2Ms8w2BiEG7YJAZeE6zrrc\uwcHPN\u1__head_D886E961__c
./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_1/default.8873277.32\u\ushadow\u.Aa86mlEMvpMhRaTDQKHZmcxAReFEo2J\u1__head_4A71E961__c
./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_5/default.8873277.32\u\ushadow\u.KCiWEa4YPVaYw2FPjqvpd9dKTRBu8BR\u17__head_00B5E961__c
./12.161_head/DIR_1/DIR_6/DIR_9/DIR_E/DIR_8/default.8873277.32\u\ushadow\u.A2K\u2H1XKR8weiSwKGmbUlsCmEB9GDF\u32__head_42E8E961__c
<snip>
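
To get a feel for how much space the orphans occupy on this OSD,
something like this should work (a sketch; GNU find and du, run from
the same "current" directory):

    find . -name 'default.8873277.32*' -print0 | du -ch --files0-from=- | tail -n 1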

-bash-4.1$ rados -p .rgw.buckets ls | egrep '8873277\.32.+'
default.8873277.32__shadow_.pvaIjBfisb7pMABicR9J2Bgh8JUkEfH_47
default.8873277.32__shadow_.Wr_dGMxdSRHpoeu4gsQZXJ8t0I3JI7l_6
default.8873277.32__shadow_.WjijDxYhLFMUYdrMjeH7GvTL1LOwcqo_3
default.8873277.32__shadow_.3lRIhNePLmt1O8VVc2p5X9LtAVfdgUU_1
default.8873277.32__shadow_.VqF8n7PnmIm3T9UEhorD5OsacvuHOOy_16
default.8873277.32__shadow_.Jrh59XT01rIIyOdNPDjCwl5Pe1LDanp_2
<snip>
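
Counting the leftovers from the rados side (a sketch; the pattern
deliberately matches only this bucket's shadow objects):

    rados -p .rgw.buckets ls | grep -c '^default\.8873277\.32__shadow_'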

Is there perhaps still a bug in the fix-obj-locator command? Since I
want to destroy the bucket anyway, I suppose I can just do something
like:

   rados -p .rgw.buckets cleanup --prefix default.8873277.32

But if this affects other buckets, I may want to clean those up in a
better way.
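
For what it's worth: I believe rados cleanup was meant for rados bench
leftovers, so if it refuses the prefix, an explicit list-and-remove
loop does the same job. A sketch, worth a dry run with echo in place
of rm first:

    rados -p .rgw.buckets ls \
      | grep '^default\.8873277\.32__shadow_' \
      | while read -r obj; do
          rados -p .rgw.buckets rm "$obj"
        done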

-Ben
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


