Bucket reporting content inconsistently


 



Hi all,

 

We have recently upgraded to 10.2.10 in preparation for our upcoming upgrade to Luminous, and I have been attempting to remove a bucket. Tools such as s3cmd still list files in it, which I verified by checking with bi list as shown below:

 

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 bi list --bucket='bucketnamehere' | grep -i "\"idx\":" | wc -l
3278
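For comparison, the regular bucket listing (which also reads through the index) can be counted the same way; this is only a sketch, and the '"name":' key is my assumption about the JSON fields that bucket list prints in this release:

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 bucket list --bucket='bucketnamehere' | grep -c '"name":'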

 

However, when I attempt to delete the bucket and purge its objects, the bucket does not appear to be recognised:

 

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 bucket rm --bucket= bucketnamehere --purge-objects
2018-05-10 14:11:05.393851 7f0ab07b6a00 -1 ERROR: unable to remove bucket(2) No such file or directory
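Since the "No such file or directory" error suggests the rm path cannot resolve the bucket, the bucket entrypoint metadata can be checked with something like the following (a sketch only; the 'bucket:<name>' metadata key format is assumed):

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 metadata get bucket:bucketnamehere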

 

Checking the bucket stats, the bucket does appear to be reporting no content, and repeating the bi list count above still returns the same 3278 figure:

 

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 bucket stats --bucket="bucketnamehere"
{
    "bucket": "bucketnamehere",
    "pool": ".rgw.buckets",
    "index_pool": ".rgw.buckets.index",
    "id": "default.28142894.1",
    "marker": "default.28142894.1",
    "owner": "16355",
    "ver": "0#5463545,1#5483686,2#5483484,3#5474696,4#5479052,5#5480339,6#5469460,7#5463976",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0",
    "mtime": "2015-12-08 12:42:26.286153",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#",
    "usage": {
        "rgw.main": {
            "size_kb": 0,
            "size_kb_actual": 0,
            "num_objects": 0
        },
        "rgw.multimeta": {
            "size_kb": 0,
            "size_kb_actual": 0,
            "num_objects": 0
        }
    },
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}
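The header stats for this bucket instance should also be visible in the bucket instance metadata; a sketch of that check, using the id from the stats output above (the 'bucket.instance:<name>:<id>' key format is my assumption), would be:

root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 metadata get bucket.instance:bucketnamehere:default.28142894.1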

 

I have attempted a bucket index check and fix on this; however, it does not appear to have made a difference, and no fixes or errors were reported. Does anyone have any advice on how to proceed with removing this content? At this stage I am not too concerned if the method used generates orphans, as we will shortly be running a large orphan scan after our upgrade to Luminous. Cluster health otherwise reports normal.
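In case it helps, the index shard objects themselves can be inspected directly in the index pool; this is a rough sketch, and the '.dir.<marker>.<shard>' object naming is my assumption based on the marker shown in the stats above:

root@ceph-rgw-1:~# rados -p .rgw.buckets.index ls | grep default.28142894.1
root@ceph-rgw-1:~# rados -p .rgw.buckets.index listomapkeys .dir.default.28142894.1.0 | wc -l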


Thanks

Sean Redmond

