Delete pool .rgw.bucket and objects within it

Hi Irek,

I stopped radosgw, then deleted all of the default radosgw pools listed below,
waited for Ceph to finish deleting the objects, and re-created the pools. I
then stopped and started the whole cluster, including radosgw. Now the cluster
is very unstable: OSDs are frequently marked down or crashing. Please see part
of the log for osd.15 at http://pastebin.com/r0NLNDjf
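
For reference, a minimal sketch of what this looked like, assuming the stock
radosgw init script (its name may differ by distro); the same delete/create
pair was repeated for every pool listed below, and the pg count of 8 is only
illustrative:

    service radosgw stop
    ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it
    ceph osd pool create .rgw.buckets 8
    service radosgw start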

I upgraded Ceph from Emperor (0.72.2) to Firefly (ceph version
0.80-rc1-40-g491cfdb).

Does deleting pools impact the performance of the cluster, or did I do
something wrong?

POOLS:
    NAME                   ID     USED       %USED     OBJECTS
    .rgw                   23     642        0         4
    .rgw.root              24     822        0         3
    .rgw.control           25     0          0         8
    .rgw.gc                26     0          0         64
    .users.uid             27     607        0         2
    .users.email           28     12         0         1
    .users                 29     24         0         2
    .users.swift           30     12         0         1
    .rgw.buckets.index     31     0          0         2
    .rgw.buckets           32     22763M     0.06      46625
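
(The pool usage listing above is the POOLS section of ceph df output.)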

Best regards,
Thanh Tran


On Thu, May 8, 2014 at 12:17 AM, Thanh Tran <thanhtv26 at gmail.com> wrote:

> Thanks Irek, it worked as you said.
>
> Best regards,
> Thanh Tran
>
>
> On Wed, May 7, 2014 at 2:15 PM, Irek Fasikhov <malmyzh at gmail.com> wrote:
>
>> Yes, it deletes all the objects stored in the pool.
>>
>>
>> 2014-05-07 6:58 GMT+04:00 Thanh Tran <thanhtv26 at gmail.com>:
>>
>>> Hi,
>>>
>>> If I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket
>>> --yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this
>>> delete the pool and its objects, and clean up the data on the OSDs?
>>>
>>> Best regards,
>>> Thanh Tran
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users at lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>>
>> --
>> With respect, Fasikhov Irek Nurgayazovich
>> Tel.: +79229045757
>>
>
>