Re: Removing empty placement groups / empty objects

On Wed, Jul 1, 2015 at 5:47 PM, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi,
>
>
> On 07/01/2015 06:09 PM, Gregory Farnum wrote:
>>
>> On Mon, Jun 29, 2015 at 1:44 PM, Burkhard Linke
>> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> I've noticed that a number of placement groups in our setup contain
>>> objects, but no actual data (ceph pg dump | grep remapped during a hard
>>> disk replacement operation):
>>>
>>> 7.616  2636  0  0  5272  0  4194304  3003  3003  active+remapped+wait_backfill  2015-06-29 13:43:28.716687  159913'33987  160091:526298  [30,6,36]  30  [30,36,3]  30  153699'33892  2015-06-29 07:30:16.030470  149573'32565  2015-06-23 07:00:21.948563
>>> 7.60a  2696  0  0  5392  0  0  3046  3046  active+remapped+wait_backfill  2015-06-29 13:43:09.847541  159919'34627  160091:388532  [2,36,3]  2  [2,36,31]  2  153669'34496  2015-06-28 20:09:51.850005  153669'34496  2015-06-28 20:09:51.850005
>>> 7.60d  2694  0  2  5388  0  0  3026  3026  active+remapped+wait_backfill  2015-06-29 13:43:27.202928  159939'33708  160091:392535  [31,6,38]  31  [31,38,3]  31  152584'33610  2015-06-29 07:11:37.484500  152584'33610  2015-06-29 07:11:37.484500
>>> ....
>>>
>>> Pool 7 was used as a data pool in CephFS, but almost all files stored in
>>> that pool have been removed:
>>> ~# rados df
>>> pool name          KB      objects  clones  degraded  unfound  rd       rd KB       wr        wr KB
>>> cephfs_test_data   940066  5537838  0       202       0        2022238  1434381904  21823705  3064326550
>>>
>>> Is it possible to remove these "zombie" objects, since they influence
>>> maintenance operations like backfilling or recovery?
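A minimal sketch for enumerating all such PGs in pool 7, assuming the
Hammer-era "ceph pg dump pgs" column layout visible above (field 2 = object
count, field 7 = byte count; verify the field numbers against the header
line on other releases):

    # pool-7 PGs that report objects but no bytes
    ceph pg dump pgs 2>/dev/null | \
        awk '$1 ~ /^7\./ && $2 > 0 && $7 == 0 {print $1, $2, $7}'

Against the output quoted above this picks out PGs like 7.60a and 7.60d,
which carry thousands of objects but zero bytes.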
>>
>> That's odd; the actual objects should have been deleted (not just
>> truncated). Have you used this pool for anything else (CephFS metadata
>> storage, RGW bucket indexes, etc.)? What version of Ceph are you
>> running, and what workload led to this issue?
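For reference, a quick way to double-check what the pool is wired up to on a
0.94 cluster, sketched here with ceph mds dump (which shows the pools CephFS
uses) plus a sample of object names, whose patterns hint at which client
wrote them:

    # which pools does CephFS actually use?
    ceph mds dump 2>/dev/null | egrep 'metadata_pool|data_pools'
    # sample object names: CephFS data objects look like "<hex inode>.<block>",
    # RGW bucket index objects like ".dir.<bucket id>", and so on
    rados -p cephfs_test_data ls | head -20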
>
> Ceph version is 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3), running
> on Ubuntu 14.04 with kernel 3.13.0-55-generic.
>
> The cephfs_test_data pool has only been used as a CephFS data pool in a
> backup scenario using rsync. It contained a mix of files resulting from
> several rsync attempts from a failing NAS device. Most files were small (in
> the kilobyte range). The total number of files in the pool was about 10-15
> million before almost all of them were removed; the total size of the pool
> was about 10 TB.
>
> Since I want to remove the pool completely, I'm currently trying to locate
> the remaining files in the filesystem, but that's a low-priority task at
> the moment.
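One possible shortcut for locating those files, sketched under the
assumption that the pool only ever held CephFS file data and that the
filesystem is mounted at /mnt/cephfs (a placeholder path): CephFS names data
objects "<inode in hex>.<block index>", so the leftover object names can be
mapped back to inode numbers and looked up with find:

    # distinct inode prefixes of the leftover objects
    rados -p cephfs_test_data ls | cut -d. -f1 | sort -u > /tmp/leftover_inodes
    # one full tree walk per inode -- slow, but it names the surviving files
    while read h; do
        find /mnt/cephfs -inum $((16#$h)) 2>/dev/null
    done < /tmp/leftover_inodes

Prefixes that no longer resolve to a file correspond to objects left behind
by already-deleted files.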

Hmm, I wonder if this is a RADOS issue with misplaced PGs. If you've
still got the cluster around, can you look in the store for each of
the active OSDs holding these PGs and see if the objects are really
zero-sized or not?
-Greg
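For reference, a minimal sketch of that check against one of the PGs quoted
above, assuming the default Hammer FileStore layout under /var/lib/ceph/osd
(adjust the OSD id, host and PG id to the acting sets reported by ceph pg
map):

    # confirm which OSDs currently hold the PG
    ceph pg map 7.616
    # on the host of e.g. osd.30: count empty vs. non-empty objects in the PG directory
    find /var/lib/ceph/osd/ceph-30/current/7.616_head -type f -size 0 | wc -l
    find /var/lib/ceph/osd/ceph-30/current/7.616_head -type f ! -size 0 | wc -l

If nearly everything under the _head directory is zero-length, the objects
really are empty leftovers rather than misplaced data.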
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


