Re: CephFS: removing default data pool

On Mon, Sep 28, 2015 at 11:08 AM, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi,
>
> I created a CephFS with a certain data pool some time ago (using the
> firefly release). I've added additional pools in the meantime and moved all
> data to them. But a large number of empty (or very small) objects are left
> in the pool according to 'ceph df':
>
>     cephfs_test_data         7        918M         0 45424G      6751721
>
> The number of objects changes when files are added to or deleted from CephFS.
>
> Does the first data pool play a special role and store additional
> information? How can I remove this pool? In the current configuration the
> pool is a burden both on recovery/backfilling (many objects) and on
> performance due to object creation/deletion.

You can't remove it.  The reason is an implementation quirk for hard
links: even when an inode's layout points to another data pool, its
backtrace is still written to the default data pool (whichever pool was
added first is the default).  That way, when we resolve a hard link, we
don't have to chase through every data pool looking for the inode; we can
just look it up in the default data pool.
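
To make that concrete, here's a rough sketch using the layout vxattrs and
the rados CLI.  The second pool (cephfs_other_data), mount point and file
names below are made up; the backtrace itself is stored in an xattr called
"parent" on the inode's first object:

  # Point new files under a directory at another data pool (the pool must
  # already be attached to the filesystem, e.g. via 'ceph mds add_data_pool')
  $ setfattr -n ceph.dir.layout.pool -v cephfs_other_data /mnt/cephfs/mydir

  # Data for a new file lands in cephfs_other_data, but its backtrace is
  # still written to an (otherwise empty) object in the first/default data
  # pool.  Object names are <inode in hex>.<block in hex>; note the MDS
  # flushes backtraces asynchronously, so the object may appear with a delay.
  $ ino=$(printf '%x' $(stat -c %i /mnt/cephfs/mydir/somefile))
  $ rados -p cephfs_test_data listxattr $ino.00000000
  $ rados -p cephfs_test_data getxattr $ino.00000000 parent > bt.bin
  $ ceph-dencoder type inode_backtrace_t import bt.bin decode dump_json

The decoded backtrace is just the chain of ancestor dentries, which is what
the MDS needs in order to locate the inode by number.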

Clearly this isn't optimal, but that's how it works right now.  For each
file you create, roughly a few hundred bytes get written to an xattr on an
object in the default data pool.
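
If you want to put a rough number on that overhead for your own pool,
something like this should do it (pick any object name from the 'rados ls'
output; the pool name is the one from your 'ceph df' above):

  # Grab an object name from the default data pool...
  $ rados -p cephfs_test_data ls | head
  # ...and count the bytes in its backtrace xattr
  $ rados -p cephfs_test_data getxattr <some-object> parent | wc -c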

John