CephFS: duplicate objects in 2 pools

On Tue, Feb 21, 2017 at 5:20 PM, Florent B <florent at coppint.com> wrote:
> Hi everyone,
>
> I use a Ceph Jewel cluster.
>
> I have a CephFS with some directories at the root, on which I have defined
> some layouts:
>
> # getfattr -n ceph.dir.layout maildata1/
> # file: maildata1/
> ceph.dir.layout="stripe_unit=1048576 stripe_count=3 object_size=4194304
> pool=cephfs.maildata1"
>
>
> My problem is that the default "data" pool contains 44904 EMPTY objects
> (the pool's size is 0), whose names duplicate objects in my
> cephfs.maildata1 pool.

This is normal: the MDS stores a "backtrace" for each file, which
allows it to find the file by inode number when necessary.  Usually,
when files are in the first data pool, the backtrace is stored along
with the data.  When your files are in a different data pool, the
backtrace is stored on an otherwise-empty object in the first data
pool.
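
If you want to see what one of those backtrace objects actually carries,
here is a rough sketch (this assumes the Jewel behaviour described above,
i.e. the backtrace living in an xattr named "parent" on the object in the
first data pool, and a host with ceph-dencoder installed):

# rados -p data listxattr 10000dea15c.00000000
# rados -p data getxattr 10000dea15c.00000000 parent > /tmp/parent.bin
# ceph-dencoder type inode_backtrace_t import /tmp/parent.bin decode dump_json

listxattr should show "parent" among the xattrs, and the decoded JSON gives
the inode number plus the chain of ancestor dentries (file name and parent
directories) that the MDS walks to turn an inode number back into a path.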

Cheers,
John

> An example:
>
> # stat
> maildata1/domain.net/test5/mdbox/mailboxes/1319/dbox/dovecot.index.cache
>   File:
> 'maildata1/domain.net/test5/mdbox/mailboxes/1319/dbox/dovecot.index.cache'
>   Size: 728           Blocks: 2          IO Block: 1048576 regular file
> Device: 54h/84d    Inode: 1099526218076  Links: 1
>
> # getfattr -n ceph.file.layout
> maildata1/domain.net/test5/mdbox/mailboxes/1319/dbox/dovecot.index.cache
> # file:
> maildata1/domain.net/test5/mdbox/mailboxes/1319/dbox/dovecot.index.cache
> ceph.file.layout="stripe_unit=1048576 stripe_count=3 object_size=4194304
> pool=cephfs.maildata1"
>
> 1099526218076 = 10000dea15c in hex:
>
> # rados -p cephfs.maildata1 ls | grep "10000dea15c"
> 10000dea15c.00000000
>
> # rados -p data ls | grep "10000dea15c"
> 10000dea15c.00000000
>
> The object in the maildata1 pool contains the file data, whereas the one in
> the data pool is empty:
>
> # rados -p data get 10000dea15c.00000000 - | wc -c
> 0
>
> # rados -p cephfs.maildata1 get 10000dea15c.00000000 - | wc -c
> 728
>
> Clients accessing these directories do not have permission on the "data"
> pool, which is normal:
>
> # ceph auth get client.maildata1
> exported keyring for client.maildata1
> [client.maildata1]
>     key = XXXX
>     caps mds = "allow r, allow rw path=/maildata1"
>     caps mon = "allow r"
>     caps osd = "allow * pool=cephfs.maildata1"
>
> Have you ever seen this? What could be the cause?
>
> Thank you for your help.
>
> Florent
>
>

