Re: CephFS: effects of using hard links

On Tue, Mar 19, 2019 at 9:43 AM Erwin Bogaard <erwin.bogaard@xxxxxxxxx> wrote:
>
> Hi,
>
>
>
> For a number of applications we use, there is a lot of file duplication. This wastes precious storage space, which I would like to avoid.
>
> When using a local disk, I can use hard links to make all duplicate files point to the same inode (using “rdfind”, for example).
>
>
>
> As there isn’t any deduplication in Ceph(FS), I’m wondering if I can use hard links on CephFS in the same way as I do on ‘regular’ file systems like ext4 and xfs.
>
> 1. Is it advisable to use hard links on CephFS? (It isn’t mentioned in the ‘best practices’: http://docs.ceph.com/docs/master/cephfs/app-best-practices/)
>
> 2. Is there any performance (dis)advantage?
>
> 3. When using hard links, are there actual space savings, or is there some trickery happening?
>
> 4. Are there any issues (other than the usual hard-link gotchas) I need to keep in mind when combining hard links with CephFS?
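[For illustration, a minimal sketch of the hardlink-based dedup that
rdfind automates. This is hypothetical, not rdfind's actual code: it
assumes byte-identical files (same SHA-256 digest) are safe to merge
onto one inode, and the root path is taken from the command line.]

#!/usr/bin/env python3
"""Sketch: merge byte-identical regular files into shared inodes."""
import hashlib
import os
import sys

def digest(path, bufsize=1 << 20):
    """SHA-256 of the file contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def dedup(root):
    first_seen = {}  # content digest -> canonical path
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue  # only regular files, skip symlinks
            canon = first_seen.setdefault(digest(path), path)
            if canon != path and not os.path.samefile(canon, path):
                os.unlink(path)       # remove the duplicate copy...
                os.link(canon, path)  # ...and relink it to the shared inode

if __name__ == "__main__":
    dedup(sys.argv[1])

[In practice prefer rdfind itself, which compares sizes and first/last
bytes before hashing, rather than a sketch like this.]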

The only issue we've seen is this: if you hard link b to a, then rm
a, and then never stat b, the inode is moved into the MDS's "stray"
directory. By default there is a limit of 1 million stray entries --
so if you accumulate files in this state, eventually users will be
unable to rm any files, until you stat the `b` files (which lets the
MDS reintegrate the stray inodes).

-- dan
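
[To make the workaround above concrete, a sketch that walks a tree and
stat()s every file, so surviving links to stray inodes get looked up
and reintegrated. The mount point is a hypothetical example; a plain
`find /mnt/cephfs -type f -exec stat {} +` does the same.]

#!/usr/bin/env python3
"""Sketch: stat every file under a tree to trigger stray reintegration."""
import os

def stat_all(root="/mnt/cephfs"):  # hypothetical CephFS mount point
    for dirpath, _, names in os.walk(root):
        for name in names:
            try:
                # Looking up the surviving link lets the MDS
                # reintegrate the corresponding stray inode.
                os.stat(os.path.join(dirpath, name))
            except OSError:
                pass  # file disappeared mid-walk; skip it

if __name__ == "__main__":
    stat_all()

[The num_strays counter in `ceph daemon mds.<id> perf dump` should
drop as the strays are reintegrated.]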


>
>
>
> Thanks
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
