Re: Hammer patch on Wheezy + CephFS leaking space?


 



On Tue, Jun 27, 2017 at 9:52 AM, Steffen Winther Sørensen
<stefws@xxxxxxxxx> wrote:
> Ceph users,
>
> Got an old Hammer CephFS installed on old Debian Wheezy (7.11) boxes (I know
> :)
>
> root@node4:~# dpkg -l  | grep -i ceph
> ii  ceph                             0.94.9-1~bpo70+1                 amd64
> distributed storage and file system
> ii  ceph-common                      0.94.9-1~bpo70+1                 amd64
> common utilities to mount and interact with a ceph storage cluster
> ii  ceph-deploy                      1.5.35                           all
> Ceph-deploy is an easy to use configuration tool
> ii  ceph-fs-common                   0.94.9-1~bpo70+1                 amd64
> common utilities to mount and interact with a ceph file system
> ii  ceph-fuse                        0.94.9-1~bpo70+1                 amd64
> FUSE-based client for the Ceph distributed file system
> ii  ceph-mds                         0.94.9-1~bpo70+1                 amd64
> metadata server for the ceph distributed file system
> ii  libcephfs1                       0.94.9-1~bpo70+1                 amd64
> Ceph distributed file system client library
> ii  libcurl3-gnutls:amd64            7.29.0-1~bpo70+1.ceph            amd64
> easy-to-use client-side URL transfer library (GnuTLS flavour)
> ii  libleveldb1:amd64                1.12.0-1~bpo70+1.ceph            amd64
> fast key-value storage library
> ii  python-ceph                      0.94.9-1~bpo70+1                 amd64
> Meta-package for python libraries for the Ceph libraries
> ii  python-cephfs                    0.94.9-1~bpo70+1                 amd64
> Python libraries for the Ceph libcephfs library
> ii  python-rados                     0.94.9-1~bpo70+1                 amd64
> Python libraries for the Ceph librados library
> ii  python-rbd                       0.94.9-1~bpo70+1                 amd64
> Python libraries for the Ceph librbd library
>
> Currently I'm failing to patch it to 0.94.10 with apt-get; I get:
>
> Get:13 http://ceph.com wheezy Release
> Err http://ceph.com wheezy Release
>
> W: A error occurred during the signature verification. The repository is not
> updated and the previous index files will be used. GPG error:
> http://ceph.com wheezy Release: The following signatures were invalid:
> NODATA 1 NODATA 2
>
> W: Failed to fetch http://ceph.com/debian-hammer/dists/wheezy/Release
>
>
> root@node4:~# apt-key list
> /etc/apt/trusted.gpg
> --------------------
> ...
> pub   4096R/17ED316D 2012-05-20
> uid                  Ceph Release Key <sage@xxxxxxxxxxxx>
>
> pub   4096R/F2AE6AB9 2013-04-29
> uid                  Joe Healy <joehealy@xxxxxxxxx>
> sub   4096R/91EE136C 2013-04-29
>
> pub   1024D/03C3951A 2011-02-08
> uid                  Ceph automated package build (Ceph automated package
> build) <sage@xxxxxxxxxxxx>
> sub   4096g/2E457B51 2011-02-08
>
> pub   4096R/460F3994 2015-09-15
> uid                  Ceph.com (release key) <security@xxxxxxxx>
> ...
>
> How to fix this?
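
For the repository error: the ceph.com apt mirrors were superseded by
download.ceph.com (signed with the 460F3994 release key you already
have) after the September 2015 infrastructure incident, and pointing apt
directly at the new mirror usually clears this kind of NODATA signature
error.  A sketch of the usual steps, assuming the repo entry lives in
/etc/apt/sources.list.d/ceph.list (adjust to wherever yours actually is,
and note that whether 0.94.10 was ever built for wheezy at all is a
separate question):

  # plain http, since wheezy's apt has no https transport by default
  echo "deb http://download.ceph.com/debian-hammer/ wheezy main" \
      > /etc/apt/sources.list.d/ceph.list
  # re-import the current Ceph release key
  wget -qO- http://download.ceph.com/keys/release.asc | apt-key add -
  apt-get update
  apt-get install --only-upgrade ceph ceph-common ceph-mds ceph-fuse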
>
>
>
> Also, we use a FUSE-mounted CephFS to store RBD backup dumps every weekend,
> only it seems to leak space after each backup run:
>
> root@node4:~# df -h /var/lib/ceph/backup/
> Filesystem      Size  Used Avail Use% Mounted on
> ceph-fuse       4.8T  3.1T  1.8T  64% /var/lib/ceph/backup
>
> root@node4:~# rados df
> pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
> cephfs_data        639810067       156482            0            0            0      3739243    461786202      1802652   3850537128
> cephfs_metadata        43222           34            0            0            0          348       165602        70546       168253
> vmimages           978905302       239804            0            0            0    537951546  35216013843    987828425  63368630346
>   total used      3244203364       396320
>   total avail     1862738112
>   total space     5106941476
>
>
> root@node4:~# crontab -l
> # track Ceph Pool/FS usage, assuming leaked space in cephFS data pool
> 0 0 * * * (echo `date`; /usr/bin/rados df) >> /var/tmp/ceph_pool_usage.log
>
> # rados df output for the last week (the backup dump shows up in the last two entries):
> root@node4:~# grep cephfs_data /var/tmp/ceph_pool_usage.log
> cephfs_data        606742032       148395            0            0            0      3067956    378470922      1786217   3800881920
> cephfs_data        606742032       148395            0            0            0      3068249    378507338      1786217   3800881920
> cephfs_data        606742032       148395            0            0            0      3068249    378507338      1786217   3800881920
> cephfs_data        606742032       148395            0            0            0      3068249    378507338      1786217   3800881920
> cephfs_data        606742032       148395            0            0            0      3068249    378507338      1786217   3800881920
> cephfs_data        630184822       154126            0            0            0      3068257    378507346      1797061   3834535366
> cephfs_data        639810067       156482            0            0            0      3068263    378507352      1802652   3850537128
>
>
> I don't understand why the cephfs_data pool keeps growing and is so large
> (600+ GB) when the FS only holds about 100 GB:
>
> root@node4:~# du -sh /var/lib/ceph/backup
> 107G    /var/lib/ceph/backup
>
> Is it a bug in Hammer CephFS, or do we need to run some sort of fstrim or the
> like to reclaim some pool space?
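
There is no fstrim equivalent for CephFS: deleted files are purged from
the data pool asynchronously by the MDS, so the question is whether that
purging is happening at all.  As a rough check (treat this as a sketch:
`ceph df` just reports per-pool usage in friendlier units, and the stray
counters for deleted-but-not-yet-purged files may or may not be exposed
by a Hammer MDS; <id> is a placeholder for your MDS name):

  ceph df
  # on the MDS host:
  ceph daemon mds.<id> perf dump | grep -i stray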

By CephFS standards, Hammer is really old and had many bugs (including
around file deletion).  You may find that in some situations unmounting
and then remounting the client allows the purging of deleted files to
proceed (there was at least one bug where clients simply weren't
releasing files properly after unlinking).
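
As a rough sketch of that, assuming the mount point from your df output
and that nothing is writing to it at the time (the -m option is only
needed if the monitors aren't listed in ceph.conf):

  fusermount -u /var/lib/ceph/backup
  ceph-fuse /var/lib/ceph/backup    # or: ceph-fuse -m <mon-host>:6789 /var/lib/ceph/backup
  # then watch whether the data pool actually starts shrinking
  rados df | grep cephfs_data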

If you're using CephFS, then I'd strongly recommend an upgrade to Jewel.

John

>
> TIA
>
> /Steffen
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



