On Tue, Jun 27, 2017 at 9:52 AM, Steffen Winther Sørensen <stefws@xxxxxxxxx> wrote:
Ceph users,
Got an old Hammer CephFS installed on old Debian Wheezy (7.11) boxes (I know :)
Currently I'm failing to upgrade it to 0.94.10 with apt-get; I get:
Get:13 http://ceph.com wheezy Release
Err http://ceph.com wheezy Release
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ceph.com wheezy Release: The following signatures were invalid: NODATA 1 NODATA 2
W: Failed to fetch http://ceph.com/debian-hammer/dists/wheezy/Release
root@node4:~# apt-key list
/etc/apt/trusted.gpg
--------------------
...
pub   4096R/17ED316D 2012-05-20
uid                  Ceph Release Key <sage@xxxxxxxxxxxx>

pub   4096R/F2AE6AB9 2013-04-29
uid                  Joe Healy <joehealy@xxxxxxxxx>
sub   4096R/91EE136C 2013-04-29

pub   1024D/03C3951A 2011-02-08
uid                  Ceph automated package build (Ceph automated package build) <sage@xxxxxxxxxxxx>
sub   4096g/2E457B51 2011-02-08

pub   4096R/460F3994 2015-09-15
uid                  Ceph.com (release key) <security@xxxxxxxx>
...
How to fix this?
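The only thing I've thought of trying so far -- purely on the assumption that the repo has moved/been re-signed under download.ceph.com and that apt never got a usable Release.gpg (the URLs and the sources.list path below are assumptions, not taken from these boxes) -- is re-importing the current release key and repointing the repo:

# Re-import the current Ceph release key (the 460F3994 / security@ key listed above)
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -

# Point the repo at download.ceph.com instead of ceph.com
# (/etc/apt/sources.list.d/ceph.list is an assumed location -- edit wherever the
#  ceph.com line actually lives; https may need the apt-transport-https package)
echo "deb https://download.ceph.com/debian-hammer/ wheezy main" > /etc/apt/sources.list.d/ceph.list

apt-get update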
Also, we use a CephFS fuse mount to store RBD backup dumps every weekend, only it seems to leak space after each backup run:
root@node4:~# df -h /var/lib/ceph/backup/
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse       4.8T  3.1T  1.8T  64% /var/lib/ceph/backup
root@node4:~# rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
cephfs_data        639810067       156482            0            0            0      3739243    461786202      1802652   3850537128
cephfs_metadata        43222           34            0            0            0          348       165602        70546       168253
vmimages           978905302       239804            0            0            0    537951546  35216013843    987828425  63368630346
  total used      3244203364       396320
  total avail     1862738112
  total space     5106941476
I don't understand why the cephfs_data pool seems to grow and stay so large (600+ GB) when the FS only holds a bit over 100 GB:
root@node4:~# du -sh /var/lib/ceph/backup
107G    /var/lib/ceph/backup
Is it a bug in Hammer CephFS, or do we need to run some sort of fstrim or the like to reclaim pool space?
By CephFS standards, hammer is really old and had many bugs (including around file deletion). You may find that in some situations unmounting and then remounting the client will allow purging of deleted files to proceed (there was at least one bug where clients just weren't properly releasing files after unlinking).
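For a ceph-fuse mount that would look roughly like the sketch below (the mount point is taken from your df output; the mon address and MDS id are placeholders, and the exact stray counter names vary between releases, so treat this as a sketch rather than a recipe):

# On the client: cycle the ceph-fuse mount
fusermount -u /var/lib/ceph/backup
ceph-fuse -m <mon-host>:6789 /var/lib/ceph/backup    # <mon-host> is a placeholder

# On the MDS host: dump perf counters over the admin socket and watch the
# stray (unlinked-but-not-yet-purged) inode counters drain; counter names
# differ across releases, hence the loose grep
ceph daemon mds.<id> perf dump | grep -i stray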
Thanks, unmounting it from all clients and remounting seemed to reclaim a lot of space :)
root@node4:~# rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
cephfs_data        352194020        86139            0            0            0      3739243    461786202      1872995   3850537128
cephfs_metadata        43325           34            0            0            0          348       165602        70596       168391
vmimages           978905302       239804            0            0            0    537951551  35216013863    987925111  63369894143
  total used      2668400656       325977
  total avail     2438540820
  total space     5106941476
root@node4:~# df -h /var/lib/ceph/backup/
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse       4.8T  2.5T  2.3T  53% /var/lib/ceph/backup
If you're using CephFS then I'd strongly recommend an upgrade to jewel.
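The jewel release notes cover the upgrade steps in detail; very roughly the documented order looks like the sketch below (package names, the sysvinit-style restart and the exact flags should be checked against the official hammer-to-jewel notes rather than taken from here):

# On each node, in this order: monitors first, then OSDs, then MDSs, then clients
apt-get update && apt-get install ceph ceph-mds   # after switching the repo line to debian-jewel

# Jewel daemons run as user 'ceph' instead of root, so either fix ownership...
chown -R ceph:ceph /var/lib/ceph
# ...or keep running as root by adding this to [global] in ceph.conf:
#   setuser match path = /var/lib/ceph/$type/$cluster-$id

# Restart the daemons in the same order (sysvinit script on wheezy)
service ceph restart

# Once every OSD in the cluster is running jewel:
ceph osd set sortbitwise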
Right, any good doc pointers on upgrading?
And would I be able to upgrade to jewel before upgrading debian from wheezy?
/Steffen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com