Ceph file system is not freeing space

I am trying to figure out why my Ceph file system is not freeing
space. Using Ceph 9.1.0, I created a file system with snapshots
enabled, filled it up over several days while taking hourly snapshots,
and then deleted all files and all snapshots, but Ceph is not
returning the space. I let the cluster sit for two days in case the
cleanup runs in the background, and the space still has not been
freed. Rebooting the cluster and the clients did not return the
space either.
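
For reference, the snapshots were taken and removed through the hidden
.snap directory, which as far as I know is the only snapshot interface
CephFS exposes (the snapshot name below is just an example):

# mkdir /cephfs/.snap/hourly-20151108
# rmdir /cephfs/.snap/hourly-20151108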

The file system was created with the command:
# ceph fs new cephfs cephfs_metadata cephfs_data
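
Both pools had been created beforehand in the usual way; the pg counts
shown here are placeholders, not the values I actually used:

# ceph osd pool create cephfs_data <pg_num>
# ceph osd pool create cephfs_metadata <pg_num>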

Info on the Ceph file system:

# getfattr -d -m ceph.dir.* /cephfs/
getfattr: Removing leading '/' from absolute path names
# file: cephfs/
ceph.dir.entries="0"
ceph.dir.files="0"
ceph.dir.rbytes="0"
ceph.dir.rctime="1447033469.0920991041"
ceph.dir.rentries="4"
ceph.dir.rfiles="1"
ceph.dir.rsubdirs="3"
ceph.dir.subdirs="0"
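
Note that ceph.dir.rentries is still 4 and ceph.dir.rfiles is still 1
even though the tree looks empty, so I assume the MDS is still holding
the unlinked inodes in its stray directories. Is there a way to confirm
that? I would guess something like the following against the MDS admin
socket, though the exact counter names are a guess on my part:

# ceph daemon mds.<name> perf dump | grep -i stray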

# ls -l /cephfs/
total 0

# ls -l /cephfs/.snap
total 0

# grep ceph /proc/mounts
ceph-fuse /cephfs fuse.ceph-fuse rw,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

# df /cephfs/
Filesystem     1K-blocks      Used Available Use% Mounted on
ceph-fuse      276090880 194162688  81928192  71% /cephfs

# df -i /cephfs/
Filesystem      Inodes IUsed IFree IUse% Mounted on
ceph-fuse      2501946     -     -     - /cephfs

# ceph df detail
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED     OBJECTS
    263G     80009M         181G         68.78       2443k
POOLS:
    NAME                ID     CATEGORY     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ     WRITE
    rbd                 0      -                 0         0        27826M           0         0        0          0
    cephfs_data         1      -            76846M     28.50        27826M     2501672     2443k     345k     32797k
    cephfs_metadata     2      -            34868k      0.01        27826M         259       259      480k     23327k
    kSAFEbackup         3      -              108M      0.04        27826M          15        15        0         49
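
The object count in cephfs_data (~2.5M) roughly matches the inode count
reported by df -i above, so it looks to me like the backing RADOS
objects were never purged. As a spot check that the objects really are
still in the data pool, one can list a few of them directly:

# rados -p cephfs_data ls | head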

The Ceph cluster and client systems are running Ubuntu Trusty with a
4.3.0 kernel and Ceph 9.1.0:
# ceph -v
ceph version 9.1.0 (3be81ae6cf17fcf689cd6f187c4615249fea4f61)
# uname -a
Linux ede-c2-adm01 4.3.0-040300-generic #201511020949 SMP Mon Nov 2 14:50:44 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Any ideas on why the space is not being freed?

Thanks,
Eric