On Fri, Sep 14, 2018 at 7:25 AM Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote:
>
> Hi,
>
> I have a ceph cluster of version 12.2.5 on centos7.
>
> I created 3 pools: 'rbd' for rbd storage, as well as 'cephfs_data'
> and 'cephfs_meta' for cephfs. Cephfs is used for backups via rsync
> and for volumes mounted by docker.
>
> The backup files total 3.5T, and docker uses less than 60G on top of
> that. 'cephfs df' showed no more than 3.6T at first, but the used
> size is now 3.9T, and from what I can see it grows by 60-100G every
> day, even though the original files stay almost the same size as they
> were at the beginning.
>
> Has anybody encountered the same issue?

Presumably your workload is deleting the previous backup before
creating a new one?

The most similar-sounding bug is http://tracker.ceph.com/issues/24533,
where the deletion path could get stuck. You can check your MDS logs
for errors to see if it's the same issue.

You can also look at the MDS performance counters
(`ceph daemon mds.<id> perf dump`) and show us the "num_strays" fields
and the purge_queue section.

John
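
If it helps, here is a minimal sketch for pulling just those counters
out of the perf dump. It assumes you run it on the node hosting the
MDS, that the daemon is called "mds.a" (substitute your own MDS name),
and that the stray counters sit under the "mds_cache" section as they
do on Luminous; adjust the section names if your output differs.

    import json
    import subprocess

    # Ask the MDS admin socket for its performance counters.
    # "mds.a" is a placeholder; use your actual MDS daemon name.
    raw = subprocess.check_output(
        ["ceph", "daemon", "mds.a", "perf", "dump"]).decode()
    counters = json.loads(raw)

    # Stray counters (assumed to live under "mds_cache" on 12.2.x).
    for key, value in counters.get("mds_cache", {}).items():
        if key.startswith("num_strays"):
            print(key, value)

    # Full purge_queue section, pretty-printed.
    print(json.dumps(counters.get("purge_queue", {}), indent=2))

If num_strays keeps climbing while the purge_queue counters barely
move, that would be consistent with the stuck-deletion behaviour in
the tracker issue above.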