Re: cephfs is growing up rapidly

Hi,

I use rsync to back up files. I'm not sure whether it updates files by
removing and retransferring them or by overwriting them in place. The rsync
options I use are '-artuz', and I'm trying to figure out how it behaves.
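
If it helps, here is roughly how I plan to compare the two modes (the paths
below are placeholders, not my real backup paths):

    # Default behaviour: rsync writes a temporary file and renames it over
    # the old one, so the old inode is unlinked (a delete from CephFS's
    # point of view).
    rsync -artuz --itemize-changes /path/to/src/ /mnt/cephfs/backup/

    # With --inplace, rsync overwrites the existing file instead of
    # replacing it.
    rsync -artuz --inplace --itemize-changes /path/to/src/ /mnt/cephfs/backup/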

The MDS logs show no errors, so I think it's not the same bug (or it's not
a bug at all).
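
For reference, I checked the logs roughly like this (assuming the default
log location on the MDS host; the pattern is just a rough filter):

    grep -iE 'error|fail|purge' /var/log/ceph/ceph-mds.*.log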

I also checked the MDS performance counters:

    "mds_cache": {
        "num_strays": 1,
        "num_strays_delayed": 0,
        "num_strays_enqueuing": 0,
        "strays_created": 604527,
        "strays_enqueued": 604528,
        "strays_reintegrated": 0,
        "strays_migrated": 0,
        "num_recovering_processing": 0,
        "num_recovering_enqueued": 0,
        "num_recovering_prioritized": 0,
        "recovery_started": 7,
        "recovery_completed": 7,
        "ireq_enqueue_scrub": 0,
        "ireq_exportdir": 0,
        "ireq_flush": 0,
        "ireq_fragmentdir": 0,
        "ireq_fragstats": 0,
        "ireq_inodestats": 0
    }

    "purge_queue": {
        "pq_executing_ops": 0,
        "pq_executing": 0,
        "pq_executed": 604533
    }
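
(If it's useful, the fields above can be pulled out again with something
like the following, assuming jq is available on the MDS host; <id> is the
MDS name as in your command below.)

    ceph daemon mds.<id> perf dump | \
        jq '{mds_cache: (.mds_cache | {num_strays, strays_created, strays_enqueued}),
             purge_queue: .purge_queue}'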

John Spray <jspray@xxxxxxxxxx> wrote on Friday, September 14, 2018 at 5:19 PM:
On Fri, Sep 14, 2018 at 7:25 AM Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote:
>
> Hi,
>
> I have a ceph cluster of version 12.2.5 on centos7.
>
> I created 3 pools: 'rbd' for rbd storage, plus 'cephfs_data'
> and 'cephfs_meta' for cephfs. Cephfs is used for backups via
> rsync and for volumes mounted by docker.
>
> The backup files total about 3.5T, and docker uses less than
> 60G. 'cephfs df' showed no more than 3.6T at first, but the used
> size is now 3.9T. From what I can see, the used size (cephfs df)
> grows by 60-100G every day, while the source files stay at almost
> the same size as they were at the beginning.
>
> Does anybody encounter the same issue?

Presumably your workload is deleting the previous backup before
creating a new one?

The most similar sounding bug is http://tracker.ceph.com/issues/24533,
where the deletion path could get stuck.  You can check your MDS logs
for errors to see if it's the same.

You can also look at the MDS performance counters (`ceph daemon
mds.<id> perf dump`) and show us the "num_strays" fields and the
purge_queue section.

John





