CephFS 12.2.0 -> 12.2.1 change in inode caching behaviour

Hi All

I upgraded my 12.2.0 cluster to 12.2.1 a month or two back. Since then I've noticed that the number of inodes held in cache is only about 1/5th of my inode_max. This is a surprise: with 12.2.0, and with Jewel before that, the cache would typically fill to the max within 24 hours of starting an MDS. I have ~8 million entries in my file system, and most of it is fairly hot data. I'm also seeing quite frequent "failing to respond to cache pressure" messages, even though I have just two kernel clients accessing the filesystem.

Are there some new defaults I need to change, perhaps? Or is this potentially a bug?
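
For reference, this is how I'm querying the cache settings on the live daemon over the admin socket. mds.<name> is a placeholder for the daemon id, and mds_cache_memory_limit is the memory-based limit I've seen mentioned for Luminous, so whether it governs 12.2.1 is my assumption rather than something I've confirmed in the release notes:

    # Ask the running MDS what it believes its cache limits are.
    # mds.<name> is a placeholder -- substitute your daemon id.
    ceph daemon mds.<name> config get mds_cache_size
    # Memory-based limit (assumption: this is what applies in 12.2.1):
    ceph daemon mds.<name> config get mds_cache_memory_limit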

Output of perf dump mds:

"mds": {
        "request": 184132091,
        "reply": 184132064,
        "reply_latency": {
            "avgcount": 184132064,
            "sum": 125364.905594355,
            "avgtime": 0.000680842
        },
        "forward": 0,
        "dir_fetch": 9846671,
        "dir_commit": 562495,
        "dir_split": 0,
        "dir_merge": 0,
        "inode_max": 2500000,
        "inodes": 444642,
        "inodes_top": 185845,
        "inodes_bottom": 127878,
        "inodes_pin_tail": 130919,
        "inodes_pinned": 179149,
        "inodes_expired": 135604208,
        "inodes_with_caps": 165900,
        "caps": 165948,
        "subtrees": 2,
        "traverse": 187280168,
        "traverse_hit": 185739606,
        "traverse_forward": 0,
        "traverse_discover": 0,
        "traverse_dir_fetch": 118150,
        "traverse_remote_ino": 8,
        "traverse_lock": 60256,
        "load_cent": 18413221445,
        "q": 0,
        "exported": 0,
        "exported_inodes": 0,
        "imported": 0,
        "imported_inodes": 0
    }
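
(For anyone wanting to reproduce: the counters above come straight from the admin socket, and this is how I pulled them. The jq filter is just for readability and assumes jq is installed; mds.<name> is again a placeholder:)

    ceph daemon mds.<name> perf dump mds | jq '.mds | {inode_max, inodes}'
    # Currently reports 2500000 and 444642, i.e. the cache is sitting
    # at roughly 1/5 of inode_max.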


A few extra details:

Running two MDS servers, one active and one standby.
Both have mds_cache_size = 2500000.
The MDS servers run CentOS 7.3.
Kernel clients are CentOS 7.3 (kernel 3.10.0-514.2.2.el7.x86_64).
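
And in case the answer does turn out to be a new memory-based default rather than the inode count: I'm guessing the override would look something like the below. The 4 GiB value is purely illustrative and I haven't tested this:

    # In ceph.conf on the MDS hosts:
    [mds]
        mds_cache_memory_limit = 4294967296  # 4 GiB, illustrative value

    # Or injected into the live daemon via the admin socket:
    ceph daemon mds.<name> config set mds_cache_memory_limit 4294967296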

Thanks,
David
