cephfs: rsync backup creates cache pressure on clients, filling caps


 



Hi,

I'm currently backing up CephFS through a dedicated client that mounts the whole filesystem at its root.
The other clients mount only parts of the filesystem (all of them are kernel cephfs clients).
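
For clarity, the mount layout looks roughly like this (the monitor address, client names and mount points are placeholders, not the real values):

# backup client: mounts the whole filesystem at its root
mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
    -o name=backup,secretfile=/etc/ceph/client.backup.secret

# other clients: each mounts only a subdirectory
mount -t ceph 192.168.0.10:6789:/some/subdir /mnt/subdir \
    -o name=app,secretfile=/etc/ceph/client.app.secret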


I have around 22 million inodes.

Before the backup, around 5M caps are held by the clients:

# ceph daemonperf mds.x.x

---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
req  rlat fwd  inos caps exi  imi |stry recy recd|subm evts segs|ino  dn  |hcr  hcs  hsr |sess|actv rd   wr   rdwr|purg|
118    0    0   22M 5.3M   0    0 |  6    0    0 |  2  120k 130 | 22M  22M|118    0    0 |167 |  0    2    0    0 |  0 



When the backup is running and reading all the files, the cap count climbs up to the number of inodes (and even a little beyond):

# ceph daemonperf mds.x.x
---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
req  rlat fwd  inos caps exi  imi |stry recy recd|subm evts segs|ino  dn  |hcr  hcs  hsr |sess|actv rd   wr   rdwr|purg|
155    0    0   20M  22M   0    0 |  6    0    0 |  2  120k 129 | 20M  20M|155    0    0 |167 |  0    0    0    0 |  0 

Then the MDS tries to recall caps from the other clients, and I'm getting warnings like:
2019-01-04 01:13:11.173768 cluster [WRN] Health check failed: 1 clients failing to respond to cache pressure (MDS_CLIENT_RECALL)
2019-01-04 02:00:00.000073 cluster [WRN] overall HEALTH_WARN 1 clients failing to respond to cache pressure
2019-01-04 03:00:00.000069 cluster [WRN] overall HEALTH_WARN 1 clients failing to respond to cache pressure
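
For reference, the per-client cap counts can be checked on the active MDS with something like this (mds.x is a placeholder for the daemon name):

# which client the health warning is about
ceph health detail

# per-session cap counts, as seen by the MDS
ceph daemon mds.x session ls | grep -E '"id"|"num_caps"'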



Doing a simple
echo 2 | tee /proc/sys/vm/drop_caches
on the backup server frees the caps again (dropping the kernel's dentry/inode caches lets the client release the corresponding caps):

# ceph daemonperf x
---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
req  rlat fwd  inos caps exi  imi |stry recy recd|subm evts segs|ino  dn  |hcr  hcs  hsr |sess|actv rd   wr   rdwr|purg|
116    0    0   22M 4.8M   0    0 |  4    0    0 |  1  117k 131 | 22M  22M|116    1    0 |167 |  0    2    0    0 |  0 
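
As a stopgap, the same drop can be repeated while the rsync is still running; a rough sketch (the interval is arbitrary, and it also throws away cache the backup might have reused):

# on the backup client, as root: drop dentry/inode caches every 10 minutes
# for as long as an rsync process is alive
while pgrep -x rsync > /dev/null; do
    echo 2 > /proc/sys/vm/drop_caches
    sleep 600
done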




Some questions here:

ceph side
---------
Is it possible to set up some kind of priority between clients, to force cap recall from a specific client?
Is it possible to limit the number of caps per client?
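
To illustrate the kind of knob I mean (mds_cache_memory_limit does exist; whether a per-client limit such as mds_max_caps_per_client is available depends on the Ceph release, so treat that name as a guess):

# shrink the MDS cache target so it starts recalling caps earlier
# (8 GiB here, value arbitrary; on older releases this goes in ceph.conf instead)
ceph config set mds mds_cache_memory_limit 8589934592

# per-client cap limit, if such an option exists in this release
ceph config set mds mds_max_caps_per_client 1048576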


client side
-----------
I have tried vm.vfs_cache_pressure=40000 to reclaim inode entries faster, but the server has 128GB of RAM, so there is little memory pressure to trigger reclaim.
Is it possible to limit the number of inodes kept in cache on Linux?
Is it possible to tune something on the ceph mount point?
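
For reference, this is what I tried on the backup client (the value is just an experiment; the file name below is arbitrary):

# bias kernel reclaim towards dentries/inodes rather than page cache
sysctl -w vm.vfs_cache_pressure=40000

# persist across reboots
echo 'vm.vfs_cache_pressure = 40000' > /etc/sysctl.d/99-cephfs-backup.conf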


Regards,

Alexandre
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


