Hi,
you have given up too early. rsync is not a nice workload for CephFS; in
particular, most Linux kernel CephFS clients will end up caching every
inode/dentry they touch, and the MDS servers can crash once they hit
their memory limits. Since rsync scans all inodes/dentries in a tree,
it is the perfect application for gobbling up all the inode caps.
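If you want to watch the pressure build, something like the following on
the MDS host shows the current cache usage (the daemon name mds.a is a
placeholder for your own MDS id):

    # query the running MDS daemon's cache size and statistics
    ceph daemon mds.a cache status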
We run a cron job, flush_cache, every few (2-5) minutes:
#!/bin/bash
# Drop reclaimable slab objects (dentries and inodes) on this client.
echo 2 > /proc/sys/vm/drop_caches
on all machines that mount CephFS. There is no noticeable performance
drop on the client machines, but happily, this resolves the MDS congestion.
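For reference, a cron.d entry along these lines could drive the script;
the path /usr/local/sbin/flush_cache and the 3-minute interval are just
assumptions, adjust to taste:

    # /etc/cron.d/flush-cephfs-cache: periodically drop client dentry/inode caches
    */3 * * * * root /usr/local/sbin/flush_cache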
We also went the RBD route before this, but for large RBD images we much
prefer CephFS.
Regards,
Mike