Tuning for cephfs backup client?

Hi,


we are using cephfs with currently about 200 million files and a single host running nightly backups. This setup works fine, except for the cephfs caps management. Since the single host has to examine a lot of files, it soon runs into the mds caps-per-client limit, and processing slows down due to the extra caps request/release round trips to the mds. This problem will probably affect all cephfs users running a similar setup.
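
To put numbers on this, we watch the per-client cap counts the mds reports. A minimal sketch, assuming the ceph admin CLI is available and "mds.0" stands in for the name of the active mds (field names may differ slightly between releases):

    #!/usr/bin/env python3
    # List per-client cap counts as reported by the active MDS.
    # "mds.0" is a placeholder; "session ls" prints a JSON array of sessions.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "tell", "mds.0", "session", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout

    for session in json.loads(out):
        meta = session.get("client_metadata", {})
        print(session.get("id"), session.get("num_caps"), meta.get("hostname"))

The backup host is the one that climbs towards the configured caps limit shortly after the nightly run starts.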


Are there any tuning knobs on the client side we can use to optimize this kind of workload? We have already raised the mds caps limit and memory limit (a rough sketch of what we changed follows after the list below), but these are global settings for all clients. We only need to optimize the single backup client. I'm thinking about:

- earlier release of unused caps

- limiting caps on client in addition to mds

- shorter metadata caching (should also result in earlier release)

- anything else that will result in a better metadata throughput
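
For reference, a rough sketch of what we have changed so far, and of how I imagine a per-client override could look (values are placeholders, "client.backup" is a hypothetical config section for the backup host's client id, and which client options actually influence caps behaviour is exactly my question):

    #!/usr/bin/env python3
    # Sketch: mds-wide settings we already raised, plus a guessed
    # client-scoped override. Values and the "client.backup" section
    # are placeholders, not a recommendation.
    import subprocess

    settings = [
        # mds side, global for all clients (already raised on our cluster):
        ("mds", "mds_max_caps_per_client", "2000000"),
        ("mds", "mds_cache_memory_limit", str(16 * 1024**3)),
        # client side, ideally scoped to the backup client only --
        # does something like this help with earlier caps release?
        ("client.backup", "client_cache_size", "8192"),
    ]

    for section, option, value in settings:
        subprocess.run(["ceph", "config", "set", section, option, value],
                       check=True)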


The amount of data backed up nightly is manageable (< 10 TB / night), so the backup is currently limited only by the metadata checks. Given the trend of growing data in all fields, the backup solution might run into problems in the long run...


Best regards,

Burkhard Linke




