Re: Tuning for cephfs backup client?

Hi,

If the single backup client is iterating through the entire fs, its
local dentry cache will probably be thrashing, rendering it quite
useless. And that dentry cache will keep the client pinned at the mds
caps-per-client limit, so the mds will be busy asking it to release
caps (to invalidate cached dentries).
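
To watch this on the client, you can check how many caps it currently
holds via debugfs (assuming debugfs is mounted at /sys/kernel/debug;
the exact file format varies across kernel versions):

# show the caps summary line for each kernel cephfs mount (as root)
grep total /sys/kernel/debug/ceph/*/caps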

Since you know this up front, you might want to just add a cronjob on
the backup client like

*/2 * * * * echo 2 > /proc/sys/vm/drop_caches

Writing 2 to drop_caches frees reclaimable slab objects (dentries and
inodes), which lets the client hand its caps back. This will keep
things simple for the mds.

(Long ago when caps recall was slightly buggy, we used to run this [1]
cron on *all* kernel cephfs clients.)

Cheers, Dan

[1]
#!/bin/bash

# Drop the kernel caches if this cephfs client holds too many caps.
# Requires root and debugfs mounted at /sys/kernel/debug.

# random sleep to avoid a thundering herd across clients
sleep $(( RANDOM % 30 + 1 ))s

# sum the "total" caps counts across all kernel cephfs mounts
if ls /sys/kernel/debug/ceph/*/caps 1> /dev/null 2>&1; then
  CAPS=$(cat /sys/kernel/debug/ceph/*/caps | grep total | awk '{sum += $2} END {print sum}')
else
  CAPS=0
fi

# writing 2 to drop_caches frees dentries/inodes, releasing the caps
if [ "${CAPS}" -gt 10000 ]; then
    logger -t ceph-drop-caps "Dropping ${CAPS} caps..."
    echo 2 > /proc/sys/vm/drop_caches
    logger -t ceph-drop-caps "Done"
fi
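
If you want to run the same thing, a cron entry along these lines
would do it (the script path and schedule here are just examples):

# /etc/cron.d/ceph-drop-caps -- run every 5 minutes as root
*/5 * * * * root /usr/local/sbin/ceph-drop-caps.sh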



On Thu, Jun 23, 2022 at 10:41 AM Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
>
> we are using cephfs with currently about 200 million files and a single
> host running nightly backups. This setup works fine, except for the
> cephfs caps management. Since the single host has to examine a lot of
> files, it will soon run into the mds caps-per-client limit, and
> processing will slow down due to extra caps request/release round trips
> to the mds. This problem will probably affect all cephfs users running
> a similar setup.
>
>
> Are there any tuning knobs on the client side we can use to optimize
> this kind of workload? We have already raised the mds caps limit and
> memory limit, but these are global settings for all clients. We only
> need to optimize the single backup client. I'm thinking about:
>
> - earlier release of unused caps
>
> - limiting caps on client in addition to mds
>
> - shorter metadata caching (should also result in earlier release)
>
> - anything else that will result in a better metadata throughput
>
>
> The amount of data backed up nightly is manageable (< 10 TB / night),
> so the backup is currently only limited by metadata checks. Given that
> data volumes keep growing everywhere, a backup solution like this might
> run into problems in the long run...
>
>
> Best regards,
>
> Burkhard Linke
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


