Re: [ceph-users] Re: RFC: (deep-)scrub manager module

On 6/21/22 09:59, Frank Schilder wrote:

By the way, Stefan, when you write "daemon", is this a daemon you implemented yourself on your own installation, or is it a daemon provided together with ceph?

It's a daemon written by a colleague of mine (in the Luminous era). You can find the code here: https://github.com/sndrsmnk/ceph_scrub_daemon. It is tested on Octopus and Pacific, and it can be made to work for Nautilus (and older) with some small modifications. It works for us.

Recently it acquired a "-D" flag: with it, deep scrubs are scheduled; without it, shallow scrubs. You need to run two daemons if you want to do both at the same time. The intention was to spread the PG scrubs out over time, but since information on how long (deep-)scrubbing a PG takes is lacking, that logic does not work when more than one (deep-)scrub is active. So basically you just configure your scrub window(s), configure a "scrub_max_concurrent", and see how long it takes to have all your data scrubbed. After that you can tune these settings to get it right.
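
For reference, the core idea boils down to something like the sketch below. This is not the actual daemon code: the "SCRUB_MAX_CONCURRENT" knob is just named after the setting mentioned above, the window handling is left out, and the JSON layout of "ceph pg dump pgs" is assumed to be as on Octopus/Pacific.

#!/usr/bin/env python3
# Rough sketch of the scheduling approach described above: keep at most
# SCRUB_MAX_CONCURRENT (deep-)scrubs running, starting with the PGs that
# were scrubbed longest ago. Run it in a loop inside your scrub window.
import json
import subprocess

DEEP = True                 # roughly what the "-D" flag selects
SCRUB_MAX_CONCURRENT = 2    # illustrative value

def pg_stats():
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"])
    data = json.loads(out)
    # Octopus/Pacific wrap the list in "pg_stats"; older releases return a list.
    return data["pg_stats"] if isinstance(data, dict) else data

def scrub_round():
    pgs = pg_stats()
    active = [p for p in pgs if "scrubbing" in p["state"]]
    slots = SCRUB_MAX_CONCURRENT - len(active)
    if slots <= 0:
        return
    stamp = "last_deep_scrub_stamp" if DEEP else "last_scrub_stamp"
    # Oldest-scrubbed PGs first.
    candidates = sorted(
        (p for p in pgs if "scrubbing" not in p["state"]),
        key=lambda p: p[stamp])
    cmd = "deep-scrub" if DEEP else "scrub"
    for pg in candidates[:slots]:
        subprocess.check_call(["ceph", "pg", cmd, pg["pgid"]])

if __name__ == "__main__":
    scrub_round()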

Most of the things I / Anthony mentioned are not implemented, so there is lots of room for improvement. Besides that, there are things like having it store state in a RADOS object instead of a local file, only running on the host with the active manager (and having it fail over), using the Ceph Python API, etc. But then it would be better off as a manager module ...
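
As a manager module it could look something like the skeleton below. Again only a sketch under assumptions: the pg_dump layout, the option handling and self.mon_command are used as I remember them on Octopus/Pacific, so double-check against your release before relying on it.

import time
from mgr_module import MgrModule

class ScrubScheduler(MgrModule):
    MODULE_OPTIONS = [
        {'name': 'scrub_max_concurrent', 'type': 'int', 'default': 2},
    ]

    def serve(self):
        # Only the active mgr runs serve(), so fail-over comes for free.
        while True:
            self.scrub_round()
            time.sleep(60)

    def scrub_round(self):
        pgs = self.get('pg_dump')['pg_stats']   # no shelling out needed
        limit = self.get_module_option('scrub_max_concurrent')
        active = [p for p in pgs if 'scrubbing' in p['state']]
        slots = limit - len(active)
        if slots <= 0:
            return
        candidates = sorted(
            (p for p in pgs if 'scrubbing' not in p['state']),
            key=lambda p: p['last_deep_scrub_stamp'])
        for pg in candidates[:slots]:
            # Assumed mon command name/args; verify on your release.
            self.mon_command({'prefix': 'pg deep-scrub', 'pgid': pg['pgid']})

State could then live in a RADOS object or in the mgr's key/value store instead of a local file, which is the point of moving it into the manager in the first place.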

Gr. Stefan
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


