On 06/06/2018 08:32 PM, Joe Comeau wrote:
> When I am upgrading from filestore to bluestore,
> or for any other short server maintenance
> (ie high I/O while rebuilding):
>
> ceph osd set noout
> ceph osd set noscrub
> ceph osd set nodeep-scrub
>
> when finished:
>
> ceph osd unset noscrub
> ceph osd unset nodeep-scrub
> ceph osd unset noout
>
> again, only while working on a server/cluster for a short time

Keep in mind that since Jewel, OSDs involved in recovery will not start a new (deep-)scrub, so there is no need to set these flags. OSDs that are not performing recovery will scrub as usual, which is fine.

Wido

> >>>> Alexandru Cucu <me@xxxxxxxxxxx> 6/6/2018 1:51 AM >>>
> Hi,
>
> The only way I know is pretty brutal: list all the PGs with a
> scrubbing process, get the primary OSD, and mark it down. The
> scrubbing process will stop.
> Make sure you set the noout, norebalance and norecovery flags so you
> don't add even more load to your cluster.
>
> On Tue, Jun 5, 2018 at 11:41 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>>
>> Is it possible to stop the currently running scrubs/deep-scrubs?
>>
>> http://tracker.ceph.com/issues/11202
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
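
For reference, the procedure Alexandru describes (protect the cluster with flags, find scrubbing PGs, mark each primary OSD down) can be sketched roughly as the fragment below. This is a hedged sketch, not a tested recipe: it assumes `jq` is installed, assumes the pre-Nautilus `ceph pg dump` JSON layout with a top-level `pg_stats` array (field names vary by release), and notes that the actual flag name is `norecover`, not `norecovery`. Run it only against a cluster you can afford to disturb.

```shell
#!/bin/sh
# Sketch of "stop running scrubs by kicking the primary OSD" (assumptions above).

# Protective flags from the thread, so the brief OSD restarts do not
# trigger data movement or extra recovery load.
ceph osd set noout
ceph osd set norebalance
ceph osd set norecover   # the flag is "norecover", not "norecovery"

# List PGs whose state contains "scrubbing" (covers deep-scrub too),
# together with their acting primary OSD, then mark each primary down.
# The OSD process is not stopped; it re-asserts itself and rejoins,
# which aborts the in-flight scrub.
ceph pg dump --format json 2>/dev/null \
  | jq -r '.pg_stats[]
           | select(.state | contains("scrubbing"))
           | "\(.pgid) \(.acting_primary)"' \
  | while read -r pgid osd; do
      echo "PG ${pgid}: marking primary osd.${osd} down"
      ceph osd down "${osd}"
    done

# Clear the flags once maintenance is over.
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout
```

Deduplicating the OSD list before issuing `ceph osd down` would avoid kicking the same OSD twice when it is primary for several scrubbing PGs; per Wido's note, the `noscrub`/`nodeep-scrub` flags are unnecessary here on Jewel and later, since recovering OSDs will not start new scrubs anyway.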