Re: Turning SCRUB back on - any suggestions?

Will do, of course :)

Thanks Wido for the quick help, as always!

On 13 March 2015 at 12:04, Wido den Hollander <wido@xxxxxxxx> wrote:


On 13-03-15 12:00, Andrija Panic wrote:
> Nice - so I just realized I need to manually scrub 1216 placement groups :)
>

By "manually" I meant using a script.

Loop through 'ceph pg dump', get each PG id, issue a scrub, sleep for X
seconds, and then issue the next scrub.
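
Something along these lines should work - an untested sketch, where the
sleep interval and the use of 'pgs_brief' are only suggestions, so adjust
them to your cluster:

#!/bin/bash
# Untested sketch: scrub every PG one by one, pausing between scrubs.
# 'pgs_brief' keeps the dump output small; the first column is the PG id.
SLEEP_SECONDS=60

for pgid in $(ceph pg dump pgs_brief 2>/dev/null | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
    echo "Scrubbing ${pgid}"
    ceph pg scrub "${pgid}"
    sleep "${SLEEP_SECONDS}"
done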

Wido

>
> On 13 March 2015 at 10:16, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:
>
>     Thanks Wido - I will do that.
>
>     On 13 March 2015 at 09:46, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>
>
>         On 13-03-15 09:42, Andrija Panic wrote:
>         > Hi all,
>         >
>         > I set the nodeep-scrub and noscrub flags while the cluster was on
>         > small/slow hardware, so scrubbing has been off for a while now.
>         >
>         > Now that we have upgraded the hardware/networking/SSDs, I would like to
>         > re-enable scrubbing by unsetting these flags.
>         >
>         > Since I now have 3 servers with 12 OSDs each (SSD-based journals), I
>         > was wondering what the best way to unset the flags is - if I just
>         > unset them, should I expect scrubbing to start all of a sudden on all
>         > disks, or is there a way to let it work through the drives one by
>         > one?
>         >
>
>         So, I *think* that unsetting these flags will trigger a big scrub,
>         since all PGs have a very old last_scrub_stamp and
>         last_deep_scrub_stamp.
>
>         You can verify this with:
>
>         $ ceph pg <pgid> query
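>
>         For example, the scrub timestamps can be pulled out of the query
>         output with something like this (1.2a is just a placeholder PG id):
>
>         $ ceph pg 1.2a query | grep -E '"last_(deep_)?scrub_stamp"'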
>
>         A solution would be to first scrub each PG manually, spread out
>         over time.
>
>         $ ceph pg scrub <pgid>
>
>         That way you set the timestamps and slowly scrub each PG.
>
>         When that's done, unset the flags.
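>
>         For reference, the flags would then be cleared with:
>
>         $ ceph osd unset noscrub
>         $ ceph osd unset nodeep-scrub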
>
>         Wido
>
>         > In other words - should I expect a BIG performance impact or... not?
>         >
>         > Any experience is much appreciated...
>         >
>         > Thanks,
>         >
>         > --
>         >
>         > Andrija Panić
>         >
>         >
>
>
>
>
>     --
>
>     Andrija Panić
>
>
>
>
> --
>
> Andrija Panić



--

Andrija Panić
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
