Re: Scrubs stalled on Pacific

Hello.

So there is no workaround...? I guess that's on me for upgrading to the
latest version instead of staying on a stable one. :)

Just as a warning for the future, if anyone is planning on upgrading a
cluster from Nautilus to Pacific (16.2.7), beware that your scrubs may stop
working.
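
In case it is useful to anyone checking their own cluster after such an
upgrade, something along these lines should show whether scrubs are falling
behind (just a sketch; the exact warnings and column layout can differ
between releases):

  # Any "pgs not (deep-)scrubbed in time" health warnings?
  ceph health detail | grep -i scrub

  # Per-PG scrub history: look at the last-scrub / last-deep-scrub
  # timestamp columns.
  ceph pg dump pgs | less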

Best regards,
Filipe

On Thu, Mar 10, 2022 at 11:41 AM Filipe Azevedo <cephusersml@xxxxxxxxxx>
wrote:

> Hello Ceph team.
>
> I've recently (3 weeks ago) upgraded a Ceph cluster from Nautilus to
> Pacific (16.2.7), and have encountered a strange issue.
>
> Scrubs, deep or regular, are scheduled and show up in the cluster status,
> but there is no disk IO and they never finish. At the moment all of my PGs
> have fallen behind on their scrubs, which worries me because the data could
> be inconsistent without my knowing.
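>
> Roughly how this shows up for me (a sketch; exact output will of course
> vary):
>
>   # PGs sit in a scrubbing/deep-scrubbing state indefinitely...
>   ceph pg dump pgs_brief | grep -i scrub
>
>   # ...while the OSD hosts show no corresponding disk IO.
>   iostat -x 5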
>
> This seems similar to:
> https://tracker.ceph.com/issues/54172
> but no solution was reported there, and in my case scrubbing has not
> started working again on its own.
>
> and
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/A45BWXLWC2PKLGA5G7GXKCZDNHEOL2LL/#RHHJXQCBV2GGTGD7GAX5PACEY6TFSWRQ
>
> I mention the second thread because I also see that message in my logs, but
> as far as I can tell the backport discussed there has not happened yet. I'm
> assuming this is not common, otherwise there would be more reports; is there
> something special about my cluster? Are there any workarounds to get scrubs
> working again?
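>
> In case it matters, the generic things I can think of trying (purely a
> sketch, I don't know whether any of them is safe or effective against this
> particular bug) would be:
>
>   # Request a scrub on one PG by hand
>   ceph pg deep-scrub <pgid>
>
>   # Toggle the cluster-wide scrub flags to nudge the scheduler
>   ceph osd set noscrub && ceph osd set nodeep-scrub
>   ceph osd unset noscrub && ceph osd unset nodeep-scrub
>
>   # Restart the primary OSD of an affected PG (non-cephadm deployment)
>   systemctl restart ceph-osd@<id>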
>
> Thank you and best regards,
> Filipe
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


