Re: 1 pg inconsistent and does not recover

On 6/28/23 10:45, Frank Schilder wrote:
Hi Stefan,

we run Octopus. The deep-scrub request is cancelled (immediately) if the PG/OSD is already part of another (deep-)scrub or if some peering happens. As far as I understand, the commands osd/pg deep-scrub and pg repair do not create persistent reservations. When you issue this command, when does the PG actually start scrubbing? As soon as another one finishes, or when its natural turn comes up? Do you monitor the scrub order to confirm it was the manual command that initiated the scrub?

We request a deep-scrub ... a few seconds later it starts deep-scrubbing. We do not verify in this process whether the PG really started, but it does. See the example from a PG below:

Jun 27 22:59:50 mon1 pg_scrub[2478540]: [27-06-2023 22:59:34] Scrub PG 5.48a (last deep-scrub: 2023-06-16T22:54:58.684038+0200)


^^ the deep_scrub daemon requests a deep-scrub, based on the latest deep-scrub timestamp. After a couple of minutes it is deep-scrubbed. See below the deep-scrub timestamp (info from a PG query of 5.48a):

"last_deep_scrub_stamp": "2023-06-27T23:06:01.823894+0200"

We have been using this on Octopus (actually since Luminous, but in a different way). Now we are on Pacific.

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
