Re: Reason for Recovery Work Queue not being a priority queue

I've got a branch that will actually move that into the main wq.
https://github.com/athanatos/ceph/tree/wip-recovery-wq

More generally, PGs don't start recovering (and therefore are not in
that queue) until they have reservations locally and on recovery
targets -- those reservations *are* prioritized by amount degraded,
etc.
-Sam
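
For illustration, here is a minimal sketch of how priority-ordered
reservation granting can work with a bounded number of recovery slots.
The names (ToyReserver, ReservationRequest, a priority derived from a
degraded-object count) are assumptions for the sketch, not Ceph's actual
reservation interface:

// Illustrative only: a toy reserver that grants a bounded number of
// recovery slots, handing them to the highest-priority (most degraded)
// waiting PGs first. Names and fields are assumptions for this sketch,
// not Ceph code.
#include <cstdint>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

struct ReservationRequest {
  uint64_t pg_id;
  uint64_t priority;              // e.g. derived from degraded-object count
  std::function<void()> on_grant; // callback to start recovery once reserved
};

struct ByPriority {
  bool operator()(const ReservationRequest& a,
                  const ReservationRequest& b) const {
    return a.priority < b.priority; // max-heap: highest priority on top
  }
};

class ToyReserver {
  std::priority_queue<ReservationRequest,
                      std::vector<ReservationRequest>, ByPriority> waiting;
  unsigned in_progress = 0;
  unsigned max_concurrent;

  void maybe_grant() {
    // Grant slots to the highest-priority waiters while slots remain.
    while (in_progress < max_concurrent && !waiting.empty()) {
      ReservationRequest req = waiting.top();
      waiting.pop();
      ++in_progress;
      req.on_grant();
    }
  }

public:
  explicit ToyReserver(unsigned max) : max_concurrent(max) {}

  void request(ReservationRequest req) {
    waiting.push(std::move(req));
    maybe_grant();
  }

  void release() { // called when a PG finishes (or cancels) recovery
    --in_progress;
    maybe_grant();
  }
};

int main() {
  ToyReserver reserver(1); // a single recovery slot makes the ordering visible
  for (uint64_t pg = 1; pg <= 3; ++pg) {
    reserver.request({pg, pg * 100, [pg] {
      std::cout << "granted reservation to pg " << pg << "\n";
    }});
  }
  // pg 1 was granted on arrival (the slot was free); pg 2 and pg 3 wait.
  // Releasing the slot grants pg 3 next, since it has the highest priority.
  reserver.release();
  return 0;
}

With a single slot, the first request is granted on arrival; once it is
released, the highest-priority waiter is granted next, regardless of the
order in which the waiters arrived.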

On Wed, Feb 3, 2016 at 12:44 AM, Gaurav Bafna <bafnag@xxxxxxxxx> wrote:
> Hi Cephers,
>
> I am going through the OSD code flow. I noticed that the work queue for
> normal read/write operations is a prioritized queue (op_shardedwq in
> OSD.h), whereas the recovery work queue is FIFO with respect to PGs:
> the PG that was first discovered to need recovery is recovered first.
> A PG with more recovery work to do might sit at the back of the queue,
> and in that case its recovery will take longer.
>
> I think it is FIFO on the assumption that a PG which needs more
> recovery must have lost more OSDs and should therefore already be well
> ahead in the queue.
>
> Should we implement the recovery queue as a priority queue over PGs?
>
> Please let me know whether I am right or whether there is a problem in
> my understanding.
>
>
> --
> Gaurav Bafna
> 9540631400
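
To make the FIFO-versus-priority distinction in the quoted mail concrete,
here is a small standalone sketch (not Ceph code; the PGWork struct and
its missing_objects field are assumptions for illustration) showing how a
FIFO queue hands out PGs in discovery order, while ordering by outstanding
work would pick the most degraded PG first:

// Illustrative only: compare the order in which PGs would be picked for
// recovery under FIFO (discovery order) vs. a priority ordering by the
// amount of outstanding recovery work. Fields are assumptions, not Ceph's.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct PGWork {
  uint64_t pg_id;
  uint64_t missing_objects; // stand-in for "amount of recovery work"
};

int main() {
  // Discovery order: the most degraded PG (3) happens to be found last.
  std::deque<PGWork> discovered = {{1, 10}, {2, 50}, {3, 5000}};

  std::cout << "FIFO order:     ";
  for (const auto& pg : discovered)
    std::cout << pg.pg_id << " ";
  std::cout << "\n";

  // Priority order: the PG with the most outstanding work goes first.
  std::vector<PGWork> by_priority(discovered.begin(), discovered.end());
  std::sort(by_priority.begin(), by_priority.end(),
            [](const PGWork& a, const PGWork& b) {
              return a.missing_objects > b.missing_objects;
            });

  std::cout << "Priority order: ";
  for (const auto& pg : by_priority)
    std::cout << pg.pg_id << " ";
  std::cout << "\n";
  return 0;
}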