Re: [PATCH 2/2] xfs: mark the xfs-alloc workqueue as high priority


 



On 1/10/15 1:28 PM, Tejun Heo wrote:
> Hello, Eric.


> As long as the split worker is queued on a separate workqueue, it's
> not really stuck behind xfs_end_io's.  If the global pool that the
> work item is queued on can't make forward progress due to memory
> pressure, the rescuer will be summoned and it will pick out that work
> item and execute it.

Ok, but that's not happening...
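(For reference, the rescuer behavior described above only exists for workqueues allocated with WQ_MEM_RECLAIM. A minimal sketch of such an allocation -- names here are illustrative, not the actual xfs code:)

```c
/* Illustrative sketch only -- not the actual xfs workqueue setup. */
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM gives the workqueue a dedicated rescuer
	 * thread, which is what lets queued work items make forward
	 * progress when memory pressure prevents forking new pool
	 * workers.
	 */
	example_wq = alloc_workqueue("example-wq", WQ_MEM_RECLAIM, 0);
	return example_wq ? 0 : -ENOMEM;
}
```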

> The only reasons that work item would stay there are
> 
> * The rescuer is already executing something else from that workqueue
>   and that one is stuck.

I'll have to look at that.  I hope I still have access to the core...

> * The worker pool is still considered to be making forward progress -
>   there's a worker which isn't blocked and can burn CPU cycles.

AFAICT, the first work item in the pool is the xfs_end_io, blocked waiting on the ilock.

I assume it's only the first one that matters?

>   ie. if you have a busy spinning work item on the per-cpu workqueue,
>   it can stall progress.

...

> Again, if xfs is using workqueue correctly, that work item shouldn't
> get stuck at all.  What other workqueues are doing is irrelevant.

and yet here we are; one of us must be missing something.  It's quite
possibly me :) but we definitely have this thing wedged, and moving
the xfs-alloc work item to the front via high priority did resolve it.
Not saying it's the right solution, just a data point.
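(For context, the change in question amounts to adding WQ_HIGHPRI when the workqueue is allocated, which places its work items on the per-cpu high-priority worker pool rather than the normal one.  A sketch with an illustrative name, not the exact patch:)

```c
/* Illustrative sketch only -- not the exact patch. */
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *alloc_wq;

static int alloc_wq_init(void)
{
	/*
	 * WQ_HIGHPRI queues work on the per-cpu highpri worker pool,
	 * so these items are not stuck behind normal-priority work
	 * such as the blocked xfs_end_io described above.
	 */
	alloc_wq = alloc_workqueue("xfs-alloc",
				   WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
	return alloc_wq ? 0 : -ENOMEM;
}
```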

Thanks,
-Eric
 
> Thanks.
> 

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
