On Wed, Sep 28, 2011 at 02:16:49PM -0400, Christoph Hellwig wrote:
> On Wed, Sep 28, 2011 at 02:09:05PM -0400, Vivek Goyal wrote:
> > I had thought of implementing a separate lock for throttling. Then I
> > noticed a few operations, like checking for queue flags, where I would
> > be required to hold the queue lock.
>
> Can you do a writeup of these issues? E.g. if the flags are throttling
> specific we can move them to a separate flags field; if not, we can see
> if we can make them atomic or similar.

Sure. I am going through the code and trying to list some of the
dependencies.

But before that, I would say that trying to optimize the throttling code
to reduce queue lock contention should be last on the priority list, for
the simple reason that in the common case it does not incur any locking
cost. The queue lock cost comes into the picture only when an IO cgroup
is configured and some throttling rules are in place, and for most
people I don't think that's the common case.

Now, coming back to the question of dependencies: we use some services
of the request queue which need to happen under the queue lock.

- Looks like in the current code I am only accessing QUEUE_FLAG_DEAD. I
  see that this flag is not synchronized using the queue lock, so it
  probably is not a concern.

- I am using the tracing functionality, blk_add_trace_msg(). I guess it
  is not required to take the queue lock for this, but I am not sure.

- I am accessing q->backing_dev_info to get to the associated device's
  major and minor number.

That's it, I guess; that's what a quick look tells me. So it looks like
it is not too hard to convert the existing blk-throttle code to use its
own lock. That would also get rid of the dependency on the queue lock,
and in turn of the uncertainty associated with the queue lock being
valid (if the driver provided the lock).
Thanks
Vivek