On 04/09/2012 15:35, Michael S. Tsirkin wrote:
> I see.  I guess you can rewrite this as:
>
>     atomic_inc
>     if (atomic_read() == 1)
>
> which is a bit cheaper, and make the fact
> that you do not need increment and return to be atomic,
> explicit.

It seems more complicated to me for hardly any reason.  (Besides, is it
really cheaper?  It has one less memory barrier on some architectures I
frankly do not care much about, not on x86, but it also has two memory
accesses instead of one on all architectures.)

> Another simple idea: store last processor id in target,
> if it is unchanged no need to play with req_vq
> and take spinlock.

Not so sure; consider the previous example with last_processor_id equal
to 1.

    queuecommand on CPU #0             queuecommand #2 on CPU #1
  ------------------------------------------------------------------
    atomic_inc_return(...) == 1
                                       atomic_inc_return(...) == 2
                                       virtscsi_queuecommand to queue #1
    last_processor_id == 0? no
    spin_lock
    tgt->req_vq = queue #0
    spin_unlock
    virtscsi_queuecommand to queue #0

CPU #0 has now switched tgt->req_vq to queue #0 while request #2 is
still outstanding on queue #1, i.e. two requests from the same target
are in flight on two different queues.  This is not a network driver;
there are still a lot of locks around, and this micro-optimization
doesn't pay enough for the pain.

> Also - some kind of comment explaining why a similar race can not happen
> with this lock in place would be nice: I see why this specific race can
> not trigger but since lock is dropped later before you submit command, I
> have a hard time convincing myself what exactly guarantees that vq is
> never switched before or even while command is submitted.

Because tgt->reqs will never become zero (which is a necessary
condition for tgt->req_vq to change) as long as at least one request is
executing virtscsi_queuecommand.  There is a rough model of this in the
P.S. below.

Paolo
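
P.S.: here is a rough user-space model of the invariant, in case it
helps.  It is only a sketch, not the actual driver code: C11 atomics
and a pthread mutex stand in for atomic_t and the tgt_lock spinlock,
and the names pick_vq, complete_cmd and this_cpu_vq are made up for
illustration.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct virtio_scsi_vq { int id; };      /* stand-in for the real vq type */

struct target {
        atomic_int reqs;                /* commands in flight for this target */
        pthread_mutex_t tgt_lock;       /* models the driver's spinlock */
        struct virtio_scsi_vq *req_vq;  /* queue that in-flight commands use */
};

static struct virtio_scsi_vq *pick_vq(struct target *tgt,
                                      struct virtio_scsi_vq *this_cpu_vq)
{
        struct virtio_scsi_vq *vq;

        pthread_mutex_lock(&tgt->tgt_lock);
        /*
         * C11 spelling of atomic_inc_return(&tgt->reqs) == 1: a single
         * read-modify-write, not an inc followed by a separate read.
         * reqs going 0 -> 1 means no other command is in flight, which
         * is the only case in which req_vq is allowed to move.
         */
        if (atomic_fetch_add(&tgt->reqs, 1) == 0)
                tgt->req_vq = this_cpu_vq;
        vq = tgt->req_vq;
        pthread_mutex_unlock(&tgt->tgt_lock);

        /*
         * The lock is dropped before the command is submitted, but vq
         * cannot be switched underneath us: our own command keeps
         * reqs >= 1 until complete_cmd(), so no later caller can see
         * the 0 -> 1 transition that permits changing req_vq.
         */
        return vq;
}

static void complete_cmd(struct target *tgt)
{
        atomic_fetch_sub(&tgt->reqs, 1);        /* no longer in flight */
}

int main(void)
{
        static struct virtio_scsi_vq queues[2] = { { 0 }, { 1 } };
        struct target tgt = {
                .reqs = 0,
                .tgt_lock = PTHREAD_MUTEX_INITIALIZER,
                .req_vq = NULL,
        };

        /*
         * A second command arrives while the first is still in flight:
         * it must follow the first command's queue even though it runs
         * on another "CPU" that would prefer queues[1].
         */
        struct virtio_scsi_vq *a = pick_vq(&tgt, &queues[0]);
        struct virtio_scsi_vq *b = pick_vq(&tgt, &queues[1]);
        printf("cmd 1 on queue %d, cmd 2 on queue %d\n", a->id, b->id);

        complete_cmd(&tgt);
        complete_cmd(&tgt);
        return 0;
}

(Compile with "cc -pthread".  It prints queue 0 for both commands: the
second pick_vq sees reqs going 1 -> 2, not 0 -> 1, and therefore follows
req_vq instead of switching it.)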