Hi Sagi & Co,

On Mon, 2014-12-01 at 19:49 +0200, Sagi Grimberg wrote:
> Hey Nic & Co,
>
> I modified the logging patches to come after the stable related
> patches. Also, given it's been a while, I'm piggy-backing some
> more patches I have lying around to the series hoping they
> will land in 3.19.
>

Thank you for this detailed -v2 that re-orders stable iser-target
bugfixes ahead of >= v3.19 improvements. :)

The full patch-set has been applied to for-next as-is, with a few
extra comments below.

> This series mainly consists of:
>
> Patches 1-15: Some error flow fixes for live target stack shutdown
>   and cable pull with stress IO scenarios, as well as some
>   fixes in the area of bond failover scenarios.
>   (stable 3.10+ material)
>

So a couple of the smaller v3.10 patches could be further squashed in
order to reduce the workload for Greg-KH + Friends.

I'd like to do that before the -rc1 PULL over the next weeks + add a
bit more commit log detail for these, and will likely have a few more
questions to that end soon..

> Patches 16-20: expose the t10_pi attribute correctly and fix a crash
>   due to a bad dereference.
>   (stable 3.15+ material)
>

<nod>, thanks for addressing this regression.

> Patch 21: Workaround for live target stack unload in the presence
>   of multiple (60+) active sessions.
>

Very nice. :-)

> Patches 22-29: Some completion processing modifications done for
>   simplification and enhancements.
>

Thanks for adding the rx flush WR beacon bit, it really makes QP
shutdown WR accounting much simpler. (See the P.S. below for a rough
sketch of the idea.)

> Patches 30-31: Some more fixes needed in the live shutdown case.
>

<nod>

> Patches 32-33: Some logging refactoring. It is much easier to
>   instruct a user to increase the log level in this case.
>

Applied.

> Patch 34: Nit - remove code duplication.
>

Applied.

> While this set makes things better, there is still some work
> left to do, especially in the area of multi-session error flows.
>

What's the remaining TODO for large session count active I/O
shutdown..?

> I ran some performance benchmarks on this set; I was able to
> get iser target to service ~2100K read IOPs and ~1750K write
> IOPs using 4 2.6GHz Xeon cores against a single initiator
> (single 40GE/FDR link). More work can be done in this area indeed;
> no good reason why we wouldn't get these numbers with a single core.

Thanks for the performance update.

Btw, I assume this is without Moussa's (CC'ed) previous patch-set to
run with an unbounded isert_comp_wq and bump the mlx4 EQ count in
ethernet mode, right..?

Is there still a small block random I/O performance win from Moussa's
patch-set..?

> Some of the todos are:
> - Avoid data copy for ImmediateData writes (would require refactoring
>   the post_recv buffers logic).

...

> - Polling the CQ from a kthread instead of a work-queue, which would
>   benefit in:
>   * better concurrency in the multi-connection case, as we won't
>     serialize completion works which are bound to the same MSIX vector.
>   * reduced interrupts, by avoiding re-arming the CQ to maintain
>     fairness between multiple connections; we can just
>     schedule()/cond_resched() instead.

Both sound reasonable. (See the P.P.S. below for what I picture here.)

> - Closer examination of the locking schemes taken in the iscsit layer.
> - Reordering iser structures to fit hot items into cachelines.

Great.

Again, thanks for taking the extra time to re-order this series.

--nab
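
P.S. For anyone following the beacon discussion above, a rough sketch
of the idea as I read it: once the QP has been moved to the error
state, post a single sentinel recv WR carrying a magic wr_id, and
treat its flush completion as proof that every WR posted before it
has been reaped. This is illustrative only -- the names here
(ISER_BEACON_WRID, isert_post_beacon, flush_done) are stand-ins, not
the actual code from this series.

	#include <linux/completion.h>
	#include <rdma/ib_verbs.h>

	/* Hypothetical sentinel value for the beacon's wr_id. */
	#define ISER_BEACON_WRID	0xfffffffffffffffeULL

	static int isert_post_beacon(struct ib_qp *qp)
	{
		/* Zero-SGE recv WR tagged with the sentinel wr_id. */
		struct ib_recv_wr beacon = { .wr_id = ISER_BEACON_WRID };
		struct ib_recv_wr *bad_wr;

		/*
		 * The QP is already in the error state, so this WR and
		 * everything posted before it will complete with
		 * IB_WC_WR_FLUSH_ERR.
		 */
		return ib_post_recv(qp, &beacon, &bad_wr);
	}

	/* Called from the CQ completion path for each reaped wc. */
	static void isert_check_beacon(struct ib_wc *wc,
				       struct completion *flush_done)
	{
		if (wc->status == IB_WC_WR_FLUSH_ERR &&
		    wc->wr_id == ISER_BEACON_WRID)
			complete(flush_done); /* all prior WRs reaped */
	}

Shutdown then becomes a single wait_for_completion(flush_done) instead
of per-WR flush accounting.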
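
P.P.S. On the kthread CQ polling TODO, here is roughly what I picture
(again purely illustrative; isert_handle_wc is a hypothetical stand-in
for the real completion dispatch, and a real version would want proper
wc batching and shutdown handling):

	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <rdma/ib_verbs.h>

	void isert_handle_wc(struct ib_wc *wc); /* hypothetical dispatch */

	static int isert_cq_kthread(void *data)
	{
		struct ib_cq *cq = data;
		struct ib_wc wc;

		while (!kthread_should_stop()) {
			if (ib_poll_cq(cq, 1, &wc) > 0) {
				/*
				 * Process completions inline: no workqueue
				 * hop, and no serialization against other
				 * connections bound to the same vector.
				 */
				isert_handle_wc(&wc);
			} else {
				/*
				 * CQ empty: yield the CPU instead of
				 * re-arming via ib_req_notify_cq(), which
				 * avoids the interrupt and keeps fairness
				 * between connections sharing a vector.
				 */
				cond_resched();
			}
		}
		return 0;
	}

Started via kthread_run(isert_cq_kthread, cq, "isert_cq_poll") during
CQ setup, and stopped with kthread_stop() at teardown.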