Re: [PATCH v2 00/34] iser target for 3.19

On 12/2/2014 9:17 AM, Nicholas A. Bellinger wrote:
Hi Sagi & Co,

On Mon, 2014-12-01 at 19:49 +0200, Sagi Grimberg wrote:
Hey Nic & Co,

I modified the logging patches to come after the stable related
patches. Also, given it's been a while, I'm piggy-backing some
more patches I have laying around to the series hoping they
will land in 3.19.


Thank you for this detailed -v2 that re-orders stable iser-target
bugfixes ahead of >= v3.19 improvements.  :)

The full patch-set has been applied to for-next as-is, with a few extra
comments below.

This series mainly consists of:

Patches 1-15: Some error flow fixes for live target stack shutdown
	and cable pull with stress IO scenarios, as well as some
	fixes in the area of bond failover scenarios.
	(stable 3.10+ material)


So a couple of the smaller v3.10 patches could be further squashed in
order to reduce the workload for Greg-KH + Friends.

I'd like to do that before the -rc1 PULL over the next few weeks + add a bit
more commit log detail for these, and will likely have a few more
questions to that end soon.

Sure.

While this set makes things better, there is still some work
left to do, especially in the area of multi-session error flows.


What's the remaining TODO for large session count active I/O shutdown..?

I noticed that in some cases (not easily reproduced) the teardown sequence
is blocked forever in target_wait_for_sess_cmds, probably because a
command kref put is missing somewhere.

Also, when shutting down the target during a multi-initiator login
sequence, we found a use-after-free of sess_cmd_lock (in target_sess_cmd_list_set_waiting).
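
For readers not familiar with the shutdown path, both symptoms come down to
refcounting on queued commands. Below is a minimal, self-contained sketch of
the pattern (the demo_* names are hypothetical, and this is deliberately
simplified compared to target_get_sess_cmd()/target_put_sess_cmd() in the
target core): shutdown marks the session as draining and then sleeps on a
completion that only fires when the last reference on the last listed
command is dropped, so a single missing put keeps
target_wait_for_sess_cmds blocked forever.

/*
 * Hypothetical sketch of the kref/completion pattern behind
 * target_wait_for_sess_cmds(); not the actual target core code.
 * Session/command field initialization is assumed to happen elsewhere.
 */
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct demo_sess {
        spinlock_t              cmd_lock;
        struct list_head        cmd_list;
        struct completion       cmds_done;      /* fired when the list drains */
        bool                    waiting;        /* set at shutdown */
};

struct demo_cmd {
        struct kref             kref;
        struct list_head        node;
        struct demo_sess        *sess;
};

static void demo_cmd_release(struct kref *kref)
{
        struct demo_cmd *cmd = container_of(kref, struct demo_cmd, kref);
        struct demo_sess *sess = cmd->sess;
        bool last;

        spin_lock(&sess->cmd_lock);
        list_del(&cmd->node);
        last = sess->waiting && list_empty(&sess->cmd_list);
        spin_unlock(&sess->cmd_lock);

        kfree(cmd);
        if (last)
                complete(&sess->cmds_done);     /* unblocks the waiter below */
}

/*
 * Every path that finishes or aborts a command must drop its reference
 * exactly once; a missed put keeps cmd_list non-empty forever.
 */
static void demo_put_cmd(struct demo_cmd *cmd)
{
        kref_put(&cmd->kref, demo_cmd_release);
}

/* Shutdown side: mark the session as draining, then wait for the last put. */
static void demo_wait_for_cmds(struct demo_sess *sess)
{
        bool empty;

        spin_lock(&sess->cmd_lock);
        sess->waiting = true;
        empty = list_empty(&sess->cmd_list);
        spin_unlock(&sess->cmd_lock);

        if (!empty)
                wait_for_completion(&sess->cmds_done);
}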


I ran some performance benchmarks on this set, and was able to
get the iser target to service ~2100K read IOPS and ~1750K write
IOPS using 4 2.6GHz Xeon cores against a single initiator
(single 40GE/FDR link). More work can be done in this area indeed;
there is no good reason why we shouldn't reach these numbers with a single core.

Thanks for the performance update.

Btw, I assume this is without Moussa's (CC'ed) previous patch-set to run
with unbounded isert_comp_wq and bump mlx4 eq count in ethernet mode,
right..?

This is still with bounded WQs. I still need to understand the
latency implications of using WQ_UNBOUND, especially on NUMA systems.
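
For context, the knob under discussion is just the allocation flag on the
completion workqueue. A rough sketch of the two variants (the flags shown and
the demo_ names are assumptions for illustration, not necessarily what
isert_comp_wq uses in any given tree):

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_comp_wq;

static int demo_alloc_comp_wq(bool unbound)
{
        if (unbound) {
                /*
                 * WQ_UNBOUND: work items are not pinned to the submitting
                 * CPU, so completions can spread across cores, but they may
                 * run on a remote NUMA node relative to the CQ's interrupt
                 * vector, which is the latency concern mentioned above.
                 */
                demo_comp_wq = alloc_workqueue("demo_comp_wq",
                                               WQ_UNBOUND | WQ_HIGHPRI, 0);
        } else {
                /*
                 * Bound (per-CPU) workqueue: completion work runs on the CPU
                 * it was queued from, preserving cache/NUMA locality at the
                 * cost of concurrency when a single core is saturated.
                 */
                demo_comp_wq = alloc_workqueue("demo_comp_wq",
                                               WQ_HIGHPRI, 0);
        }
        return demo_comp_wq ? 0 : -ENOMEM;
}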

Regarding mlx4, the mlx4_en EQ reservation left only 3 interrupt
vectors for RoCE applications (iser). A patch set to address this is on
its way.
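
To make the vector-count point concrete: a ULP like isert typically spreads
its completion queues across device->num_comp_vectors, so only 3 usable
vectors caps completion processing at 3 CQs/cores. A hedged sketch of that
pattern (the demo_ helper is hypothetical; ib_create_cq() is shown with its
~3.19-era signature, which later kernels replaced with one taking a
struct ib_cq_init_attr):

#include <linux/err.h>
#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

static int demo_alloc_comp_cqs(struct ib_device *device,
                               ib_comp_handler comp_handler,
                               int cqe, struct ib_cq **cqs, int max_cqs)
{
        int i, n = min(device->num_comp_vectors, max_cqs);

        for (i = 0; i < n; i++) {
                /* One CQ per completion vector, assigned round-robin. */
                cqs[i] = ib_create_cq(device, comp_handler, NULL, NULL,
                                      cqe, i /* comp_vector */);
                if (IS_ERR(cqs[i]))
                        return PTR_ERR(cqs[i]); /* unwind of earlier CQs omitted */
        }
        return n;       /* number of CQs actually created */
}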


Is there still a small block random I/O performance win from Moussa's
patch-set..?

I have yet to test WQ_UNBOUND. I'll do that soon.

Sagi.



