Re: [Lsf-pc] [LSF/MM TOPIC] iSCSI MQ adoption via MCS discussion

On 1/12/2015 10:05 PM, Mike Christie wrote:
On 01/11/2015 03:23 AM, Sagi Grimberg wrote:
On 1/9/2015 8:00 PM, Michael Christie wrote:
<SNIP>


Session wide command sequence number synchronization isn't something to
be removed as part of the MQ work.  It's an iSCSI/iSER protocol
requirement.

That is, the expected and maximum sequence numbers (ExpCmdSN and
MaxCmdSN) are returned as part of every response PDU, and the initiator
uses them to determine when the command sequence number window is open,
i.e. when new non-immediate commands may be sent to the target.
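
[To make the window mechanics concrete, here is a minimal sketch of the
check being described, with illustrative names (not the actual
open-iscsi ones), assuming iSCSI's 32-bit serial number arithmetic
(RFC 1982):

#include <stdbool.h>
#include <stdint.h>

/* Serial number arithmetic: true if a <= b modulo 2^32. */
static bool sna_lte(uint32_t a, uint32_t b)
{
        return a == b || (int32_t)(a - b) < 0;
}

struct sess_cmdsn {
        uint32_t cmdsn;         /* next CmdSN to assign, session wide  */
        uint32_t exp_cmdsn;     /* ExpCmdSN from the last response PDU */
        uint32_t max_cmdsn;     /* MaxCmdSN from the last response PDU */
};

/* A new non-immediate command may only be sent while this holds. */
static bool cmdsn_window_open(const struct sess_cmdsn *s)
{
        return sna_lte(s->cmdsn, s->max_cmdsn);
}

/* Every response PDU carries ExpCmdSN/MaxCmdSN and may reopen the
 * window; this is the session-wide state that every submitting
 * context has to agree on. */
static void cmdsn_update(struct sess_cmdsn *s, uint32_t exp, uint32_t max)
{
        s->exp_cmdsn = exp;
        s->max_cmdsn = max;
}
]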

So, given that some manner of session wide synchronization is already
required between different contexts in the existing single connection
case, to update the command sequence number and check when the window
opens, it's a fallacy to claim MC/S adds some new type of
initiator-specific synchronization overhead vs. the single connection
code.

I think you are assuming we are leaving the iscsi code as it is today.

For the non-MCS mq session per CPU design, we would allocate and bind
each session and its resources to a specific CPU. They would then only
be accessed by the threads on that one CPU, so we get our
serialization/synchronization from that. That is why we are saying we
do not need something like atomic_t/spin_locks for the sequence number
handling in this type of implementation.

If we just tried to do this with the old code, where the session could
be accessed on multiple CPUs, then you are right: we would need
locks/atomics, like we have in the MCS case.
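
[This is only a sketch of the design Mike describes, with made-up names
(not the existing open-iscsi structures), assuming the session and all
of its submission/completion work are pinned to one CPU:

#include <linux/percpu.h>
#include <linux/types.h>

struct iscsi_cpu_session {
        u32 cmdsn;      /* only ever touched from the owning CPU */
        u32 exp_cmdsn;
        u32 max_cmdsn;
        /* ... per-session resources allocated on this CPU ... */
};

static DEFINE_PER_CPU(struct iscsi_cpu_session, mq_session);

static u32 iscsi_cpu_assign_cmdsn(void)
{
        /* No atomic_t, no spinlock: preemption is off while we hold
         * the pointer, and no other CPU ever references this session. */
        struct iscsi_cpu_session *s = get_cpu_ptr(&mq_session);
        u32 sn = s->cmdsn++;

        put_cpu_ptr(&mq_session);
        return sn;
}
]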


I don't think we will want to restrict ourselves to a session per CPU.
There is a tradeoff question of system resources: we might want to
allow a user to configure multiple HW queues while still not using too
much of the system's resources. So the session locks would still be
used, but they would definitely be less contended...
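
[A rough sketch of that middle ground (again with hypothetical names):
keep the per-session lock, but size the session array from a user knob,
so each lock is shared by a small group of CPUs instead of all of them:

#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct iscsi_mq_session {
        spinlock_t cmdsn_lock;  /* still needed: several CPUs share it */
        u32 cmdsn;
};

static struct iscsi_mq_session *sessions; /* nr_sessions entries, allocated at init */
static unsigned int nr_sessions;          /* user configurable */

static u32 iscsi_shared_assign_cmdsn(void)
{
        /* raw_: the CPU -> session mapping only spreads load, so a
         * migration between lookup and lock is harmless. */
        struct iscsi_mq_session *s =
                &sessions[raw_smp_processor_id() % nr_sessions];
        u32 sn;

        spin_lock(&s->cmdsn_lock);
        sn = s->cmdsn++;
        spin_unlock(&s->cmdsn_lock);
        return sn;
}

With nr_sessions equal to the CPU count each lock becomes effectively
uncontended, approaching Mike's case; with nr_sessions = 1 it is
today's single session.]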

Are you talking specifically about the session per CPU, or also about
MCS and doing a connection per CPU?

This applies to both.


Based on the srp work, how bad do you think it will be to do a
session/connection per CPU? What are you thinking will be more common?
A session per 4 CPUs? Per 2? Per 8?

This is a matter of degree, which is exactly why we need to let the
user choose. I don't think there is a magic number here; there is a
tradeoff between performance and memory footprint.
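
[To put made-up numbers on the footprint side: on a 64-core host, a
session per CPU with a queue depth of 128 means pre-allocating
64 * 128 = 8192 command contexts per target, while a session per 4 CPUs
needs 16 * 128 = 2048; multiplied across targets and paths, that
difference is exactly the tradeoff above.]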


There is also multipath to take into account here. We could do a mq/MCS
session/connection per CPU (or group of CPUs) and then also one of
those per transport path. We could also do a mq/MCS session/connection
per transport path, and then bind those to specific CPUs. Or something
in between.
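
[The layouts Mike lists really only differ in how a (transport path,
CPU) pair is mapped to a session; purely as an illustration (none of
these helpers exist anywhere):

/* (a) a session per CPU, per transport path:
 *     nr_paths * nr_cpus sessions in total */
static unsigned int session_idx_per_cpu(unsigned int path,
                                        unsigned int cpu,
                                        unsigned int nr_cpus)
{
        return path * nr_cpus + cpu;
}

/* (b) one session per transport path, a group of CPUs bound to each:
 *     nr_paths sessions in total */
static unsigned int session_idx_per_path(unsigned int path)
{
        return path;
}

/* and "something in between" is a session per CPU group, per path */
static unsigned int session_idx_grouped(unsigned int path,
                                        unsigned int cpu,
                                        unsigned int cpus_per_group,
                                        unsigned int groups_per_path)
{
        return path * groups_per_path +
               (cpu / cpus_per_group) % groups_per_path;
}
]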


Is it a good idea to tie the iSCSI implementation to multipath? I've
seen deployments where multipath was not used for HA (NIC bonding was
used for that instead).

The srp implementation allows the user to choose the number of
channels per target, and the default was chosen based on empirical
results (Bart, please correct me if I'm wrong here).
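
[For reference, the knob being referred to is, as far as I recall, the
ib_srp "ch_count" module parameter; the declaration is roughly along
these lines (paraphrased; the exact description text in
drivers/infiniband/ulp/srp/ib_srp.c may differ):

#include <linux/module.h>

static unsigned int ch_count;
module_param(ch_count, uint, 0444);
MODULE_PARM_DESC(ch_count,
        "Number of RDMA channels per SRP target; 0 selects the empirically chosen default");

so a user can load it with, e.g., "modprobe ib_srp ch_count=4".]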

Sagi.


