Re: [PATCH 2/3] libfc: A modular Fibre Channel library

Vasu Dev wrote:
> Andi Kleen wrote:
>> On Wed, Dec 10, 2008 at 10:42:28AM -0800, Vasu Dev wrote:
>>  
>>> It had a load balancing issue but that is now fixed; the latest
>>> submitted code with its updated comment is:
>>>
>>>     /*
>>>      * The incoming frame's exchange id (oxid) is ANDed with the
>>>      * number of online cpus minus one to get cpu_idx, which is then
>>>      * used to select a per-cpu kernel thread from fcoe_percpu.  If
>>>      * that cpu is offline or has no kernel thread for the derived
>>>      * cpu_idx, cpu_idx is initialized to the first online cpu index.
>>>      */
>>>     cpu_idx = oxid & (num_online_cpus() - 1);
>>>     
>>
>> First note that num_online_cpus() is not guaranteed to be a power of
>> two, so subtracting 1 is not guaranteed to give a suitable mask. You
>> might actually lose random bits.
> 
> Correct, this works best only with a power-of-2 number of online cpus,
> which would be the most common use case. I agree it won't load balance
> as well in the non-power-of-2 case.
> 
>> Also, your load balancing scheme is unusual, to say the least; e.g.
>> when you're just talking to a single frame exchange you would always
>> transfer data between CPUs instead of keeping it all on the CPU that
>> processes the interrupt. Normally the rule of thumb is to use local
>> data as much as possible, or, when you distribute like this, to at
>> least stay in the same socket.
> 
> We cannot control which cpu gets interrupted for an FC frame on a
> typical generic NIC, so we may end up receiving nearly all FC frames
> on one and the same cpu even though the system might have several
> other cpus available. In that scenario, passing frames up on the same
> cpu as suggested above won't do any load balancing, so some sort of
> load balancing based on FC frame attributes is required here.
> 
> As I said in my last response, "performance tuning is yet to be
> done", but you bring up some good related points now about
> cross-socket frame migration and balancing on systems with a
> non-power-of-2 number of cpus. These should be considered during the
> pending performance tuning. For now I could add an additional check
> to select a cpu within the same socket, but I'm not sure how to do
> that; is there a kernel call for this? It might also cause more
> locking contention on libfc structs, so we really have to experiment
> with these things during performance tuning. Thanks, Andi, for these
> hints on performance considerations.
> 
> Vasu

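To make Andi's point concrete: with 6 online cpus the expression becomes
oxid & 5, and 5 is binary 101, so cpu_idx can only ever be 0, 1, 4 or 5;
cpus 2 and 3 would never receive a frame.
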
Somehow the exchange IDs should be allocated so that responses can be
directed back to the same CPU that issued the requests.  At one time
the mask was computed such that it was always a power of 2.
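
As a rough sketch of restoring that (illustrative only, not the
submitted code), the mask could be recomputed from the online cpu
count, e.g. with rounddown_pow_of_two() from linux/log2.h:

	/*
	 * Illustrative sketch: round the online cpu count down to a
	 * power of 2 so that the AND below is always a valid mask.
	 * With 6 online cpus this uses only cpus 0-3, trading a little
	 * balance for correctness.
	 */
	unsigned int cpu_mask = rounddown_pow_of_two(num_online_cpus()) - 1;

	cpu_idx = oxid & cpu_mask;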

If we made a separate exchange ID space per CPU and the LLD allocated the XIDs,
then it could also be sure it was doing the right thing with the responses.

If we used a separate exchange manager for each CPU, there would be less
lock contention in the EM code.   As we divvy up the exchange ID space
between the multiple EMs, we could use the high-order bits as the CPU
number.  We would probably want to limit the number of EMs to something
like 8 or 16 even on systems with more CPUs.
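
Roughly, the split could look like this (a sketch only; all names here
are made up for illustration and are not from the patch series):

	#define FC_XID_EM_BITS	4		/* up to 16 per-cpu EMs */
	#define FC_XID_ID_BITS	(16 - FC_XID_EM_BITS)	/* XIDs are 16 bits */
	#define FC_XID_ID_MASK	((1U << FC_XID_ID_BITS) - 1)

	/* Build an XID whose high-order bits carry the owning EM's number. */
	static inline u16 fc_em_xid(unsigned int em, u16 id)
	{
		return (em << FC_XID_ID_BITS) | (id & FC_XID_ID_MASK);
	}

	/* On a response, recover the owning EM (and thus cpu) from OX_ID. */
	static inline unsigned int fc_em_from_xid(u16 xid)
	{
		return xid >> FC_XID_ID_BITS;
	}

That way the receive path could steer a response back to the issuing
cpu with a shift instead of a lookup.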

Like you said, Vasu, this is for performance tuning time, whenever that is.
However, I think that might be a few releases away or a gradual evolution
since there might be some significant changes involved.

	Joe
