Re: [Open-FCoE] [PATCH] add cpu on/offline support in Fibre Channel exchange and sequence handling

Hi Rob,

Sorry for the late reply.

On Tue, Oct 12, 2010 at 6:08 AM, Robert Love <robert.w.love@xxxxxxxxx> wrote:
> On Fri, 2010-10-08 at 22:57 +0800, Hillf Danton wrote:
>
> Hi Hillf,
>
> Can you please prefix the libfc patches with "libfc: <title>"? Also, for

Yes, I can.

> your other patches that have a "SCSI" prefix, I think James' system adds
> the "[SCSI]" prefix when he applies patches so you don't need to add it.

I will drop the SCSI prefix.

>
>> Fibre Channel exchange pool is setup in a per cpu way that
>> entire exchange id range is divided equally across all cpus.
>>
>> When cpu goes offline, the corresponding range of exchg id
>> becomes unavailable until it is online again.
>>
>> This work tries to fix the unavailability based on notifier_block.
>>
>> Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
>> ---
>>
>> --- o/linux-2.6.36-rc4/drivers/scsi/libfc/fc_exch.c  2010-09-13
>> 07:07:38.000000000 +0800
>> +++ m/linux-2.6.36-rc4/drivers/scsi/libfc/fc_exch.c  2010-10-08
>> 22:24:48.000000000 +0800
>
> Most patches I've seen on the various mailing lists are created by
> diffing the files within the kernel directory, so that you don't have
> the "/linux-2.6.36-rc/" in each hunk. It's the difference between using
> 'patch -p1' or 'patch -p2' when applying the patch.

I wanted to make the kernel version clear to patch reviewers.

>
> I think this is preventing git-am from applying the patches. To import
> them I ended up having git-am fail and then manually using the 'patch
> -p2' command, so that I could get both the commit message from git-am
> and then the patch itself from 'patch'. I don't know about anyone else,
> but if you post patches without the kernel directory in the diff line it
> would help me out. Thanks.

Sorry for the trouble this caused you.

>
>> @@ -26,6 +26,9 @@
>>  #include <linux/timer.h>
>>  #include <linux/slab.h>
>>  #include <linux/err.h>
>> +#include <linux/notifier.h>
>> +#include <linux/cpu.h>
>> +#include <linux/cpumask.h>
>>
>>  #include <scsi/fc/fc_fc2.h>
>>
>> @@ -40,6 +43,8 @@ static u16 fc_cpu_order;   /* 2's power to
>>  static struct kmem_cache *fc_em_cachep;        /* cache for exchanges */
>>  struct workqueue_struct *fc_exch_workqueue;
>>
>> +static cpumask_t fc_offline_cpu_mask = CPU_MASK_NONE;
>> +
>>  /*
>>   * Structure and function definitions for managing Fibre Channel Exchanges
>>   * and Sequences.
>> @@ -666,6 +671,7 @@ static struct fc_exch *fc_exch_em_alloc(
>>      unsigned int cpu;
>>      u16 index;
>>      struct fc_exch_pool *pool;
>> +    unsigned int index = -1;
>
> This patch doesn't compile:

This is my fault; I prepared that hunk based upon for_each_cpu() in
cpumask.h, sorry:

/**
 * for_each_cpu - iterate over every cpu in a mask
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the cpumask pointer
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
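
The fix is simply to keep a single declaration; roughly (an untested
sketch, I will respin the patch with this folded in):

	unsigned int cpu;
	unsigned int index = -1;	/* replaces the old "u16 index;" */
	struct fc_exch_pool *pool;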

>
> drivers/scsi/libfc/fc_exch.c: In function 'fc_exch_em_alloc':
> drivers/scsi/libfc/fc_exch.c:674: error: conflicting types for 'index'
> drivers/scsi/libfc/fc_exch.c:672: note: previous declaration of 'index'
> was here
> make[3]: *** [drivers/scsi/libfc/fc_exch.o] Error 1
>
> In general I think the idea is fine. I haven't run into any
> circumstances where the per-CPU pools are exhausted. I know others have

The design of the exch pool is an outstandingly good model for
applications on multi-CPU machines; I will borrow it in my own work.
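
For readers who only see the hunks quoted here: the hotplug side of the
patch is a plain notifier_block that flags a dead CPU's pool so its xid
sub-range can be reclaimed. Roughly (a sketch from memory; the respin
may differ in the exact switch arms):

static int fc_exch_cpu_notify(struct notifier_block *nb,
			      unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action) {
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
		/* this cpu's xid sub-range is now orphaned; let
		 * fc_exch_em_alloc() take it over */
		cpumask_set_cpu(cpu, &fc_offline_cpu_mask);
		break;
	case CPU_ONLINE:
	case CPU_ONLINE_FROZEN:
		cpumask_clear_cpu(cpu, &fc_offline_cpu_mask);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block fc_exch_cpu_nb = {
	.notifier_call = fc_exch_cpu_notify,
};

registered once with register_hotcpu_notifier(&fc_exch_cpu_nb) from
fc_setup_exch_mgr().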

> when using NPIV, but there were other ways to fix those issues.
>
>>
>>      /* allocate memory for exchange */
>>      ep = mempool_alloc(mp->ep_pool, GFP_ATOMIC);
>> @@ -679,6 +685,7 @@ static struct fc_exch *fc_exch_em_alloc(
>>      pool = per_cpu_ptr(mp->pool, cpu);
>>      spin_lock_bh(&pool->lock);
>>      put_cpu();
>> +again:
>>      index = pool->next_index;
>>      /* allocate new exch from pool */
>>      while (fc_exch_ptr_get(pool, index)) {
>> @@ -719,6 +726,17 @@ out:
>>  err:
>>      spin_unlock_bh(&pool->lock);
>>      atomic_inc(&mp->stats.no_free_exch_xid);
>
> I think this should be moved below the offline pool check, when we've

Noted; I will move it below the offline pool check. A sketch of the
reworked error path is at the end of this mail.

> really failed to allocate an exchange. It's not a per-CPU pool stat,
> it's an overall resource allocation failure stat, which you're now
> helping to happen less frequently. With the current placement we'll be
> bumping up the counter even though we have allocated an exchange.
>
> Thanks, //Rob
>
>
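
As promised above, roughly what I have in mind for the reworked error
path (an uncompiled sketch; the "again" label and fc_offline_cpu_mask
are from the patch, the reclaim loop is what the respin will add):

err:
	spin_unlock_bh(&pool->lock);
	/*
	 * Before giving up, take over the pool of an offline cpu,
	 * whose xid sub-range would otherwise sit idle.
	 */
	for_each_cpu(cpu, &fc_offline_cpu_mask) {
		cpumask_clear_cpu(cpu, &fc_offline_cpu_mask);
		pool = per_cpu_ptr(mp->pool, cpu);
		spin_lock_bh(&pool->lock);
		goto again;	/* pool->lock held, as at "again:" */
	}
	/* only now have we really failed to allocate an exchange */
	atomic_inc(&mp->stats.no_free_exch_xid);
	mempool_free(ep, mp->ep_pool);
	return NULL;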

Thank you for sharing so much.

//Hillf