On Thu, Oct 28, 2010 at 5:36 AM, Robert Love <robert.w.love@xxxxxxxxx> wrote:
> On Fri, 2010-10-22 at 20:20 +0800, Hillf Danton wrote:
>> When allocating a new exch from a pool, the scan for a free slot in
>> the exch array fluctuates as the pool gets close to exhaustion.
>>
>> The fluctuation is smoothed, and in the common case the scan now
>> needs at most two probes.
>>
> Hi Hillf,
>
> I think this patch is fine, aside from a few minor nits below. I'm
> not sure how much this benefits us though. I don't think that it will
> hurt us, but I'd like to leave it in the fcoe-next tree a bit to make
> sure there aren't any adverse effects. I will fix the two issues I
> mention below and check it into fcoe-next, unless there are objections.
>
> Have you done any profiling with this patch to show the improvement?

Looking up an exch is O(1), which is very cool. I tried to make
allocation O(1) as well, but failed; the two-slot cache of freed
slots is as close as I could get.
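For reference, lookup boils down to index arithmetic plus a single
array read, roughly along these lines (a sketch of the idea, not the
exact libfc code; mp->max_xid and fc_cpu_mask are assumed from
context, the other names appear in the patch below):

	/* Sketch: the low bits of the xid pick the per-cpu pool and
	 * the high bits pick the slot inside it, so no scanning is
	 * involved in lookup.
	 */
	static struct fc_exch *lookup_sketch(struct fc_exch_mgr *mp, u16 xid)
	{
		struct fc_exch_pool *pool;

		if (xid < mp->min_xid || xid > mp->max_xid)
			return NULL;
		pool = per_cpu_ptr(mp->pool, xid & fc_cpu_mask);
		return fc_exch_ptr_get(pool, (xid - mp->min_xid) >> fc_cpu_order);
	}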
>
>> Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
>> ---
>>
>> --- a/drivers/scsi/libfc/fc_exch.c	2010-09-13 07:07:38.000000000 +0800
>> +++ b/drivers/scsi/libfc/fc_exch.c	2010-10-22 20:02:54.000000000 +0800
>> @@ -67,6 +67,11 @@ struct workqueue_struct *fc_exch_workque
>>  struct fc_exch_pool {
>>  	u16		next_index;
>>  	u16		total_exches;
>> +
>> +	/* two-slot cache of free slots in the exch array */
>> +	u16		left;
>> +	u16		right;
>> +
>>  	spinlock_t	lock;
>>  	struct list_head ex_list;
>>  };
>> @@ -397,13 +402,26 @@ static inline void fc_exch_ptr_set(struc
>>  static void fc_exch_delete(struct fc_exch *ep)
>>  {
>>  	struct fc_exch_pool *pool;
>> +	u16 index;
>>
>>  	pool = ep->pool;
>>  	spin_lock_bh(&pool->lock);
>>  	WARN_ON(pool->total_exches <= 0);
>>  	pool->total_exches--;
>> -	fc_exch_ptr_set(pool, (ep->xid - ep->em->min_xid) >> fc_cpu_order,
>> -			NULL);
>> +
>> +	/* update the cache of free slots */
>> +	index = (ep->xid - ep->em->min_xid) >> fc_cpu_order;
>> +	if (pool->left == FC_XID_UNKNOWN)
>> +		pool->left = index;
>> +	else if (pool->right == FC_XID_UNKNOWN)
>> +		pool->right = index;
>> +	else
>> +		/* XXX
>> +		 * next = entropy(index, left, right);
>> +		 **/
>
> We can remove this comment, right?

Noted.

>
>> +		pool->next_index = index;
>> +
>> +	fc_exch_ptr_set(pool, index, NULL);
>>  	list_del(&ep->ex_list);
>>  	spin_unlock_bh(&pool->lock);
>>  	fc_exch_release(ep);	/* drop hold for exch in mp */
>> @@ -679,6 +697,19 @@ static struct fc_exch *fc_exch_em_alloc(
>>  	pool = per_cpu_ptr(mp->pool, cpu);
>>  	spin_lock_bh(&pool->lock);
>>  	put_cpu();
>> +
>> +	/* peek the cache of free slots */
>> +	if (pool->left != FC_XID_UNKNOWN) {
>> +		index = pool->left;
>> +		pool->left = FC_XID_UNKNOWN;
>> +		goto hit;
>> +	}
>> +	if (pool->right != FC_XID_UNKNOWN) {
>> +		index = pool->right;
>> +		pool->right = FC_XID_UNKNOWN;
>> +		goto hit;
>> +	}
>> +
>>  	index = pool->next_index;
>>  	/* allocate new exch from pool */
>>  	while (fc_exch_ptr_get(pool, index)) {
>> @@ -687,7 +718,7 @@ static struct fc_exch *fc_exch_em_alloc(
>>  			goto err;
>>  	}
>>  	pool->next_index = index == mp->pool_max_index ? 0 : index + 1;
>> -
>> +hit:
>>  	fc_exch_hold(ep);	/* hold for exch in mp */
>>  	spin_lock_init(&ep->ex_lock);
>>  	/*
>> @@ -2181,6 +2212,8 @@ struct fc_exch_mgr *fc_exch_mgr_alloc(st
>>  		goto free_mempool;
>>  	for_each_possible_cpu(cpu) {
>>  		pool = per_cpu_ptr(mp->pool, cpu);
>> +		pool->left  =
>
> I think we should initialize this without relying on the following line.

It would look like this: pool->left = pool->right = FC_XID_UNKNOWN;
right? I want the cache to play its role after warm-up.

thanks
//Hillf

>
>> +		pool->right = FC_XID_UNKNOWN;
>>  		spin_lock_init(&pool->lock);
>>  		INIT_LIST_HEAD(&pool->ex_list);
>>  	}
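To see the cache's behaviour outside the kernel, here is a minimal
standalone toy of the same two-slot scheme (illustrative only: no
locking, a tiny fixed-size pool, and names that deliberately do not
match libfc):

	#include <stdio.h>

	#define POOL_SIZE	8
	#define SLOT_UNKNOWN	0xffffu	/* plays the role of FC_XID_UNKNOWN */

	static int busy;		/* dummy payload marking a slot as taken */

	struct toy_pool {
		void *slot[POOL_SIZE];
		unsigned short next_index;
		unsigned short left, right;	/* recently freed slot indices */
	};

	static int toy_alloc(struct toy_pool *p)
	{
		unsigned short index;

		/* Fast path: reuse a cached free slot without scanning. */
		if (p->left != SLOT_UNKNOWN) {
			index = p->left;
			p->left = SLOT_UNKNOWN;
			goto hit;
		}
		if (p->right != SLOT_UNKNOWN) {
			index = p->right;
			p->right = SLOT_UNKNOWN;
			goto hit;
		}
		/* Slow path: circular scan from next_index, as before the patch. */
		index = p->next_index;
		while (p->slot[index]) {
			index = index == POOL_SIZE - 1 ? 0 : index + 1;
			if (index == p->next_index)
				return -1;	/* pool exhausted */
		}
		p->next_index = index == POOL_SIZE - 1 ? 0 : index + 1;
	hit:
		p->slot[index] = &busy;
		return index;
	}

	static void toy_free(struct toy_pool *p, unsigned short index)
	{
		/* Park the freed index in the cache for the next allocation. */
		if (p->left == SLOT_UNKNOWN)
			p->left = index;
		else if (p->right == SLOT_UNKNOWN)
			p->right = index;
		else
			p->next_index = index;
		p->slot[index] = NULL;
	}

	int main(void)
	{
		struct toy_pool p = { .left = SLOT_UNKNOWN, .right = SLOT_UNKNOWN };
		int a = toy_alloc(&p);
		int b = toy_alloc(&p);

		toy_free(&p, a);	/* a's index lands in the left cache slot */
		printf("a=%d b=%d reused=%d\n", a, b, toy_alloc(&p));
		return 0;
	}

Freeing parks the index in left or right, and the next couple of
allocations consume them before falling back to the scan loop; that
is where the smoothing near pool exhaustion comes from.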