Re: [PATCH v7 17/25] block/rnbd: client: main functionality

> > +/**
> > + * rnbd_get_cpu_qlist() - finds a list with HW queues to be rerun
> > + * @sess:    Session to find a queue for
> > + * @cpu:     Cpu to start the search from
> > + *
> > + * Description:
> > + *     Each CPU has a list of HW queues which need to be rerun.  If a list
> > + *     is not empty, it is marked with a bit.  This function finds the first
> > + *     set bit in the bitmap and returns the corresponding CPU list.
> > + */
> > +static struct rnbd_cpu_qlist *
> > +rnbd_get_cpu_qlist(struct rnbd_clt_session *sess, int cpu)
> > +{
> > +     int bit;
> > +
> > +     /* First half */
> > +     bit = find_next_bit(sess->cpu_queues_bm, nr_cpu_ids, cpu);
>
> Is it protected by any lock?
We hold requeue_lock when setting/clearing a bit, and disable preemption
via get_cpu_ptr() around find_next_bit().
Even if it misses the latest bit, the worst case is just an extra rerun
of the queue.
>
> > +     if (bit < nr_cpu_ids) {
> > +             return per_cpu_ptr(sess->cpu_queues, bit);
> > +     } else if (cpu != 0) {
> > +             /* Second half */
> > +             bit = find_next_bit(sess->cpu_queues_bm, cpu, 0);
> > +             if (bit < cpu)
> > +                     return per_cpu_ptr(sess->cpu_queues, bit);
> > +     }
> > +
> > +     return NULL;
> > +}
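As a side note for readers of the thread, the two-half wraparound search above can be modeled in plain userspace C. This is only a sketch: find_next_bit() below is a minimal single-word stand-in for the kernel helper of the same name, and next_marked_cpu() is an invented name for the lookup, not rnbd code.

```c
/* Userspace model of the two-half search in rnbd_get_cpu_qlist().
 * "nr" plays the role of nr_cpu_ids; the bitmap is one word for
 * simplicity. */
static int find_next_bit(unsigned long bm, int size, int start)
{
	for (int i = start; i < size; i++)
		if (bm & (1UL << i))
			return i;
	return size;	/* kernel convention: returning "size" means not found */
}

/* First set bit at or after "cpu", wrapping around, or -1 if none. */
static int next_marked_cpu(unsigned long bm, int nr, int cpu)
{
	int bit = find_next_bit(bm, nr, cpu);	/* first half: [cpu, nr) */

	if (bit < nr)
		return bit;
	if (cpu != 0) {
		bit = find_next_bit(bm, cpu, 0);	/* second half: [0, cpu) */
		if (bit < cpu)
			return bit;
	}
	return -1;
}
```

With bits 0 and 2 set, a search starting at CPU 1 finds CPU 2, and a search starting at CPU 3 wraps around to CPU 0, which is the round-robin fairness property the comment describes.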
> > +
> > +static inline int nxt_cpu(int cpu)
> > +{
> > +     return (cpu + 1) % nr_cpu_ids;
> > +}
> > +
> > +/**
> > + * rnbd_rerun_if_needed() - rerun next queue marked as stopped
> > + * @sess:    Session to rerun a queue on
> > + *
> > + * Description:
> > + *     Each CPU has its own list of HW queues, which should be rerun.
> > + *     Function finds such list with HW queues, takes a list lock, picks up
> > + *     the first HW queue out of the list and requeues it.
> > + *
> > + * Return:
> > + *     True if the queue was requeued, false otherwise.
> > + *
> > + * Context:
> > + *     Does not matter.
> > + */
> > +static inline bool rnbd_rerun_if_needed(struct rnbd_clt_session *sess)
>
> No inline function in C files.
This is the first time I've seen such a request; there are many inline
functions in C files across the tree:
grep inline drivers/infiniband/core/*.c
drivers/infiniband/core/addr.c:static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
drivers/infiniband/core/cma.c:static inline u8 cma_get_ip_ver(const struct cma_hdr *hdr)
drivers/infiniband/core/cma.c:static inline void cma_set_ip_ver(struct cma_hdr *hdr, u8 ip_ver)
drivers/infiniband/core/cma.c:static inline void release_mc(struct kref *kref)
drivers/infiniband/core/cma.c:static inline struct sockaddr *cma_src_addr(struct rdma_id_private *id_priv)
drivers/infiniband/core/cma.c:static inline struct sockaddr *cma_dst_addr(struct rdma_id_private *id_priv)

>
> > +{
> > +     struct rnbd_queue *q = NULL;
> > +     struct rnbd_cpu_qlist *cpu_q;
> > +     unsigned long flags;
> > +     int *cpup;
> > +
> > +     /*
> > +      * To keep fairness and not to let other queues starve we always
> > +      * try to wake up someone else in round-robin manner.  That of course
> > +      * increases latency but queues always have a chance to be executed.
> > +      */
> > +     cpup = get_cpu_ptr(sess->cpu_rr);
> > +     for (cpu_q = rnbd_get_cpu_qlist(sess, nxt_cpu(*cpup)); cpu_q;
> > +          cpu_q = rnbd_get_cpu_qlist(sess, nxt_cpu(cpu_q->cpu))) {
> > +             if (!spin_trylock_irqsave(&cpu_q->requeue_lock, flags))
> > +                     continue;
> > +             if (likely(test_bit(cpu_q->cpu, sess->cpu_queues_bm))) {
>
> Success oriented approach please.
Sorry, I don't quite get your point.
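For what it's worth, "success oriented approach" in kernel review usually means handling the failure or unlikely case first with an early return (or continue), so the success path reads straight down without nesting. A minimal, hypothetical userspace sketch of the style; parse_positive() is invented for illustration and is not rnbd code:

```c
#include <stddef.h>

/* Success-oriented control flow: bail out on failures first so the
 * success path stays at the top indentation level. */
static int parse_positive(const int *val)
{
	if (!val)		/* failure case handled first ... */
		return -1;
	if (*val <= 0)		/* ... as is the unlikely case ... */
		return -1;
	return *val;		/* ... so success is not nested in an "if" */
}
```

Applied to the hunk above, that would mean testing for the bit being clear, unlocking and continuing early, instead of wrapping the common path in `if (likely(...))`.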

Thanks Leon for review.


