Re: [PATCH 13/15] cifs: convert cifsFileInfo->count to non-atomic counter

On Thu, 7 Oct 2010 10:37:54 -0500
Steve French <smfrench@xxxxxxxxx> wrote:

> On Thu, Oct 7, 2010 at 7:43 AM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > On Thu, 07 Oct 2010 17:42:57 +0530
> > Suresh Jayaraman <sjayaraman@xxxxxxx> wrote:
> >
> >> On 10/07/2010 04:48 PM, Jeff Layton wrote:
> >> > On Thu, 07 Oct 2010 14:18:50 +0530
> >> > Suresh Jayaraman <sjayaraman@xxxxxxx> wrote:
> >> >
> >> >> On 10/07/2010 01:24 AM, Jeff Layton wrote:
> >> >>> The count for cifsFileInfo is currently an atomic, but that just adds
> >> >>> complexity for little value. We generally need to hold cifs_file_list_lock
> >> >>> to traverse the lists anyway so we might as well make this counter
> >> >>> non-atomic and simply use the cifs_file_list_lock to protect it.
> >> >>>
> >> >>> Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> >> >>> ---
> >> >>>  fs/cifs/cifsglob.h |    9 ++++++---
> >> >>>  fs/cifs/file.c     |    8 +++++---
> >> >>>  2 files changed, 11 insertions(+), 6 deletions(-)
> >> >>>
> >> >>> diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
> >> >>> index 531a768..f3c4e00 100644
> >> >>> --- a/fs/cifs/cifsglob.h
> >> >>> +++ b/fs/cifs/cifsglob.h
> >> >>> @@ -393,16 +393,19 @@ struct cifsFileInfo {
> >> >>>   struct list_head llist; /* list of byte range locks we have. */
> >> >>>   bool invalidHandle:1;   /* file closed via session abend */
> >> >>>   bool oplock_break_cancelled:1;
> >> >>> - atomic_t count;         /* reference count */
> >> >>> + int count;              /* refcount -- protected by cifs_file_list_lock */
> >> >>>   struct mutex fh_mutex; /* prevents reopen race after dead ses*/
> >> >>>   struct cifs_search_info srch_inf;
> >> >>>   struct work_struct oplock_break; /* work for oplock breaks */
> >> >>>  };
> >> >>>
> >> >>> -/* Take a reference on the file private data */
> >> >>> +/*
> >> >>> + * Take a reference on the file private data. Must be called with
> >> >>> + * cifs_file_list_lock held for read or write.
> >> >>> + */
> >> >>>  static inline void cifsFileInfo_get(struct cifsFileInfo *cifs_file)
> >> >>>  {
> >> >>> - atomic_inc(&cifs_file->count);
> >> >>> + ++cifs_file->count;
> >> >>
> >> >> Since we now use cifs_file_list_lock to protect cifs_file->count too,
> >> >> shouldn't all the callers of cifsFileInfo_get() acquire a write
> >> >> lock instead of a read lock?
> >> >>
> >> >
> >> > I puzzled over that too, but it's not really necessary. The lock is
> >> > really there to make sure that the list traversals are safe from
> >> > removals. We do need the write lock when inserting or removing the
> >> > cifsFileInfo into the lists, but for incrementing a non-zero refcount a
> >> > read lock is sufficient. That will prevent cifsFileInfo_put from
> >> > racing in, decrementing the refcount and modifying the list since it
> >> > needs a write lock.
> >>
> >> Fine, but isn't it possible to get the refcount wrong due to
> >> concurrent calls to cifsFileInfo_get() from multiple callers?
> >>
> >
> > Argh...good point. Maybe we do need to make this a regular spinlock
> > after all. The times you can use a read lock safely are pretty limited
> > here.
> >
> >> > TBH, I think it would be far simpler (and maybe even more efficient) to
> >> > use a regular spinlock here, but I figured this would be easier to get
> >> > past review.
> >> >
> >>
> >> I'm a little worried about the overhead, as I'm not sure whether all the
> >> paths are low contention paths...
> >>
> >
> > I think they are generally low contention paths. Also, the lists are
> > generally very short, so the time the lock is held is too. Unless most
> > of your accesses (like 90%) are read-locked, it's actually worse
> > to use a rwlock than a regular spinlock.
> >
> > Even if a regular spinlock adds some overhead, it's really unlikely to
> > make any real difference in performance. CIFS is almost always bound by
> > network latency, not CPU time.
> >
> > Steve, I mostly left this as a rwlock as I figured you might have
> > issues moving this to a normal spinlock. Would you?
> 
> I doubt that a spinlock vs. a rwlock makes much difference,
> but a rwlock for list insertion/removal seems more intuitive,
> and overloading it to also protect a counter adds complexity.
> Seems more natural to leave it as an atomic counter.
> 

Unfortunately, it's not that simple. When the count goes to 0 on a put,
you need to drop the last reference and take the lock that protects the
lists atomically, so that you can safely remove the cifsFileInfo from
those lists. atomic_dec_and_lock() does that for a spinlock_t, but
there's no equivalent helper for a rwlock_t.

It gets worse, though -- since the atomic_t refcount can be bumped again
before you manage to take the lock, you need to check it twice: once in
atomic_dec_and_lock() itself, and again after you've acquired the lock,
to make sure it didn't change while you were spinning on it.

For instance:

+	if (!atomic_dec_and_lock(&cifs_file->count, &cifs_file_list_lock))
+		return;
+
+	/* count can be incremented inside the lock, so must check again */
+	if (atomic_read(&cifs_file->count) > 0) {
+		spin_unlock(&cifs_file_list_lock);
+		return;
+	}

If we don't want to use atomics, then we can go with a rwlock_t, but
then you need to ensure that any place that could increment the refcount
takes a write_lock. That's almost all of the code that runs under this
lock. There would only be one or two seldom-used functions able to get
by with a read_lock at that point, and rwlocks have more overhead than
simple spinlocks, so again that's a loss.
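
To illustrate, every place that walks a list and grabs a reference would
end up looking something like this (rough, untested sketch, modeled
loosely on find_writable_file() -- I'm reciting the list and field names
from memory, so don't trust them too far):

	struct cifsFileInfo *open_file;

	/*
	 * Bumping ->count is now a modification, so even a pure
	 * lookup has to take the lock exclusively.
	 */
	write_lock(&cifs_file_list_lock);
	list_for_each_entry(open_file, &cifs_inode->openFileList, flist) {
		if (!open_file->invalidHandle) {
			/* ++count must happen under the write lock */
			cifsFileInfo_get(open_file);
			write_unlock(&cifs_file_list_lock);
			return open_file;
		}
	}
	write_unlock(&cifs_file_list_lock);
	return NULL;

At that point the read_lock is essentially never taken, so the rwlock
buys us nothing.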

Based on the above, I would argue that the clearest way to do this is
to use a simple spinlock and a simple (non-atomic) counter that is
protected by that lock.
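
Something like this for the put side, just to make the idea concrete
(again a rough, untested sketch -- I'm glossing over the actual SMB
close and freeing, and guessing at the list member names):

	void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
	{
		spin_lock(&cifs_file_list_lock);
		if (--cifs_file->count > 0) {
			/* someone else still holds a reference */
			spin_unlock(&cifs_file_list_lock);
			return;
		}

		/*
		 * Last reference: unhook it from the lists while we
		 * still hold the lock, then tear it down.
		 */
		list_del(&cifs_file->flist);
		list_del(&cifs_file->tlist);
		spin_unlock(&cifs_file_list_lock);

		/* ...send the SMB close, drop refs, free, etc... */
	}

cifsFileInfo_get() can then stay the simple ++count above, with the
comment changed to say that the spinlock must be held.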

Thoughts?
-- 
Jeff Layton <jlayton@xxxxxxxxxx>