Re: [PATCH RFC 0/2] rcu skiplists v2

On Wed, Jun 26, 2013 at 10:29:36PM -0400, Mathieu Desnoyers wrote:
> * Chris Mason (clmason@xxxxxxxxxxxx) wrote:
> > Quoting Mathieu Desnoyers (2013-06-26 19:02:18)
> > > FWIW, my benchmark of RCU Judy array with ranges in similar conditions:
> > > 
> > > - Using 32-bit key length
> > > - I first populated 10M ranges of len = 1, sequentially.
> > > - Then, I ran a reader thread for 10s, performing random lookups
> > >   (all successful) on keys from 0 to 10M.
> > 
> > Similar, I had 64 bit keys and the lookups were totally random (not all
> > successful).  I doubt it matters too much for these numbers.
> 
> I'd have to try with 64-bit keys, since it matters for RCU Judy. It
> means a successful lookup will need to read twice as many cache lines as
> for 32-bit keys. For my range implementation (on top of Judy), every
> lookup ends up succeeding, because it either finds an "empty" range or a
> populated range, so having match or non-match does not matter much for
> range lookups.

Yeah, I only care about performance with 64-bit keys, sparse
keyspace population and random extent lengths. Random lookup
performance is more important than insert and delete, though I do
have cases where bulk sequential insert and removal are important,
too.

> > Also, my benchmarks were not just inserting keys but keys pointing to
> > things.  So a lookup walked the tree and found an object and then
> > returned the object.  radix can just return a key/value without
> > dereferencing the value, but that wasn't the case in my runs.
> 
> In the specific test I ran, I'm looking up the "range" object, which is
> the dereferenced "value" pointer in terms of Judy lookup. My Judy array
> implementation represents items as a linked list of structures matching
> a given key. This linked list is embedded within the structures,
> similarly to the linux/list.h API. Then, if the lookup succeeds, I take
> a mutex on the range, and check if it has been concurrently removed.

Does that mean that each "extent" that is indexed has a list head
embedded in it? That blows the size of the index out when all I
might want to store in the tree is a 64-bit value for a block
mapping...

FWIW, when a bunch of scalability work was done on xfs_repair years
ago, judy arrays were benchmarked for storing extent lists that
tracked free/used space. We ended up using a btree, because while it
was slower than the original bitmap code, it was actually faster
than the highly optimised judy array library and at the scale we
needed there was no memory usage advantage to using a judy array,
either...

So I'm really starting to wonder if it'd be simpler for me just to
resurrect the old RCU friendly btree code Peter Z wrote years ago
(http://programming.kicks-ass.net/kernel-patches/vma_lookup/) and
customise it for the couple of uses I have in XFS....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx