Re: [vdo-devel] [PATCH v2 00/39] Add the dm-vdo deduplication and compression device mapper target.

An offline discussion suggested maybe I should've gone into a little
more detail about how VDO uses its work queues.

VDO is work-intensive enough that we found long ago that a single
thread doing all the work couldn't keep up.

Our multithreaded design started many years ago and grew out of our
existing design for UDS (VDO's central deduplication index). Somewhat
akin to partitioning and sharding in databases, UDS scans the in-memory
part of its "database" of values in some number of threads (fixed at
startup), with the data and work divided up based on certain bits of
the hash value being looked up, and it performs its I/O and callbacks
from certain other threads. We aren't splitting work across multiple
machines as database systems sometimes do, but across multiple threads
and potentially multiple NUMA nodes.
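
As a rough illustration (a hypothetical sketch, not the actual UDS
code), routing a request to one of the index zones based on bits of the
record's hash might look something like this, with the zone count fixed
when the index starts:

/*
 * Hypothetical sketch, not the real UDS code: choose which index zone
 * (and thus which thread) handles a request, using a few bits of the
 * record's hash.  zone_count is fixed when the index is started.
 */
struct sample_request {
        unsigned char hash[16];         /* the value being looked up */
};

static unsigned int select_index_zone(const struct sample_request *req,
                                      unsigned int zone_count)
{
        /* Any well-mixed bits of the hash spread the work evenly. */
        unsigned int bits = req->hash[0] | (req->hash[1] << 8);

        return bits % zone_count;
}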

We try to optimize for keeping the busy case fast, even if it means
lightly loaded cases don't perform quite as well as they could be made
to. We try to reduce contention between threads by avoiding locks when
we can, preferring a fast queueing mechanism or loose synchronization
between threads. (We haven't kept to that strictly, but we've mostly
tried to.)

In VDO, at the first level, the work is split according to the
collection of data structures to be updated (e.g., recovery journal vs
disk block allocation vs block address mapping management).

For some data structures, we split further based on the values of
bit-strings relevant to the structure in question (block addresses,
hash values). Currently the work can be split N ways for many small
values of N, but changing N requires a restart. Processing a read or
write operation generally doesn't need to touch more than one "zone" in
any of these sets (or two, in a certain write case).
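
As a hypothetical sketch of what that N-way split looks like (not code
from this patch set), a logical block address might be mapped to a zone
like this, so only that zone's thread ever touches the corresponding
slice of the structure:

/*
 * Hypothetical sketch of the N-way split: a logical block address maps
 * to exactly one zone, and only that zone's thread touches the slice
 * of the data structure covering that address.  zone_count is fixed at
 * startup.
 */
static unsigned int select_logical_zone(unsigned long long logical_block,
                                        unsigned int zone_count)
{
        return (unsigned int)(logical_block % zone_count);
}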

Giving one thread exclusive access to the data structures means we can
do away with the locking. Of course, with so many different threads
owning data structures, we get a lot of queueing in exchange, but we
depend on a fast, nearly-lock-free MPSC queueing mechanism to keep that
reasonably efficient.
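
For anyone curious what that kind of queue looks like, here is a
generic sketch (userspace C11, not VDO's actual funnel queue) of an
intrusive multi-producer, single-consumer queue in this style:
producers pay one atomic exchange, and the lone consumer dequeues
without taking any lock.

/*
 * Generic sketch of a nearly-lock-free intrusive MPSC queue (the
 * well-known Vyukov style), not VDO's actual implementation.
 */
#include <stdatomic.h>
#include <stddef.h>

struct mpsc_node {
        _Atomic(struct mpsc_node *) next;
};

struct mpsc_queue {
        _Atomic(struct mpsc_node *) head;   /* where producers link in */
        struct mpsc_node *tail;             /* consumer's private cursor */
        struct mpsc_node stub;              /* dummy node */
};

static void mpsc_init(struct mpsc_queue *q)
{
        atomic_store(&q->stub.next, NULL);
        atomic_store(&q->head, &q->stub);
        q->tail = &q->stub;
}

/* Called from any producer thread: one atomic exchange, no lock. */
static void mpsc_enqueue(struct mpsc_queue *q, struct mpsc_node *node)
{
        struct mpsc_node *prev;

        atomic_store(&node->next, NULL);
        prev = atomic_exchange(&q->head, node);
        atomic_store(&prev->next, node);
}

/* Called only from the single consumer thread; returns NULL if empty. */
static struct mpsc_node *mpsc_dequeue(struct mpsc_queue *q)
{
        struct mpsc_node *tail = q->tail;
        struct mpsc_node *next = atomic_load(&tail->next);

        if (tail == &q->stub) {
                if (next == NULL)
                        return NULL;            /* queue is empty */
                q->tail = next;                 /* skip past the stub */
                tail = next;
                next = atomic_load(&tail->next);
        }
        if (next != NULL) {
                q->tail = next;
                return tail;
        }
        /* A producer may still be linking its node in. */
        if (tail != atomic_load(&q->head))
                return NULL;                    /* momentarily empty */
        mpsc_enqueue(q, &q->stub);              /* re-insert the stub */
        next = atomic_load(&tail->next);
        if (next != NULL) {
                q->tail = next;
                return tail;
        }
        return NULL;
}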

There's a little more to it in places where we need to preserve the
order in which multiple VIOs are processed, in a couple of different
sections of the write path. So we do make some higher-level use of the
fact that we're adding work to queues with certain behavior, rather
than just turning loose a bunch of threads to contend for a
just-released mutex.

Some other bits of work, like computing the hash value, don't update
any of these data structures. Not only would they be amenable to kernel
workqueue conversion with concurrency greater than 1, but such a
conversion might open up some interesting options, like hashing on the
CPU or NUMA node where the data block is likely to still reside in
cache. For now, though, using one work management mechanism has been
easier than two.
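
To give a sense of what that could look like (a hypothetical sketch
using the stock workqueue API, not code from this patch set), hashing
could become an ordinary work item queued from the submitting CPU:

/*
 * Hypothetical sketch: hashing as a plain kernel work item.  A bound
 * workqueue with max_active > 1 allows concurrency, and queue_work()
 * prefers the submitting CPU, where the data block is likely still
 * warm in cache.  compute_block_name() and the done callback are
 * placeholders, not real interfaces.
 */
#include <linux/workqueue.h>
#include <linux/errno.h>

struct hash_work {
        struct work_struct work;
        void *data;                           /* the data block to hash */
        void (*done)(struct hash_work *hw);   /* hypothetical completion hook */
};

static struct workqueue_struct *hash_wq;

static void hash_work_fn(struct work_struct *work)
{
        struct hash_work *hw = container_of(work, struct hash_work, work);

        /* compute_block_name(hw->data, ...); -- placeholder for the hash */
        hw->done(hw);
}

static int hash_setup(void)
{
        /* A max_active of 0 means "use the default", which is above 1. */
        hash_wq = alloc_workqueue("example_hash", 0, 0);
        return hash_wq ? 0 : -ENOMEM;
}

static void submit_hash(struct hash_work *hw)
{
        INIT_WORK(&hw->work, hash_work_fn);
        queue_work(hash_wq, &hw->work);       /* runs on the local CPU */
}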

The experiment I referred to in my earlier email, using kernel
workqueues in VDO, kept the same model: data structures are protected
by making them exclusive to specific threads (or in this case,
concurrency-1 workqueues) that serialize all access, with message
passing between them. It didn't change everything over to using mutexes
instead.
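
In stock workqueue terms that model amounts to something like the
following hypothetical sketch: one ordered (concurrency-1) workqueue
per zone, with each operation on a zone delivered as a work item
instead of the zone's data being guarded by a mutex.

/*
 * Hypothetical sketch of the experiment's model: each zone's structures
 * are only ever touched from its own ordered (max_active == 1)
 * workqueue, so no mutex is needed; other threads "send a message" by
 * queueing a work item.
 */
#include <linux/workqueue.h>
#include <linux/errno.h>

struct zone {
        struct workqueue_struct *wq;   /* serializes all access to ... */
        /* ... this zone's data structures ... */
};

struct zone_message {
        struct work_struct work;
        struct zone *zone;
        /* parameters of the operation to perform in the zone */
};

static int zone_init(struct zone *zone, unsigned int index)
{
        /* An ordered workqueue executes at most one item at a time. */
        zone->wq = alloc_ordered_workqueue("example_zone%u", 0, index);
        return zone->wq ? 0 : -ENOMEM;
}

static void zone_message_fn(struct work_struct *work)
{
        struct zone_message *msg = container_of(work, struct zone_message, work);

        /* Safe to update msg->zone's structures: nothing else runs here. */
}

static void send_to_zone(struct zone *zone, struct zone_message *msg)
{
        msg->zone = zone;
        INIT_WORK(&msg->work, zone_message_fn);
        queue_work(zone->wq, &msg->work);
}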

I hope some of this helps. I'm happy to answer further questions.

Ken



