Re: dm-thin: Several Questions on dm-thin performance.


 



On 11/22/19 8:55 PM, Joe Thornber wrote:
On Fri, Nov 22, 2019 at 11:14:15AM +0800, JeffleXu wrote:

The first question is: what's the purpose of the data cell? In thin_bio_map(),
a normal bio will be packed as both a virtual cell and a data cell. I can understand
that the virtual cell is used to prevent a discard bio and a non-discard bio
targeting the same block from being processed at the same time. I found it
was added in commit e8088073c9610af017fd47fddd104a2c3afb32e8 ("dm thin:
fix race between simultaneous io and discards to same block"), but I'm still
confused about the use of the data cell.

As you are aware there are two address spaces for the locks.  The 'virtual' one
refers to cells in the logical address space of the thin devices, and the 'data' one
refers to the underlying data device.  There are certain conditions where we
unfortunately need to hold both of these (eg, to prevent a data block being reprovisioned
before an io to it has completed).

The second question is the impact of the virtual cell and data cell on IO
performance. If $data_block_size is large, for example 1G, then in a multithread fio
test most bios will be buffered in the cell->bios list and then processed by the
worker thread asynchronously, even when there's no discard bio. Thus IO that was
originally parallel is now processed serially by the worker thread. As the
number of fio test threads increases, the single worker thread can easily hit
100% CPU and become the performance bottleneck, since the dm-thin
workqueue is an ordered unbound workqueue.

Yep, this is a big issue.  Take a look at dm-bio-prison-v2.h, this is the
new interface that we need to move dm-thin across to use (dm-cache already uses it).
It allows concurrent holders of a cell (ie, read locks), so we'll be able to remap
much more io without handing it off to a worker thread.  Once this is done I want
to add an extra field to cells that will cache the mapping, this way if you acquire a
cell that is already held then you can avoid the expensive btree lookup.  Together
these changes should make a huge difference to the performance.

If you've got some spare coding cycles I'd love some help with this ;)


Hi Joe,

I would be interested in helping you with this task. I can't make any
promises, but I believe I could probably spare some time to work on it.

If you think you could use the extra help, let me know.

Nikos

- Joe

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel





