Re: [PATCH v4 00/11] Performance fixes for 9p filesystem

On Saturday, February 18, 2023 1:33:12 AM CET Eric Van Hensbergen wrote:
> This is the fourth version of a patch series which adds a number
> of features to improve read/write performance in the 9p filesystem.
> Mostly it focuses on fixing caching to help utilize the recently
> increased MSIZE limits and also fixes some problematic behavior
> within the writeback code.
> 
> All together, these show roughly 10x speed increases on simple
> file transfers over no caching for readahead mode.  Future patch
> sets will improve cache consistency and directory caching, which
> should benefit loose mode.
> 
> This iteration of the patch incorporates an important fix for
> writeback which uses a stronger mechanism to flush writeback on
> close of files and addresses observed bugs in previous versions of
> the patch for writeback, mmap, and loose cache modes.
> 
> These patches are also available on github:
> https://github.com/v9fs/linux/tree/ericvh/for-next
> and on kernel.org:
> https://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
> 
> Tested against qemu, cpu, and diod with fsx, dbench, and postmark
> in every caching mode.
> 
> I'm gonna definitely submit the first couple patches as they are
> fairly harmless - but would like to submit the whole series to the
> upcoming merge window.  Would appreciate reviews.

I tested this version thoroughly today (msize=512k in all tests). Good news 
first: the problems seen with v3 are gone. Great! But I'm still trying to 
make sense of the performance numbers I get with these patches.

When doing some compilations over 9p, the performance of mmap, writeback and 
readahead is basically the same, with only loose being 6x faster than the 
other cache modes. Is that the expected result? No errors at least. Good!

Then I tested simple linear file I/O. First, linear writing of a 12GB file
(time dd if=/dev/zero of=test.data bs=1G count=12):

writeback    3m10s [this series - v4]
readahead    0m11s [this series - v4]
mmap         0m11s [this series - v4]
mmap         0m11s [master]
loose        2m50s [this series - v4]
loose        2m19s [master]

That's a bit surprising. Why are loose and writeback slower?
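If it helps narrowing that down, I can repeat the write test with the flush 
time included explicitly, e.g.:

  time dd if=/dev/zero of=test.data bs=1G count=12 conv=fsync

to see whether the extra time in writeback/loose is spent submitting the 
writes or flushing the dirty pages at the end.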

Next, linear reading of a 12GB file
(time cat test.data > /dev/null):

writeback    0m24s [this series - v4]
readahead    0m25s [this series - v4]
mmap         0m25s [this series - v4]
mmap         0m9s  [master]
loose        0m24s [this series - v4]
loose        0m24s [master]

The mmap degradation sticks out here, and there's no improvement with the other modes?
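To rule out cat's read size as a factor, I could also repeat the read with dd 
and a larger block size, e.g.:

  time dd if=test.data of=/dev/null bs=1M

but I'd expect the result to be dominated by readahead either way.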

I always performed a guest reboot between runs, BTW.
