Re: dm-thin vs lvm performance

Hi Jagan,

On Wed, Jan 18, 2012 at 11:30:54AM -0800, Jagan Reddy wrote:
> Joe,
> Thanks for looking into the issue, running the tests, and suggesting the "direct" flag. I do see a difference with the "direct" flag using dd. However, the difference is significant when using bs=64M compared to bs=4k.
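
For reference, that comparison can be scripted roughly as below; the
device path and the amount written per pass are placeholders rather
than your actual setup:

  # Sketch of the dd comparison described above.
  dev = "/dev/mapper/thin"                       # placeholder device path
  total = 1024 * 1024 * 1024                     # write 1 GiB per pass
  { "4k" => 4096, "64M" => 64 * 1024 * 1024 }.each do |bs, bytes|
    # one O_DIRECT write pass at this block size
    system("dd if=/dev/zero of=#{dev} bs=#{bs} count=#{total / bytes} oflag=direct")
  end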

I've spent a couple of days tinkering with aio-stress and thinp on
ramdisks.  More tests can be found here:

https://github.com/jthornber/thinp-test-suite/blob/master/ramdisk_tests.rb

It appears that wiping the device (i.e., to ensure total allocation) is
causing the issue, and, what's more, this is a more general problem than
just thinp.

For instance, see this test:

  def test_linear_aio_stress
    linear_table = Table.new(Linear.new(@volume_size, @data_dev, 0))
    @dm.with_dev(linear_table) do |linear_dev|
      aio_stress(linear_dev)
      wipe_device(linear_dev)   # cause slow down
      aio_stress(linear_dev)
    end
  end

For me, the first run of aio_stress manages a throughput of ~9 G/s.
After the wipe, which is just a simple dd across the device,
performance drops to ~5.5 G/s.  Throughput on the device under the
linear target also drops.  Permanently.
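
For reference, the wipe amounts to something like the following; this
is a sketch rather than the suite's actual wipe_device implementation,
and the block size is arbitrary:

  # Hypothetical stand-in for wipe_device: one sequential dd pass
  # across the whole device so that every block gets written.
  def wipe_whole_device(dev_path)
    system("dd if=/dev/zero of=#{dev_path} bs=64M")
  end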

I don't know whether this is specific to aio or a more general slowdown.

Once we've got to the bottom of this, there are a couple of
experimental patches I've written that we can try in order to boost
read performance further.

- Joe

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


