Re: dm-thin vs lvm performance

Joe,
Thanks for spending time with aio-stress (much appreciated) and for the information. A week back I ran the tests on the flash device and saw lower numbers; however, all my runs since last week have been with the ramdisk only. I run the following command to make sure the ramdisk has backing store:

[root@lab-dm-cn4 ~]# dd if=/dev/zero of=/dev/ram bs=512 oflag=direct count=`blockdev --getsize /dev/ram`
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB) copied, 19.2371 s, 223 MB/s
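As a sanity check (just how I verify it on my box, not something from the thread), MemFree in /proc/meminfo should drop by roughly the ramdisk size after the wipe, since the brd driver only allocates ramdisk pages on first write:

[root@lab-dm-cn4 ~]# grep MemFree /proc/meminfo    # run before and after the dd; expect ~4 GB less afterwards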

One interesting thing I notice is that the raw ramdisk shows 9000+ MB/s throughput in my tests versus 5440 MB/s in yours. I remember you mentioning that you are running the tests in a VM with 4 GB of memory, while I run them on a standalone server with 16 quad-core processors and 12 GB of RAM (out of that 12 GB, I carve out a 4 GB ramdisk). Attached are the CPU and memory information. Could that be causing the difference?

Thanks,
Jagan.


From: Joe Thornber <thornber@xxxxxxxxxx>
To: Jagan Reddy <gjmsreddy@xxxxxxxxx>; device-mapper development <dm-devel@xxxxxxxxxx>
Sent: Friday, January 20, 2012 10:07 AM
Subject: Re: [dm-devel] dm-thin vs lvm performance

Jagan,

On Fri, Jan 20, 2012 at 05:33:34PM +0000, Joe Thornber wrote:
> Back to testing ...

Here are my aio-stress results for various devices running on top of a
ramdisk:

| Device stack           | MB/s |
+------------------------+------+
| raw ramdisk            | 5440 |
| linear                 | 5431 |
| 2 stacked linear       | 5304 |
| pool device            | 5351 |
| linear stacked on pool | 5243 |
| thin                   | 5375 |
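
If you want to reproduce the stacks, here is a sketch of one way to assemble them with dmsetup. The device names and sector counts below are illustrative, not the exact ones I used; the thin-pool and thin table formats are the documented ones from Documentation/device-mapper/thin-provisioning.txt:

# carve metadata and data areas out of the 4G ramdisk (sizes in 512-byte sectors)
dmsetup create meta --table "0 65536 linear /dev/ram 0"
dmsetup create data --table "0 8323072 linear /dev/ram 65536"

# pool table: thin-pool <metadata dev> <data dev> <block size (sectors)> <low water mark (blocks)>
dmsetup create pool --table "0 8323072 thin-pool /dev/mapper/meta /dev/mapper/data 128 1024"

# create a thin volume with device id 0 inside the pool, then activate it
dmsetup message /dev/mapper/pool 0 "create_thin 0"
dmsetup create thin --table "0 8323072 thin /dev/mapper/pool 0"

# and, e.g., linear stacked on the pool device
dmsetup create linear-on-pool --table "0 8323072 linear /dev/mapper/pool 0"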

I also tried the thin test after disabling the block locking in
dm-block-manager.c, and using a 128 entry cache local to the thin
device.  Neither made any difference to the 'thin' result.

Things for you to check:

i)  Are you benchmarking with a ramdisk, or with your flash device?  In
    both cases I think you need to make sure the device has allocated
    backing store before you run any tests.  Remember that discards may
    well remove this backing.

ii) You claim that I/O is getting deferred.  This will happen when you
    do the initial wipe of the thin device to force provisioning, but
    for the aio-stress tests nothing should be deferred (I instrumented
    the code to confirm this).  See the sketch below for what I mean by
    the wipe.
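
To be explicit, by "wipe" I mean writing the whole device once before benchmarking, e.g. (a sketch; the device name assumes the stack above):

# writing every block once forces the pool to provision them all up front
dd if=/dev/zero of=/dev/mapper/thin bs=1M oflag=direct

After that every block is provisioned, so the benchmark itself takes the fast remap path with nothing deferred.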


- Joe


Attachment: cpu_info
Description: Binary data

Attachment: meminfo
Description: Binary data

