Joe,
Thanks for spending time with aio_stress (much appreciated) and for the information. A week back I ran the tests on a flash device and saw lower numbers; however, all my runs since last week have been with a ramdisk only. I run the following command to make sure the ramdisk has backing store.
[root@lab-dm-cn4 ~]# dd if=/dev/zero of=/dev/ram bs=512 oflag=direct count=`blockdev --getsize /dev/ram`
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB) copied, 19.2371 s, 223 MB/s
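In case it helps to reproduce, my aio-stress runs against the ramdisk look roughly like the line below (the flag values here are illustrative rather than the exact ones I used):

  # O_DIRECT, read stage only (-o 1), 64KB records, 64 ios in flight,
  # 1024MB worked per file.
  aio-stress -O -o 1 -r 64 -d 64 -s 1024 /dev/ram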
Thanks,
Jagan.
From: Joe Thornber <thornber@xxxxxxxxxx>
To: Jagan Reddy <gjmsreddy@xxxxxxxxx>; device-mapper development <dm-devel@xxxxxxxxxx>
Sent: Friday, January 20, 2012 10:07 AM
Subject: Re: [dm-devel] dm-thin vs lvm performance
Jagan,
On Fri, Jan 20, 2012 at 05:33:34PM +0000, Joe Thornber wrote:
> Back to testing ...
Here are my aio-stress results for various devices running on top of a
ramdisk:
| Device stack           | MB/s |
+------------------------+------+
| raw ramdisk            | 5440 |
| linear                 | 5431 |
| 2 stacked linear       | 5304 |
| pool device            | 5351 |
| linear stacked on pool | 5243 |
| thin                   | 5375 |
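For reference, the 'pool device' and 'thin' rows were set up with dmsetup roughly as follows (a minimal sketch; the sizes and the /dev/ram0 and /dev/ram1 backing device names are placeholders rather than the exact ones I used):

  # Pool over 0..20971520 sectors: metadata dev, data dev,
  # 64k (128 sector) data blocks, low water mark of 32768 blocks.
  dmsetup create pool \
      --table "0 20971520 thin-pool /dev/ram0 /dev/ram1 128 32768"

  # Create a thin volume with dev id 0 inside the pool, then activate it.
  dmsetup message /dev/mapper/pool 0 "create_thin 0"
  dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"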
I also tried the thin test after disabling the block locking in
dm-block-manager.c, and after using a 128-entry cache local to the thin
device. Neither change made any difference to the 'thin' result.
Things for you to check:
i) Are you benchmarking with a ramdisk or with your flash device? In both
   cases I think you need to make sure the device has allocated backing
   store before you run any tests. Remember that discards may well
   remove this backing.
ii) You claim that I/O is getting deferred. This will happen when you do
    the initial wipe of the thin device to force provisioning (see the
    dd sketch below), but for the aio-stress tests themselves nothing
    should be deferred (I instrumented the code to confirm this).
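As a minimal sketch of such a wipe (the device name /dev/mapper/thin is a placeholder for your thin volume):

  # Write the whole device once so every block is provisioned up front;
  # subsequent benchmark io then avoids the deferred allocation path.
  dd if=/dev/zero of=/dev/mapper/thin bs=1M oflag=direct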
- Joe
Attachment: cpu_info (binary data)
Attachment: meminfo (binary data)