Re: md raid performance with 3.18-rc3

Hi Neil,

Any findings from the logs I shared earlier?

Thanks in advance for your reply. I'm having trouble booting the 3.12 kernel; I should sort that out soon and come back with results.

Manish

On 12/10/2014 01:29 PM, Manish Awasthi wrote:
Here is the perf report for the tests run on 3.6.11 and 3.18. Comparing the two, it appears that the raid code in the older version is simply busier than in the latest version. I will also monitor system activity via `perf top` now. I should be back with results on 3.12 by the weekend.
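Roughly the commands involved (a sketch only; the saved profile file names are my own, not the actual ones):

  # Live view of the hottest kernel symbols while the test runs:
  perf top -g

  # Side-by-side comparison of two saved profiles; the file names
  # are hypothetical -- whatever perf record wrote on each kernel:
  perf diff perf-3.6.11.data perf-3.18.data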

Manish

On 12/09/2014 01:56 PM, Manish Awasthi wrote:
This time with the attachment:

Manish
On 12/09/2014 01:54 PM, Manish Awasthi wrote:
resending:

The dirty_* settings are the same on both kernels:

vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
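
One quick way to capture these on each kernel for a side-by-side diff (a trivial sketch; the output file name is my own choice):

  # Dump every vm.dirty_* knob into a per-kernel file, then diff
  # the two files after booting each kernel:
  for f in /proc/sys/vm/dirty_*; do
      echo "$f = $(cat "$f")"
  done > dirty-settings-$(uname -r).txt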


I re-ran the tests with the same set of kernels, without enabling multithread support on 3.18, and measured a few things with perf.

perf-stat-<kernel>.txt: the test ran for some time while perf stat collected various counters.

Meanwhile, I'm also running the complete test under perf record. I'll share the results soon.
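For reference, roughly the kind of invocations used (a sketch; the system-wide scope and the 60-second window are assumptions on my part):

  # Counter summary for perf-stat-<kernel>.txt:
  perf stat -a -o perf-stat-$(uname -r).txt -- sleep 60

  # Whole-test profile with call graphs, for perf report later:
  perf record -a -g -o perf-$(uname -r).data -- sleep 60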

Manish

On 12/03/2014 11:51 AM, NeilBrown wrote:
On Wed, 26 Nov 2014 13:41:39 +0530 Manish Awasthi
<manish.awasthi@xxxxxxxxxxxxxxxxxx>  wrote:

> Whatever data I have on comparison is attached, I have consolidated this
> from log files to Excel. See if this helps.

raid_3_18_performance.xls shows read throughput to be consistently 20% down
on 3.18 compared to 3.6.11.

Writes are a few percent better for 4G/8G files, 20% better for 16G/32G files, and unchanged above that.
Given that you have 8G of RAM, that seems like it could be some change in
caching behaviour, and not necessarily a change in RAID behaviour.
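
If you want to factor the page cache out of the comparison, dropping caches before each run is one option (my suggestion, not something from the original runs):

  # Flush dirty pages, then drop page cache, dentries and inodes
  # so each run starts cold (needs root):
  sync
  echo 3 > /proc/sys/vm/drop_caches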

The CPU utilization roughly follows the throughput: 40% higher when write
throughput is 20% better.
Could you check whether the value of /proc/sys/vm/dirty_ratio is the same for both tests? That number has changed occasionally between kernel versions and could affect these tests.


The second file, 3SSDs-perf-2-Cores-3.18-rc1, has the "change" numbers negative where I expected positive, i.e. a negative number means an increase.

Writes consistently have higher CPU utilisation.
Reads consistently have much lower CPU utilization.

I don't know what that means ... it might not mean anything.

Could you please run the tests between the two kernels *without* RAID, i.e. directly on a single SSD? That will give us a baseline for the changes caused by other parts of the kernel (filesystem, block layer, MM, etc.). Then we can see how much change RAID5 is contributing.
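
A sketch of such a baseline run; the original benchmark tool isn't named here, so dd is only a stand-in, and the device names are hypothetical:

  # Baseline: same-size sequential write directly on one SSD,
  # bypassing md entirely (WARNING: overwrites /dev/sdb -- use a
  # scratch disk):
  dd if=/dev/zero of=/dev/sdb bs=1M count=16384 oflag=direct

  # The matching run against the RAID5 array for comparison:
  dd if=/dev/zero of=/dev/md0 bs=1M count=16384 oflag=direct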

The third file, 3SSDs-perf-4Core.xls, seems to show significantly reduced throughput across the board.
CPU utilization is less (better) for writes, but worse for reads. That is
the reverse of what the second file shows.

I might try running some tests across a set of kernel versions and see what I
can come up with.
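
If that narrows the regression to a window between two releases, bisecting would be the standard next step (a generic sketch, not what was actually run; the good/bad tags are assumptions):

  # Generic bisection between the known-good and suspect releases:
  git bisect start
  git bisect bad v3.18-rc1
  git bisect good v3.6
  # ...build and boot each suggested commit, run the I/O test, then
  # mark it with "git bisect good" or "git bisect bad" until done.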

NeilBrown







