Re: Poor write performance with RAID5 on ARM64

Thanks for the reply.
I have traced the iostat output under both kernel versions, as shown below. As you can see, disk I/O is not the bottleneck, since the disks' %util does not reach 100%, and a single CPU core stays at about 40% during the test.
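Roughly, the statistics were collected like this (the one-second interval and the explicit device list shown here are illustrative, not necessarily the exact command used):

    # extended per-device statistics, sampled once per second during the dd run
    iostat -x sdb sdc sdd sde sdf 1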
---------------------------------------------- for kernel 3.14.64 ----------------------------------------------
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    3.65    6.25    0.00   90.10

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00  1549.01    0.00  220.30     0.00 112033.91  1017.12    14.23   63.48    0.00   63.48   4.11  90.59
sdc               0.00  1549.01    0.00  222.77     0.00 113301.24  1017.19    10.95   48.76    0.00   48.76   4.02  89.60
sdd               0.00  1549.01    0.00  222.28     0.00 113047.77  1017.18    16.18   72.32    0.00   72.32   4.16  92.57
sdf               0.00  1549.01    0.00  216.34     0.00 110006.19  1016.99    14.61   65.01    0.00   65.01   4.28  92.57
sde               0.00  1549.01    0.00  222.28     0.00 113047.77  1017.18    12.37   55.21    0.00   55.21   3.99  88.61
--------------------------------------------------------------------------------------------------------------------
---------------------------------------------- for kernel 4.4.3 ----------------------------------------------
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.03    0.00    2.37    5.43    0.00   92.18

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb              30.00  1209.50    1.50   70.00  1056.00 85441.50  2419.51    19.62  331.41  114.67  336.06   9.68  69.20
sdc              30.00  1209.50    1.50   69.00  1056.00 84161.50  2417.52    18.02  303.06  120.00  307.04   9.28  65.40
sdd              30.50  1211.00    1.50   70.00  1088.00 85505.50  2422.20    20.88  351.13  142.67  355.60  10.13  72.40
sde              30.50  1210.50    1.50   69.50  1088.00 84866.25  2421.25    18.53  311.77  117.33  315.97   9.55  67.80
sdf               0.00  1224.50    0.00   72.00     0.00 86850.25  2412.51    21.38  353.64    0.00  353.64  10.75  77.40
--------------------------------------------------------------------------------------------------------------------
When the bitmap is disabled, the %util field can reach 100% and the write speed of a single disk can reach 120 MB/s under kernel 4.4.3. So I suspect the bitmap may be the culprit.
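For the bitmap-off comparison, the internal write-intent bitmap can also be removed from (and later restored on) an existing array instead of recreating it; a rough sketch, assuming the array is /dev/md5:

    # drop the internal write-intent bitmap from the running array
    mdadm --grow --bitmap=none /dev/md5
    # restore it after the test
    mdadm --grow --bitmap=internal /dev/md5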

                                                                                                                                 Best wishes!
 
 
------------------ Original ------------------
From: "Shaohua Li";
Date: Tuesday, April 19, 2016, 9:59 PM
To: "刘正元";
Cc: "linux-raid";
Subject: Re: Poor write performance with RAID5 on ARM64
 
On Mon, Apr 18, 2016 at 01:43:04PM +0800, 刘正元 wrote:
> Hi, everyone.  I recently upgraded the kernel from 3.14.x to 4.4.x on my ARM64
> server.  I created a RAID5 device with 8 disks on the server and ran a dd
> test like this: "dd if=/dev/zero of=/dev/md5 bs=64K count=400000".
> Before the upgrade it could reach 700 MB/s of writes, but only 500 MB/s afterwards.
> Then I disabled the bitmap ("mdadm create --bitmap=none"), and the speed
> can reach 800 MB/s with the 4.4 kernel. I had a quick look at drivers/md/bitmap.c
> and found no answer.  I suspect the x86 platform has the same issue. So, what is
> the main difference between 3.14.x and 4.4.x in the md driver, and where can I
> find the changelog or commits for the driver? Any answer would be appreciated.

Did you observe any changes in iostat? Could you post a blktrace from one of the
raid disks?

Thanks,
Shaohua