Re: [PATCH] md - 1 of 12 - Missing mddev_put in md resync code

Neil,

> > Yes in fact, that is the one I was seeing.
> > 
> 
> That one *should* be fixed in 2.4-current-bk, but probably not
> completely.  It is actually rather hard to get it just-right in 2.4.
> I substantially reworked the 2.5 code so that proper locking could be
> done so that this sort of problem could be avoided.  
> 2.4 will probably have to stay with "close-enough".

I'm fairly certain I already had the relevant patches applied when I saw
the panic, but there were one or two that I didn't already have. At any
rate, I seem to have been able to work around it.

> > 
> > I have an odd problem where the first rebuild performs as expected
> > (about 40MB/s), but any subsequent rebuild plods along at about 6-7MB/s.
> > There is no IO activity on the system other than the rebuild during my
> > tests.
> 
> Does upping /proc/sys/dev/raid/speed_limit_min affect the speed?
> 
> Does increasing the magic "32" in is_mddev_idle help

I tried increasing it to 128, and no, it didn't make any difference.
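
For reference, here is roughly how I read that idle check -- a small
user-space model based on my reading of the 2.4 source, not the actual
kernel code, and the names below are mine:

/*
 * Rough model of the 2.4 is_mddev_idle() heuristic.  For each member
 * disk it compares the cumulative per-disk block I/O counter against a
 * snapshot from the previous poll; if more than the magic 32 blocks of
 * activity are seen, the array is treated as busy and the resync
 * thread drops back toward speed_limit_min.
 */
#include <stdio.h>

#define IDLE_THRESHOLD 32           /* the magic "32" (I bumped it to 128) */

struct member_disk {
    unsigned long io_blocks;        /* cumulative blocks read + written    */
    unsigned long last_events;      /* snapshot from the previous poll     */
};

/* Returns 1 if no member disk saw significant I/O since the last poll. */
static int array_is_idle(struct member_disk *disks, int ndisks)
{
    int idle = 1;
    int i;

    for (i = 0; i < ndisks; i++) {
        unsigned long curr = disks[i].io_blocks;

        if (curr - disks[i].last_events > IDLE_THRESHOLD) {
            disks[i].last_events = curr;  /* as I recall, the snapshot only
                                             advances when activity is seen */
            idle = 0;
        }
    }
    return idle;
}

int main(void)
{
    /* Disk 1 saw 100 blocks of I/O since the last poll, so the array
     * is treated as busy even though disk 0 was quiet. */
    struct member_disk d[2] = { { 1000, 1000 }, { 5000, 4900 } };

    printf("idle = %d\n", array_is_idle(d, 2));
    return 0;
}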

However, this just got really interesting... I found something I hadn't
noticed before.

I've determined that the performance issue I'm seeing only happens when
I'm rebuilding from the second disk in the array to the first. That is,
if I remove disk0 and then re-insert it, the performance is lousy; if I
remove disk1 and then re-insert it, the performance is good.

I've played with both speed_limit_min and speed_limit_max, and it makes
no difference. My speed_limit_min was set to 15000, but I was still only
getting 7MB/s.

cat /proc/sys/dev/raid/speed_limit_max 
100000
cat /proc/sys/dev/raid/speed_limit_min 
15000
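
For what it's worth, here is how I understand those two knobs to
interact with the resync loop -- a very rough model of the 2.4
behaviour, not the real md_do_sync(), and the helper name is mine. As I
read it, the thread only deliberately backs off once it is already
above speed_limit_min, so sitting at ~7MB/s against a 15000 floor
suggests the throttle isn't what is slowing things down:

/*
 * Rough model of the resync throttle decision: back off only when the
 * current rate is above speed_limit_min AND either it exceeds
 * speed_limit_max or the array looks busy to the idle check.
 */
#include <stdio.h>

static int speed_limit_min = 15000;    /* KB/s, as on my box */
static int speed_limit_max = 100000;   /* KB/s */

/* Returns 1 if the resync thread should sleep before continuing. */
static int should_throttle(int curr_speed_kbs, int array_idle)
{
    if (curr_speed_kbs <= speed_limit_min)
        return 0;                      /* never back off below the floor */
    if (curr_speed_kbs > speed_limit_max)
        return 1;                      /* hard ceiling */
    return !array_idle;                /* yield to other I/O in between  */
}

int main(void)
{
    /* At 6769 KB/s I'm well under the 15000 floor, so the throttle
     * shouldn't even be kicking in -- the slowness must be coming from
     * somewhere else. */
    printf("6769 KB/s, busy array:  throttle = %d\n", should_throttle(6769, 0));
    printf("55372 KB/s, busy array: throttle = %d\n", should_throttle(55372, 0));
    return 0;
}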

Good speed results: (sdb1/2 were failed, removed, and inserted)

cat /proc/mdstat
Personalities : [raid1] 
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
      
md1 : active raid1 sdb2[2] sda2[0]
      35455360 blocks [2/1] [U_]
      [===========>.........]  recovery = 55.8% (19804352/35455360) finish=4.7min speed=55372K/sec
unused devices: <none>


Poor speed results: (sda1/2 were failed, removed and inserted)
cat /proc/mdstat
Personalities : [raid1] 
read_ahead 1024 sectors
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
      
md1 : active raid1 sda2[2] sdb2[1]
      35455360 blocks [2/1] [_U]
      [>....................]  recovery =  0.2% (88000/35455360) finish=86.9min speed=6769K/sec
unused devices: <none>

Any pointers as to where I should look now based on this? Possible SCSI
device driver issue?

Sean.


-- 

Sean C. Kormilo, STORM Software Architect, Nortel Networks
              email: skormilo@nortelnetworks.com
  

