Re: 4.11.2: reshape raid5 -> raid6 atop bcache deadlocks at start on md_attr_store / raid5_make_request

On 24 May 2017, NeilBrown said:
> Alternately, this might do it.

Bingo! Ticking away nicely:

[  147.538274] md/raid:md125: device sda3 operational as raid disk 0
[  147.574356] md/raid:md125: device sdd3 operational as raid disk 3
[  147.586482] md/raid:md125: device sdc3 operational as raid disk 2
[  147.598571] md/raid:md125: device sdb3 operational as raid disk 1
[  147.613949] md/raid:md125: raid level 6 active with 4 out of 5 devices, algorithm 18
[  147.776155] md: reshape of RAID array md125

md125 : active raid6 sda3[0] sdf3[5] sdd3[4] sdc3[2] sdb3[1]
      15391689216 blocks super 1.2 level 6, 512k chunk, algorithm 18 [5/4] [UUUU_]
      [>....................]  reshape =  0.0% (2669056/5130563072) finish=1363.5min speed=62678K/sec

(wow, this is a lot faster: only 1363 min at ~70MiB/s, versus the
--backup-file reshape, which for an array half the size took >2500 min
at 12MiB/s. I mean, yes, that one was at the slow end of the disk, but
they don't differ in speed *that* much.)
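The finish estimate in the /proc/mdstat output above is just the remaining 1K blocks divided by the current per-second rate. A quick sanity check with the figures from the status line (the variable names are mine, purely illustrative):

```python
# Sanity-check the reshape ETA using the numbers from /proc/mdstat above.
done_kb = 2669056        # 1K blocks already reshaped
total_kb = 5130563072    # total 1K blocks to reshape
speed_kb_s = 62678       # current reshape speed, K/sec

remaining_kb = total_kb - done_kb
eta_min = remaining_kb / speed_kb_s / 60
print(f"ETA: {eta_min:.1f} min")  # close to mdstat's finish=1363.5min
```

So the kernel's finish= figure is consistent with the reported speed; it will of course drift as the reshape moves toward the slower inner tracks.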

Machine still perfectly responsive, as if no reshape were happening at
all. (But then, I've got used to that. The last time I saw any md
resync-induced delays was long ago, when I had a sym53c75 in the loop
with an I/O rate of 10MiB/s. I think delays are unavoidable with
something that slow in there.)
--


