Re: Failed --grow. Recovery possible?

Neil gave me these four steps to follow:

What I suggest you do is:
1/ find the backup of the first 1152K
2/ re-create the array as the original 6-drive raid5
3/ Check if the backup needs to be restored and possibly restore it
4/ Don't use the new drives until you are really sure they will work.

The only one I'm already sure about is number 4: the new drives work. I
have tested them individually on the same controller and they work
fine.

1 - "You would need to look at the code in Grow.c to see where it is written"

The last time I looked at C code was over ten years ago, when I was 15.
I wish I had become a kernel hacker instead of a lowly web developer,
but this isn't my area of expertise. Grow.c is a huge file, and it's
quite daunting. I don't even know what I could hope to learn from this
data once I find it.

Step 2 is not quite so easy. I've re-created the original array (in the
order I'm 99% certain the original array used), and no attempt to mount
or check a file system on it works. Maybe there is something between
steps 1 and 2 where I have to restore the "critical section" to a
certain spot?

On Tue, Mar 23, 2010 at 5:07 PM, Michael Evans <mjevans1983@xxxxxxxxx> wrote:
> On Tue, Mar 23, 2010 at 12:53 PM, Stephan Stachurski <ses1984@xxxxxxxxx> wrote:
>> Sorry, it looks like I sent two replies directly to NeilBrown instead
>> of the mailing list. Here they are:
>>
>> First reply:
>>
>> Before I checked this email, I upgraded to the newest version of mdadm
>> on advice I got from #linux on freenode. mdadm no longer segfaults
>> when the array is assembled. Instead it picks up exactly where it left
>> off, trying to grow the array without actually progressing. With the
>> new version of mdadm, however, after a short while the drives on the
>> mv_sas controller are dropped and mdstat reports that a resync is
>> pending.
>>
>> This looks like an improvement to me. I am going to test the
>> controller and drives to see if I can find out what's going on.
>>
>> ----------------------
>>
>> Second reply:
>>
>> I hope I haven't screwed up my disks beyond saving. I was using this
>> earlier question on the mailing list as a reference
>> http://www.mail-archive.com/linux-raid@xxxxxxxxxxxxxxx/msg08907.html .
>> If I could re-assemble the original array, I would feel a lot more
>> comfortable proceeding. In the above referenced thread, Greg Nicholson
>> mentions that order matters when it comes to assembling arrays, so I
>> wrote a perl script (full of terrible hacks) that took the list of the
>> 6 original devices, iterated over the permutations of their order,
>> assembled the array with --assume-clean, and attempted to mount the
>> file system.
>>
>> This failed for all 720 permutations. It was probably a stupid idea, anyway...
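>>
>> For the record, the core of that script boiled down to something like
>> the bash sketch below. The device list and mount point are
>> placeholders, and since creating with --assume-clean rewrites the
>> superblocks, nobody should run anything like this against disks they
>> haven't imaged first:
>>
>>   #!/bin/bash
>>   DEVICES=(/dev/sdb /dev/sdg /dev/sdh /dev/sdd /dev/sda /dev/sdc)
>>   PERM=()
>>
>>   try_order() {
>>       mdadm --stop /dev/md0 2>/dev/null
>>       # --run answers mdadm's "appears to be part of an array" prompt
>>       mdadm -C /dev/md0 --run -e 0.90 -l 5 -n 6 -c 256 \
>>           --assume-clean "$@" || return
>>       if mount -o ro -t ext4 /dev/md0 /mnt/test 2>/dev/null; then
>>           echo "mountable order: $*"
>>           umount /mnt/test
>>           exit 0
>>       fi
>>   }
>>
>>   permute() {   # recurse over the devices not yet placed
>>       if (( $# == 0 )); then try_order "${PERM[@]}"; return; fi
>>       local d x
>>       for d in "$@"; do
>>           local rest=()
>>           for x in "$@"; do [[ $x == "$d" ]] || rest+=("$x"); done
>>           PERM+=("$d")
>>           permute "${rest[@]}"
>>           PERM=("${PERM[@]:0:${#PERM[@]}-1}")
>>       done
>>   }
>>
>>   permute "${DEVICES[@]}"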
>>
>> I also tried plugging a known-working disk into the mv_sas controller
>> that may have had an issue, and everything appeared to work OK.
>>
>> Now I'm really not sure what to do. I'm completely lost.
>>
>> Thanks again for your help.
>>
>> On Fri, Mar 19, 2010 at 12:02 AM, Neil Brown <neilb@xxxxxxx> wrote:
>>> On Thu, 18 Mar 2010 21:52:54 -0400
>>> Stephan Stachurski <ses1984@xxxxxxxxx> wrote:
>>>
>>>> I have had a RAID5 up for quite a while with 6 disks. I recently added
>>>> 4 and attempted to grow the array to span all 10 devices.
>>>>
>>>> For an hour after starting the grow command, the reported speed of
>>>> the operation stayed at 0K/s. I thought something must be wrong, and
>>>> that the best course of action would be to reboot and start over
>>>> from a clean boot. I'm not a Linux expert, so I thought that if I
>>>> rebooted, everything would try to exit gracefully.
>>>>
>>>> After one hour, the system still had not finished shutting down. I
>>>> then did Alt-SysRq RSEISUB, waiting over one minute between each
>>>> command. I've included the syslog of what happened up until the next
>>>> startup, but put it last because it's by far the longest.
>>>
>>> I think you will be able to get your data back.  It won't be trivial, but it
>>> should be possible.
>>>
>>> It looks like the driver for the mv_sas controller has issues.  When
>>> md/raid5 started writing data onto those drives to reshape the array,
>>> something went wrong and the writes didn't complete, so nothing else
>>> happened.
>>>
>>> I don't know why mdadm is getting a segmentation fault.  Possibly this is
>>> fixed in a newer version of mdadm.  However, it is possibly a good thing
>>> that it didn't manage to restart the array fully, as it would probably
>>> have just failed again and might have made more of a mess.
>>>
>>> To get your data back we need to understand exactly what happened.
>>> What should happen when you run "mdadm --grow ... " is that it sets
>>> up for a reshape but doesn't let it progress.
>>> Then it prints:
>>>
>>> mdadm: Need to backup 1152K of critical section..
>>>
>>> It then copies the first 1152K (in the case of 6->10 with a 256K chunk)
>>> from the start of the array to near the end of each of the 'spares'.
>>> Then it allows the reshape to proceed.
>>> Once the reshape has progressed past that 1152K it removes the
>>> copy that it made (erases some metadata for it) and prints
>>>
>>> mdadm: ... critical section passed.
>>>
>>>
>>> I presume that it didn't successfully pass the critical section, else
>>> the Reshape Position would be greater than 0.
>>>
>>> It is possible that the reshape didn't start at all and your data is exactly
>>> where you left it, but we cannot be sure without looking...
>>>
>>> What I suggest you do is:
>>> 1/ find the backup of the first 1152K
>>> 2/ re-create the array as the original 6-drive raid5
>>> 3/ Check if the backup needs to be restored and possibly restore it
>>> 4/ Don't use the new drives until you are really sure they will work.
>>>
>>> 1 is the hardest.  I have a vague plan of giving mdadm the ability to do
>>> this, but I haven't implemented it yet.  I could possibly do it some time
>>> next week if you can wait.
>>> You would need to look at the code in Grow.c to see where it is written. I
>>> think there is a block of metadata near the end of the device - just before
>>> the md metadata - which records what has been backed up where.  Once you find
>>> and decode that from one of the spares you can easily use 'dd' to extract the
>>> backup.
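>>>
>>> As a rough sketch, once that block tells you where the backup starts,
>>> the extraction would be something like the following (the device and
>>> the offset are placeholders until you decode the metadata):
>>>
>>>  OFFSET_K=...   # backup offset in KiB, taken from the metadata block
>>>  dd if=/dev/sdX of=/root/critical-backup.bin bs=1k skip=$OFFSET_K count=1152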
>>>
>>> 2 is quite easy:
>>>
>>>  mdadm -C /dev/md0 -e 0.90 -l 5 -n 6 -c 256 \
>>>  --assume-clean /dev/sdb /dev/sdg /dev/sdh /dev/sdd /dev/sda /dev/sdc
>>>
>>> Make sure you have the devices in the right order.  If you aren't sure, then
>>>  mdadm -E list..of..devices | grep this
>>> should give you an ascending series in columns 2 and 5.
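>>>
>>> For example, with your device names, something like
>>>
>>>  for d in /dev/sdb /dev/sdg /dev/sdh /dev/sdd /dev/sda /dev/sdc; do
>>>      mdadm -E "$d" | grep '^this'
>>>  done
>>>
>>> should print one "this" line per member, with the RaidDevice column
>>> counting 0 through 5 when the order is right.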
>>>
>>> 3 is simply a 'cmp' between /dev/md0 and the backup that you restored, or
>>> maybe just an 'fsck' of /dev/md0.
>>> If you decide to restore (be sure before you do), just dd the backup to the
>>> start of /dev/md0.
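>>>
>>> For example (assuming the backup was extracted to
>>> /root/critical-backup.bin as sketched above):
>>>
>>>  cmp -n $((1152*1024)) /root/critical-backup.bin /dev/md0  # first 1152K only
>>>  fsck.ext4 -n /dev/md0    # -n: read-only check, changes nothing
>>>  # only if you are certain the start of the array needs restoring:
>>>  # dd if=/root/critical-backup.bin of=/dev/md0 bs=1k count=1152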
>>>
>>> I don't know how you can make yourself really sure that the drives do
>>> work.  Lots of testing of the new devices by themselves in an array?
>>>
>>> Good luck.
>>>
>>> NeilBrown
>>>
>>>>
>>>> When I rebooted, the array seemed to be up, but mounting it resulted
>>>> in a bad FS type error, even when I tried to specify the type (ext4).
>>>> After stopping the inactive array and trying to reassemble it, mdadm
>>>> crashed with a segmentation fault.
>>>> Is it possible to recover the data? We have backups, but they're
>>>> spread out over 1500 DVDs.
>>>>
>>>> When I examine the drives, the output looks pretty much like this for
>>>> each drive (6 drives say active and 4 say clean, corresponding to the
>>>> 6 original and 4 added drives):
>>>> $ mdadm --examine /dev/sda
>>>> /dev/sda:
>>>>           Magic : a92b4efc
>>>>         Version : 00.91.00
>>>>            UUID : 56c16545:07db76d6:e368bf24:bd0fce41
>>>>   Creation Time : Tue Feb  2 09:58:58 2010
>>>>      Raid Level : raid5
>>>>   Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
>>>>      Array Size : 8790861312 (8383.62 GiB 9001.84 GB)
>>>>    Raid Devices : 10
>>>>   Total Devices : 10
>>>> Preferred Minor : 0
>>>>   Reshape pos'n : 0
>>>>   Delta Devices : 4 (6->10)
>>>>     Update Time : Thu Mar 18 23:33:40 2010
>>>>           State : active
>>>>  Active Devices : 10
>>>> Working Devices : 10
>>>>  Failed Devices : 0
>>>>   Spare Devices : 0
>>>>        Checksum : 79904299 - correct
>>>>          Events : 270611
>>>>          Layout : left-symmetric
>>>>      Chunk Size : 256K
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     4       8        0        4      active sync   /dev/sda
>>>>    0     0       8       16        0      active sync   /dev/sdb
>>>>    1     1       8       96        1      active sync   /dev/sdg
>>>>    2     2       8      112        2      active sync   /dev/sdh
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       8        0        4      active sync   /dev/sda
>>>>    5     5       8       32        5      active sync   /dev/sdc
>>>>    6     6       8      160        6      active sync   /dev/sdk
>>>>    7     7       8      144        7      active sync   /dev/sdj
>>>>    8     8       8      128        8      active sync   /dev/sdi
>>>>    9     9       8       80        9      active sync   /dev/sdf
>>>>
>>>> #syslog showing how mdadm detects the bad 10-drive array on boot
>>>> Mar 18 23:47:35 raidserver kernel: [    2.178393] md: bind<sdc>
>>>> Mar 18 23:47:35 raidserver kernel: [    2.193132] md: bind<sdd>
>>>> Mar 18 23:47:35 raidserver kernel: [    2.211906] md: bind<sdb>
>>>> Mar 18 23:47:35 raidserver kernel: [    2.220062] md: bind<sda>
>>>> Mar 18 23:47:35 raidserver kernel: [    2.230062] ohci1394: fw-host0:
>>>> OHCI-1394 1.1 (PCI): IRQ=[22]  MMIO=[fd6ff000-fd6ff7ff]  Max
>>>> Packet=[2048]  IR/IT contexts=[4/8]
>>>> Mar 18 23:47:35 raidserver kernel: [    3.551483] ieee1394: Host
>>>> added: ID:BUS[0-00:1023]  GUID[003635c7006cf049]
>>>> Mar 18 23:47:35 raidserver kernel: [    6.579349]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 0
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    6.579353]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 0
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    6.780038]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 1
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    6.780041]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 1
>>>> attach sas addr is 1
>>>> Mar 18 23:47:35 raidserver kernel: [    6.990054]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 2
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    6.990057]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 2
>>>> attach sas addr is 2
>>>> Mar 18 23:47:35 raidserver kernel: [    7.200052]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 3
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.200055]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 3
>>>> attach sas addr is 3
>>>> Mar 18 23:47:35 raidserver kernel: [    7.310035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 4
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.310038]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 4
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.420035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 5
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.420038]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 5
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.630052]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 6
>>>> attach dev info is 2000000
>>>> Mar 18 23:47:35 raidserver kernel: [    7.630055]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 6
>>>> attach sas addr is 6
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840053]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 7
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840056]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 7
>>>> attach sas addr is 7
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840062] scsi8 : mvsas
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840605]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 0 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840610]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 1 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840614]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 2 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840617]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 3 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840621]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 6 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840624]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 380:phy 7 byte
>>>> dmaded.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840871] mvsas 0000:03:00.0:
>>>> mvsas: driver version 0.8.2
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840885] mvsas 0000:03:00.0:
>>>> PCI INT A -> GSI 16 (level, low) -> IRQ 16
>>>> Mar 18 23:47:35 raidserver kernel: [    7.840891] mvsas 0000:03:00.0:
>>>> setting latency timer to 64
>>>> Mar 18 23:47:35 raidserver kernel: [    7.841965]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[0:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.843312] ata9.00: ATA-8: WDC
>>>> WD10EARS-00Y5B1, 80.00A80, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.843316] ata9.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.843480] mvsas 0000:03:00.0:
>>>> mvsas: PCI-E x4, Bandwidth Usage: 2.5 Gbps
>>>> Mar 18 23:47:35 raidserver kernel: [    7.845117] ata9.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.845182] scsi 8:0:0:0:
>>>> Direct-Access     ATA      WDC WD10EARS-00Y 80.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    7.845801]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[1:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.846588] ata10.00: ATA-8: WDC
>>>> WD10EADS-00L5B1, 01.01A01, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.846591] ata10.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.847419] ata10.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.847455] scsi 8:0:1:0:
>>>> Direct-Access     ATA      WDC WD10EADS-00L 01.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    7.848069]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[2:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.848894] ata11.00: ATA-8: WDC
>>>> WD10EACS-00D6B1, 01.01A01, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.848897] ata11.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.849713] ata11.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.849751] scsi 8:0:2:0:
>>>> Direct-Access     ATA      WDC WD10EACS-00D 01.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    7.850909]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[3:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.852188] ata12.00: ATA-8: WDC
>>>> WD10EARS-00Y5B1, 80.00A80, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.852192] ata12.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.853488] ata12.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.853524] scsi 8:0:3:0:
>>>> Direct-Access     ATA      WDC WD10EARS-00Y 80.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    7.854665]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[4:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.855955] ata13.00: ATA-8: WDC
>>>> WD10EARS-00Y5B1, 80.00A80, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.855959] ata13.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.857258] ata13.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.857293] scsi 8:0:4:0:
>>>> Direct-Access     ATA      WDC WD10EARS-00Y 80.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    7.858437]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1365:found
>>>> dev[5:5] is gone.
>>>> Mar 18 23:47:35 raidserver kernel: [    7.859713] ata14.00: ATA-8: WDC
>>>> WD10EARS-00Y5B1, 80.00A80, max UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.859716] ata14.00: 1953525168
>>>> sectors, multi 0: LBA48 NCQ (depth 31/32)
>>>> Mar 18 23:47:35 raidserver kernel: [    7.861014] ata14.00: configured
>>>> for UDMA/133
>>>> Mar 18 23:47:35 raidserver kernel: [    7.861051] scsi 8:0:5:0:
>>>> Direct-Access     ATA      WDC WD10EARS-00Y 80.0 PQ: 0 ANSI: 5
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831403] sd 8:0:0:0: Attached
>>>> scsi generic sg5 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831507] sd 8:0:1:0: Attached
>>>> scsi generic sg6 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831610] sd 8:0:2:0: Attached
>>>> scsi generic sg7 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831713] sd 8:0:3:0: Attached
>>>> scsi generic sg8 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831823] sd 8:0:4:0: Attached
>>>> scsi generic sg9 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.831927] sd 8:0:5:0: Attached
>>>> scsi generic sg10 type 0
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832481] sd 8:0:0:0: [sdf]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832485] sd 8:0:1:0: [sdg]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832547] sd 8:0:1:0: [sdg]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832551] sd 8:0:0:0: [sdf]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832555] sd 8:0:0:0: [sdf]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832559] sd 8:0:1:0: [sdg]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832586] sd 8:0:0:0: [sdf]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832590] sd 8:0:1:0: [sdg]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832806]  sdg:
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832853]  sdf:
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832914] sd 8:0:2:0: [sdh]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832953] sd 8:0:2:0: [sdh]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832956] sd 8:0:2:0: [sdh]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.832976] sd 8:0:2:0: [sdh]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833087]  sdh:
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833182] sd 8:0:3:0: [sdi]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833197] sd 8:0:4:0: [sdj]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833238] sd 8:0:3:0: [sdi]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833241] sd 8:0:3:0: [sdi]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833250] sd 8:0:4:0: [sdj]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833252] sd 8:0:4:0: [sdj]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833270] sd 8:0:3:0: [sdi]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833279] sd 8:0:4:0: [sdj]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833442]  sdi:
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833467]  sdj:
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833553] sd 8:0:5:0: [sdk]
>>>> 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833593] sd 8:0:5:0: [sdk]
>>>> Write Protect is off
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833596] sd 8:0:5:0: [sdk]
>>>> Mode Sense: 00 3a 00 00
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833617] sd 8:0:5:0: [sdk]
>>>> Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
>>>> Mar 18 23:47:35 raidserver kernel: [    8.833734]  sdk: unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    8.846211] sd 8:0:2:0: [sdh]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [    8.846599]  unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    8.846759] sd 8:0:1:0: [sdg]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [    9.316040]  unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    9.316243] sd 8:0:3:0: [sdi]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [    9.316249]  unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    9.316404] sd 8:0:0:0: [sdf]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [    9.317860]  unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    9.318033] sd 8:0:4:0: [sdj]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [    9.321964]  unknown partition table
>>>> Mar 18 23:47:35 raidserver kernel: [    9.322127] sd 8:0:5:0: [sdk]
>>>> Attached SCSI disk
>>>> Mar 18 23:47:35 raidserver kernel: [   12.220036]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 0
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.220039]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 0
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.330034]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 1
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.330037]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 1
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.440035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 2
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.440037]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 2
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.550034]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 3
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.550038]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 3
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.660035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 4
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.660038]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 4
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.770035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 5
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.770037]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 5
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.880035]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 6
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.880037]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 6
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.990034]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1214:port 7
>>>> attach dev info is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.990037]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1216:port 7
>>>> attach sas addr is 0
>>>> Mar 18 23:47:35 raidserver kernel: [   12.990043] scsi9 : mvsas
>>>> Mar 18 23:47:35 raidserver kernel: [   13.595116] md: bind<sdi>
>>>> Mar 18 23:47:35 raidserver kernel: [   13.651656] md: bind<sdf>
>>>> Mar 18 23:47:35 raidserver kernel: [   13.653928] md: bind<sdj>
>>>> Mar 18 23:47:35 raidserver kernel: [   13.854601] md: bind<sdk>
>>>> Mar 18 23:47:35 raidserver kernel: [   14.055322] md: bind<sdh>
>>>> Mar 18 23:47:35 raidserver kernel: [   14.255683] md: bind<sdg>
>>>> Mar 18 23:47:35 raidserver kernel: [   14.259239] xor: automatically
>>>> using best checksumming function: generic_sse
>>>> Mar 18 23:47:35 raidserver kernel: [   14.300015]    generic_sse:
>>>> 6593.600 MB/sec
>>>> Mar 18 23:47:35 raidserver kernel: [   14.300018] xor: using function:
>>>> generic_sse (6593.600 MB/sec)
>>>> Mar 18 23:47:35 raidserver kernel: [   14.300599] async_tx: api
>>>> initialized (async)
>>>> Mar 18 23:47:35 raidserver kernel: [   14.470026] raid6: int64x1   1711 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   14.640019] raid6: int64x2   2392 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   14.810046] raid6: int64x4   1567 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   14.980047] raid6: int64x8   1540 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   15.150016] raid6: sse2x1    2931 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   15.320030] raid6: sse2x2    3916 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   15.490023] raid6: sse2x4    4088 MB/s
>>>> Mar 18 23:47:35 raidserver kernel: [   15.490025] raid6: using
>>>> algorithm sse2x4 (4088 MB/s)
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493236] md: raid6
>>>> personality registered for level 6
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493240] md: raid5
>>>> personality registered for level 5
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493242] md: raid4
>>>> personality registered for level 4
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493642] raid5: md0 is not
>>>> clean -- starting background reconstruction
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493645] raid5:
>>>> reshape_position too early for auto-recovery - aborting.
>>>> Mar 18 23:47:35 raidserver kernel: [   15.493647] md: pers->run() failed ...
>>>> Mar 18 23:47:35 raidserver kernel: [   15.931347] md: linear
>>>> personality registered for level -1
>>>> Mar 18 23:47:35 raidserver kernel: [   15.934321] md: multipath
>>>> personality registered for level -4
>>>> Mar 18 23:47:35 raidserver kernel: [   15.936772] md: raid0
>>>> personality registered for level 0
>>>> Mar 18 23:47:35 raidserver kernel: [   15.940395] md: raid1
>>>> personality registered for level 1
>>>> Mar 18 23:47:35 raidserver kernel: [   15.949846] md: raid10
>>>> personality registered for level 10
>>>> #syslog of the segfault
>>>>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.028406] md: md0 stopped.
>>>> Mar 18 23:41:47 raidserver kernel: [  155.028443] md: unbind<sdg>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.051309] md: export_rdev(sdg)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.051464] md: unbind<sdk>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.091288] md: export_rdev(sdk)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.091418] md: unbind<sdj>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.131274] md: export_rdev(sdj)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.131407] md: unbind<sdi>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.161277] md: export_rdev(sdi)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.161421] md: unbind<sdl>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.191275] md: export_rdev(sdl)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.191400] md: unbind<sdh>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.221276] md: export_rdev(sdh)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.221403] md: unbind<sda>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.251276] md: export_rdev(sda)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.251385] md: unbind<sdb>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.281277] md: export_rdev(sdb)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.281379] md: unbind<sdc>
>>>> Mar 18 23:41:47 raidserver kernel: [  155.311276] md: export_rdev(sdc)
>>>> Mar 18 23:41:47 raidserver kernel: [  155.311377] md: unbind<sdd>
>>>> Mar 18 23:41:47 raidserver mdadm[1738]: DeviceDisappeared event
>>>> detected on md device /dev/md0
>>>> Mar 18 23:41:47 raidserver kernel: [  155.341274] md: export_rdev(sdd)
>>>> Mar 18 23:43:46 raidserver kernel: [  274.246878] md: md0 stopped.
>>>> Mar 18 23:45:17 raidserver kernel: [  365.227246] md: md0 stopped.
>>>> Mar 18 23:45:18 raidserver kernel: [  365.828455] __ratelimit: 30
>>>> callbacks suppressed
>>>> Mar 18 23:45:18 raidserver kernel: [  365.828466] mdadm[2874]:
>>>> segfault at 38 ip 00000000004184ff sp 00007fffc2aa6bd0 error 4 in
>>>> mdadm[400000+2a000]
>>>>
>>>> #syslog of the grow and subsequent reboot
>>>>
>>>> Mar 18 23:27:41 raidserver kernel: [  861.411806] md: bind<sdf>
>>>> Mar 18 23:27:42 raidserver ata_id[2638]: HDIO_GET_IDENTITY failed for '/dev/sdi'
>>>> Mar 18 23:27:42 raidserver kernel: [  862.024028] md: bind<sdi>
>>>> Mar 18 23:27:43 raidserver ata_id[2650]: HDIO_GET_IDENTITY failed for '/dev/sdj'
>>>> Mar 18 23:27:43 raidserver kernel: [  863.133531] md: bind<sdj>
>>>> Mar 18 23:27:43 raidserver ata_id[2658]: HDIO_GET_IDENTITY failed for '/dev/sdk'
>>>> Mar 18 23:27:43 raidserver kernel: [  863.285276] md: bind<sdk>
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792375] RAID5 conf printout:
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792385]  --- rd:10 wd:10
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792393]  disk 0, o:1, dev:sdb
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792398]  disk 1, o:1, dev:sdg
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792402]  disk 2, o:1, dev:sdh
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792406]  disk 3, o:1, dev:sdd
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792410]  disk 4, o:1, dev:sda
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792415]  disk 5, o:1, dev:sdc
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792419]  disk 6, o:1, dev:sdk
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792443] RAID5 conf printout:
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792447]  --- rd:10 wd:10
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792450]  disk 0, o:1, dev:sdb
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792454]  disk 1, o:1, dev:sdg
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792457]  disk 2, o:1, dev:sdh
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792461]  disk 3, o:1, dev:sdd
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792465]  disk 4, o:1, dev:sda
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792469]  disk 5, o:1, dev:sdc
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792472]  disk 6, o:1, dev:sdk
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792476]  disk 7, o:1, dev:sdj
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792484] RAID5 conf printout:
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792487]  --- rd:10 wd:10
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792491]  disk 0, o:1, dev:sdb
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792494]  disk 1, o:1, dev:sdg
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792498]  disk 2, o:1, dev:sdh
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792502]  disk 3, o:1, dev:sdd
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792506]  disk 4, o:1, dev:sda
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792509]  disk 5, o:1, dev:sdc
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792513]  disk 6, o:1, dev:sdk
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792517]  disk 7, o:1, dev:sdj
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792520]  disk 8, o:1, dev:sdi
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792528] RAID5 conf printout:
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792531]  --- rd:10 wd:10
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792535]  disk 0, o:1, dev:sdb
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792538]  disk 1, o:1, dev:sdg
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792542]  disk 2, o:1, dev:sdh
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792545]  disk 3, o:1, dev:sdd
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792549]  disk 4, o:1, dev:sda
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792552]  disk 5, o:1, dev:sdc
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792556]  disk 6, o:1, dev:sdk
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792559]  disk 7, o:1, dev:sdj
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792563]  disk 8, o:1, dev:sdi
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792567]  disk 9, o:1, dev:sdf
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792713] md: reshape of RAID array md0
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792722] md: minimum
>>>> _guaranteed_  speed: 1000 KB/sec/disk.
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792728] md: using maximum
>>>> available idle IO bandwidth (but not more than 200000 KB/sec) for
>>>> reshape.
>>>> Mar 18 23:28:35 raidserver kernel: [  914.792746] md: using 128k
>>>> window, over a total of 976762368 blocks.
>>>> Mar 18 23:28:35 raidserver mdadm[1627]: RebuildStarted event detected
>>>> on md device /dev/md0
>>>> Mar 18 23:28:35 raidserver mdadm[1627]: SpareActive event detected on
>>>> md device /dev/md0, component device /dev/sdk
>>>> Mar 18 23:28:35 raidserver mdadm[1627]: SpareActive event detected on
>>>> md device /dev/md0, component device /dev/sdj
>>>> Mar 18 23:28:35 raidserver mdadm[1627]: SpareActive event detected on
>>>> md device /dev/md0, component device /dev/sdi
>>>> Mar 18 23:28:35 raidserver mdadm[1627]: SpareActive event detected on
>>>> md device /dev/md0, component device /dev/sdf
>>>> Mar 18 23:29:05 raidserver kernel: [  945.010492]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:29:05 raidserver kernel: [  945.010501]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:29:05 raidserver kernel: [  945.010517]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:29:05 raidserver kernel: [  945.010523]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:29:36 raidserver kernel: [  976.010049]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:29:36 raidserver kernel: [  976.010058]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:29:36 raidserver kernel: [  976.010071]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:29:36 raidserver kernel: [  976.010077]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:30:07 raidserver kernel: [ 1007.010495]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:30:07 raidserver kernel: [ 1007.010504]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:30:07 raidserver kernel: [ 1007.010518]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:30:07 raidserver kernel: [ 1007.010525]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:30:38 raidserver kernel: [ 1038.010051]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:30:38 raidserver kernel: [ 1038.010060]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:30:38 raidserver kernel: [ 1038.010075]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:30:38 raidserver kernel: [ 1038.010081]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:31:09 raidserver kernel: [ 1069.010493]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:31:09 raidserver kernel: [ 1069.010503]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:31:09 raidserver kernel: [ 1069.010517]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:31:09 raidserver kernel: [ 1069.010523]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620078] INFO: task
>>>> md0_reshape:2679 blocked for more than 120 seconds.
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620086] "echo 0 >
>>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620092] md0_reshape   D
>>>> 00000000ffffffff     0  2679      2 0x00000000
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620103]  ffff8800441e1ad0
>>>> 0000000000000046 ffff8800441e1a80 0000000000015880
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620113]  ffff880068199a60
>>>> 0000000000015880 0000000000015880 0000000000015880
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620122]  0000000000015880
>>>> ffff880068199a60 0000000000015880 0000000000015880
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620131] Call Trace:
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620169]
>>>> [<ffffffffa01d6ce1>] get_active_stripe+0x2a1/0x360 [raid456]
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620185]
>>>> [<ffffffff81053d60>] ? default_wake_function+0x0/0x10
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620197]
>>>> [<ffffffffa01d92e0>] reshape_request+0x4a0/0x980 [raid456]
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620210]
>>>> [<ffffffffa01d9ada>] sync_request+0x31a/0x3a0 [raid456]
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620221]
>>>> [<ffffffffa01d69ae>] ? raid5_unplug_device+0x7e/0x110 [raid456]
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620233]
>>>> [<ffffffff813daa1e>] md_do_sync+0x5fe/0xba0
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620242]
>>>> [<ffffffff813db774>] md_thread+0x44/0x120
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620249]
>>>> [<ffffffff813db730>] ? md_thread+0x0/0x120
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620257]
>>>> [<ffffffff81078746>] kthread+0xa6/0xb0
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620266]
>>>> [<ffffffff810130ea>] child_rip+0xa/0x20
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620273]
>>>> [<ffffffff810786a0>] ? kthread+0x0/0xb0
>>>> Mar 18 23:31:20 raidserver kernel: [ 1080.620279]
>>>> [<ffffffff810130e0>] ? child_rip+0x0/0x20
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010040]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010049]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010065]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010072]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010127] sd 8:0:1:0: [sdg]
>>>> Unhandled error code
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010132] sd 8:0:1:0: [sdg]
>>>> Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010141] end_request: I/O
>>>> error, dev sdg, sector 0
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010310] sd 8:0:2:0: [sdh]
>>>> Unhandled error code
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010314] sd 8:0:2:0: [sdh]
>>>> Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
>>>> Mar 18 23:31:40 raidserver kernel: [ 1100.010321] end_request: I/O
>>>> error, dev sdh, sector 8
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010514]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010523]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010545]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010551]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010562]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010567]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010580]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010585]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010596]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010601]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010612]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:11 raidserver kernel: [ 1131.010617]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010054]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010063]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010084]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010090]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010101]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010107]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010119]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010125]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010136]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010142]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010153]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:32:42 raidserver kernel: [ 1162.010159]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010053]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010062]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010083]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010089]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010100]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010106]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010118]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010123]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010134]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010139]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010150]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:33:13 raidserver kernel: [ 1193.010156]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:33:16 raidserver ata_id[2906]: HDIO_GET_IDENTITY failed for '/dev/sdj'
>>>> Mar 18 23:33:16 raidserver ata_id[2910]: HDIO_GET_IDENTITY failed for '/dev/sdk'
>>>> Mar 18 23:33:16 raidserver ata_id[2911]: HDIO_GET_IDENTITY failed for '/dev/sdi'
>>>> Mar 18 23:33:34 raidserver kernel: [ 1213.857434] md: md0 still in use.
>>>> Mar 18 23:33:40 raidserver kernel: [ 1219.907801] EXT4-fs: mballoc: 0
>>>> blocks 0 reqs (0 success)
>>>> Mar 18 23:33:40 raidserver kernel: [ 1219.907801] EXT4-fs: mballoc: 0
>>>> extents scanned, 0 goal hits, 0 2^N hits, 0 breaks, 0 lost
>>>> Mar 18 23:33:40 raidserver kernel: [ 1219.907801] EXT4-fs: mballoc: 0
>>>> generated and it took 0
>>>> Mar 18 23:33:40 raidserver kernel: [ 1219.907801] EXT4-fs: mballoc: 0
>>>> preallocated, 0 discarded
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010051]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010061]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010087]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010093]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010109]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010115]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010130]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010135]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010150]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010155]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010166]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:10 raidserver kernel: [ 1250.010171]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010517]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010528]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010550]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010556]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010572]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010578]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010592]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010597]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010611]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010617]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010627]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1669:mvs_abort_task:rc= 5
>>>> Mar 18 23:34:41 raidserver kernel: [ 1281.010633]
>>>> /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
>>>> 1608:mvs_query_task:rc= 5
>>>> Mar 18 23:35:06 raidserver kernel: Kernel logging (proc) stopped.
>>>> Mar 18 23:35:06 raidserver rsyslogd: [origin software="rsyslogd"
>>>> swVersion="4.2.0" x-pid="1027" x-info="http://www.rsyslog.com"]
>>>> exiting on signal 15.
>>>> Mar 18 23:39:31 raidserver kernel: imklog 4.2.0, log source =
>>>> /var/run/rsyslog/kmsg started.
>>>> Mar 18 23:39:31 raidserver rsyslogd: [origin software="rsyslogd"
>>>> swVersion="4.2.0" x-pid="647" x-info="http://www.rsyslog.com"]
>>>> (re)start
>>>> Mar 18 23:39:31 raidserver rsyslogd: rsyslogd's groupid changed to 102
>>>> Mar 18 23:39:31 raidserver rsyslogd: rsyslogd's userid changed to 101
>>>>
>>>> --
>>>> Stephan E Stachurski
>>>> 773-315-1684
>>>> ses1984@xxxxxxxxx
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>> --
>> Stephan E Stachurski
>> 773-315-1684
>> ses1984@xxxxxxxxx
>>
>
> As Neil Brown stated in his reply, the issue is more complicated than that.
>
> Your array has entered a largely unknown state, likely while it was
> working within the critical section.  Some parts may be stored as they
> were before, while others may be stored as things should look now.
> This is why the critical-section backup and possible recovery is so
> important.  Please follow Neil Brown's directions /very/ carefully.
> You probably also want to save the start of each device.
>
> dd if=/dev/container of=someplace/array_dev_X.raw bs=1024k count=64
>
> run for each device in the array (saving to a separate file per
> device) would probably be a tolerable safety net to start with.  These
> much smaller segments are also easier to load into a hex editor.
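>
> Spelled out as a loop over the members visible in your logs (adjust
> the names and destination to your system; the destination needs about
> 640MB free and must not live on the array itself):
>
>  for d in sda sdb sdc sdd sdf sdg sdh sdi sdj sdk; do
>      dd if=/dev/$d of=/backup/array_dev_$d.raw bs=1024k count=64
>  done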
>



-- 
Stephan E Stachurski
773-315-1684
ses1984@xxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
