Hello,

I have similar findings related to the Srinivas patch.

1) With the patch from Srinivas applied to 2.6.32.7, I cannot get my
raid6 on its knees (yet). Using 8 drives on the Marvell controller and
1 drive on the onboard sata_nv.

Created the raid6:

mdadm --create /dev/md2 --verbose --level=6 --chunk=1024 --raid-devices=9 /dev/sd[bcdefghij]1

XFS on top:

mkfs.xfs -f -d su=1m,sw=7 /dev/md2

During the first raid resync, I'm also dumping 2TB of data onto this
11TB xfs volume. It no longer drops drives. Currently copied 1.4T
without glitches.

2) With the patches from Andy Yan applied to 2.6.32.3, the first resync
worked, but when also stressing the system by copying data during the
initial resync, I observed mvs_abort_task errors, which would kick
drives out of the raid after 0.8TB of copying, or even sooner.

3) Without any patches, on a stock Fedora Core 12 kernel, the initial
resync NEVER worked.

So if I can fill my 11TB volume with data, no drives are ever kicked
out, and xfs does not get corrupt, this patch is a huge improvement.
But filling it up will take a few more days. I'll report the status
when done.

Thanks!

On Tue, Feb 23, 2010 at 11:11 AM, Caspar Smit <c.smit@xxxxxxxxxx> wrote:
> Hi Srinivas,
>
> I finally had some time to test your new patch.
>
> 1) After numerous hotplug actions with SAS and SATA disks I still can't
> get any kernel panic to occur :)
>
> 2) I can finally boot a system with 3x 6480 controllers loaded with SATA
> disks without a kernel panic.
>
> 3) Raid5/6 initialization completes without dropping the disks one after
> another.
>
> 4) One thing that occurred was the following: during a raid1
> initialization of 2 SAS disks and a raid5 init of 8x SSDs I got a call
> trace from libata-core.c (see attachment for details). The system
> continued to work fine after the trace.
>
> Great work, this is a much more stable driver now!
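For reference, the mkfs.xfs stripe options in the top post follow directly from the mdadm geometry: a 9-device raid6 has two parity disks, leaving 9 - 2 = 7 data disks for the stripe width, and the 1024K chunk becomes the stripe unit. A minimal shell sketch of that arithmetic (device count and chunk size are taken from the mdadm line above; the variable names are illustrative):

```shell
# Derive XFS stripe parameters from the raid6 geometry:
#   su = md chunk size
#   sw = number of data disks = raid-devices - 2 (raid6 keeps two parity disks per stripe)
ndevices=9        # --raid-devices=9
chunk_kib=1024    # --chunk=1024 (KiB)
sw=$((ndevices - 2))
echo "su=${chunk_kib}k sw=${sw}"   # prints: su=1024k sw=7
```

This matches the mkfs.xfs -d su=1m,sw=7 invocation, so XFS allocation is aligned to full md stripes.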
>
> Kind regards,
> Caspar Smit
>
>> On Wed, Feb 17, 2010 at 12:53 PM, Srinivas Naga Venkatasatya
>> Pasagadugula - ERS, HCL Tech <satyasrinivasp@xxxxxx> wrote:
>>> Hi Smit,
>>>
>>> This patch is not simply a replacement for the Nov-09 patches.
>>> My patch addresses the RAID5/6 issues as well. The issues below are
>>> addressed by my patch:
>>> 1. Tape issues.
>>> 2. RAID-5/6 I/O failures.
>>> 3. LVM I/O failures and a subsequent "init 6" hang (connect SAS+SATA
>>> in cascaded expanders, create a volume group and logical volumes, run
>>> file I/O (alltest), unplug one drive).
>>> 4. Disk stress I/O with a 4096-byte sector size.
>>> 5. Hot insertion of drives causing a panic.
>>> 6. 'fdisk -l' hangs when hot-plugging SATA/SAS drives in an expander
>>> while I/O (Diskstress and alltest) is running, and I/O stops.
>>>
>>> I can't combine my patch with the November-09 patches. James also
>>> rejected those patches as they were not proper. Let me know if you
>>> have issues with my patch.
>>>
>>> --Srini.
>>
>> I haven't tested yet, but it looks like you're doing excellent work,
>> and your documentation/overview of the work is superb.
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at http://vger.kernel.org/majordomo-info.html