--- On Thu, 21/1/10, Farkas Levente <lfarkas@xxxxxxxxxxx> wrote:

> From: Farkas Levente <lfarkas@xxxxxxxxxxx>
> Subject: Re: Why does one get mismatches?
> To: "Steven Haigh" <netwiz@xxxxxxxxx>
> Cc: "Asdo" <asdo@xxxxxxxxxxxxx>, linux-raid@xxxxxxxxxxxxxxx
> Date: Thursday, 21 January, 2010, 11:48
>
> On 01/21/2010 11:52 AM, Steven Haigh wrote:
> > On Thu, 21 Jan 2010 09:08:42 +0100, Asdo <asdo@xxxxxxxxxxxxx> wrote:
> >> Steven Haigh wrote:
> >>> On Wed, 20 Jan 2010 17:43:45 -0500, Brett Russ <bruss@xxxxxxxxxxx> wrote:
> >>>
> >>> CUT!
> >> Might that be a problem of the disks/controllers?
> >> Jon and Steven, what hardware do you have?
> >
> > I'm running some fairly old hardware on this particular server. It's a
> > dual P3 1GHz.
> >
> > After running a repair on /dev/md2, I now see:
> > # cat /sys/block/md2/md/mismatch_cnt
> > 1536
> >
> > Again, no SMART errors, nothing to indicate a disk problem at all :(
> >
> > As this really keeps killing the machine and it is a live system - the
> > only thing I can really think of doing is to break the RAID and just
> > rsync the drives twice daily :\
>
> The same has happened to many people, and we all hate it since it causes
> a huge load every weekend on most of our servers :-(
> According to Red Hat it's not a bug :-(
>
> --
> Levente
>
> "Si vis pacem para bellum!"
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

Well, I am running a Sempron-based desktop system that has 4 built-in SATA
ports and 2 IDE, plus 2 PCI-E controller cards exposing 2 SATA ports each.

Off the IDE I have 2 x 320GB HDDs split across 3 md arrays: boot/swap/main.
Only the main one is 'check'ed/repaired. I very rarely have a problem there!

On the SATA ports I have 7 HDDs of varying sizes (4x500GB, 2x750GB, 1x1TB)
and makes (Samsung, Hitachi, Seagate) strung together to form what is now a
RAID6 (RAID5 until a couple of weeks ago). On top of that I have a VG split
into ~6 LVs, and in some of those I have mounted SquashFS filesystems.

Until I moved the drive order around at the weekend because of access issues,
I didn't really have any problems - only the occasional issue. I currently
scrub it weekly.

BUT I have only just converted from RAID5 to RAID6 and have probably not run
many checks since, so it could be related to that!
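
For anyone who wants to poke at this by hand: the weekly scrub is just the md
sysfs interface that Steven's mismatch_cnt output above comes from. A rough
sketch, assuming the array is /dev/md2 as in his example (adjust the device
name to suit your setup):

  #!/bin/sh
  # Start a read-only scrub: md reads all copies/parity and counts inconsistencies.
  echo check > /sys/block/md2/md/sync_action

  # Wait for the scrub to finish - sync_action goes back to "idle".
  while [ "$(cat /sys/block/md2/md/sync_action)" != "idle" ]; do
      sleep 60
  done

  # Sectors found not to match during the last check/repair.
  cat /sys/block/md2/md/mismatch_cnt

  # Optionally rewrite the inconsistent stripes (recomputes parity / copies).
  echo repair > /sys/block/md2/md/sync_action

The scheduled scrub that distros ship typically just does the "echo check"
part, which I assume is where the weekend load people are complaining about
comes from.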
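
And for completeness, a RAID5 -> RAID6 conversion like the one above can be
done as an online mdadm reshape. A rough sketch only - /dev/md0, /dev/sdh1 and
the device count are placeholders, not my actual layout:

  # Add the disk that will carry the second parity.
  mdadm /dev/md0 --add /dev/sdh1

  # Reshape in place from RAID5 to RAID6; the backup file protects the
  # critical section at the start of the reshape.
  mdadm --grow /dev/md0 --level=6 --raid-devices=8 \
        --backup-file=/root/md0-reshape.backup

The array stays online while it reshapes; running a fresh 'check' once it has
finished gives a clean mismatch_cnt baseline for the new layout.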