Hello,

Those WD RE drives are intended for use with TLER-aware RAID controllers
and should not be attached directly to a server through a regular
controller as individual drives. The reason is that TLER (Time-Limited
Error Recovery) will sometimes give up on recovering data if it takes
longer than a set period of time to do so. This does not bother a
TLER-aware controller, since it just rebuilds the data from parity and
knows not to treat the drive as failed unless the issue keeps recurring
in the same spot. If you use these drives as regular drives, they can
report more errors than a non-RE drive, which is probably why you're
getting the resyncs.

The difference between the two systems could simply be the number of
borderline flaws on the drive. These would normally be recovered by the
drive itself, but are instead passed up to the OS because the drive
assumes a RAID controller can make better decisions about what to do.

You may want to get yourself a Promise RAID controller, as they are
considered TLER-aware, although their approach is quite simple, i.e.
retry several times before assuming it's a hard error (you might as well
not have TLER with that approach). Better yet, get non-RE drives... A few
quick checks you can run on both boxes are sketched at the end of this
mail.

Regards,
John

On 6/13/07 3:28 PM, "Leonard Smith" <lrsmith@gmail.com> wrote:

> Forgot to mention, the drives are WD 500 GB "RAID enabled" SATA
> drives. They are some of WD's newer drives, and I think they are
> technically SATA2. The other system also has the same drives.
>
>
> On 6/13/07, Matthew Gillen <me@mattgillen.net> wrote:
>> Leonard Smith wrote:
>>> I am running CentOS 4.4 on two "black box" systems that are
>>> identically configured. Both have 500 GB internal drives (same type)
>>> and were installed using the same kickstart configuration. The
>>> drives are being mirrored using LVM.
>>>
>>> When I check the first system, it is re-syncing and the resync time
>>> and speed are
>>>
>>> finish=18559.6min speed=355K/sec
>>>
>>> On the second system the time and speed are
>>>
>>> finish=89.5min speed=68170K/sec
>>>
>>> I can't figure out why the speeds are different between the two. I
>>> checked /proc/sys/dev/raid/speed_limit_* and they are the same. The
>>> priority of the md processes is the same.
>>>
>>> I tweaked the setting of speed_limit_min and increased the nice
>>> priority of the resync process on the first box, but I could never
>>> get it better than
>>>
>>> finish=4551.8min speed=1445K/sec
>>>
>>> I've googled and I haven't found much more useful information than
>>> to adjust those settings. Besides those settings, what else dictates
>>> the speed used?
>>
>> You might compare the hd settings using 'hdparm'. I'm not sure a
>> factor of 100+ can be explained by an incorrect DMA setting or
>> something like that, but it might be a contributor.
>>
>> Matt
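
P.S. A few rough sketches, in case they help. The device name below
(/dev/sda) and the tools assumed installed (smartmontools, hdparm) are
examples; substitute whatever your boxes actually have.

First, to see whether the slow drive really is hitting borderline
sectors and passing the errors up to the OS, check its SMART data:

    # full SMART dump; watch Reallocated_Sector_Ct and
    # Current_Pending_Sector in the attribute table
    smartctl -a /dev/sda

    # the drive's own error log, to see if it is recording read errors
    smartctl -l error /dev/sda

A growing pending/reallocated count on the first box, and a clean log
on the second, would fit the borderline-flaw theory above.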
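
Second, the md throttles Leonard already found, for reference. The
values are in KB/sec per device, and the 50000 below is only an example
figure, not a recommendation:

    # current floor and ceiling for resync speed
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max

    # raise the floor so normal I/O can't starve the resync (as root)
    echo 50000 > /proc/sys/dev/raid/speed_limit_min

    # then watch the speed/finish estimates change
    watch -n 5 cat /proc/mdstat

Bear in mind these only set a floor and a ceiling; they can't make a
drive that is busy retrying marginal sectors go any faster, which would
explain why tweaking them bought you so little on the first box.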
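
And to follow up on Matt's hdparm suggestion, run something like this
on both boxes and compare the output (on some SATA setups the -d flag
isn't meaningful, but -I and -tT should still work):

    # identify info: supported and currently active (U)DMA modes
    hdparm -I /dev/sda

    # is DMA enabled? (using_dma = 1 means on)
    hdparm -d /dev/sda

    # quick cached vs. buffered read benchmark
    hdparm -tT /dev/sda

If the slow box shows DMA off or a much lower UDMA mode, that's a real
contributor, though as Matt says it may not account for the whole
factor of 100 by itself.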