Re: RAID Large Disks and UATA Serious Problems

On Fri, 17 Jan 2003, Ed Wilts wrote:

> On Fri, Jan 17, 2003 at 12:55:04PM +0100, zeist wrote:
> > Greetings
> > 
> > I'm facing serious problems with a RAID partition on a UATA/133 bus.
> > The box is a dual Athlon MP with an Asus motherboard running Red Hat 8.0.
> > The entire system except /home sits on two IBM 80 GB disks (on a UATA/100
> > bus) in RAID 1 with ext3.
> > The /home partition sits on three Maxtor 120 GB "DiamondMax Plus 9"
> > UATA/133 disks in RAID 5 with ext3.
> > The system ran fine with the official Red Hat 2.4.18-18.8.0smp kernel and
> > the three Maxtor disks attached to an external Adaptec controller based
> > on the HighPoint HPT370 chipset (note: I used just its IDE bus
> > functionality, since the RAID was software).
> 
> I think you're starting to realize why we like the Red Hat kernels.  Red
> Hat does a *lot* of work testing and patching the kernels so that they
> work well in different configurations.  Many patches don't appear in
> vanilla kernels until a future release.
Well, I didn't just now start to realize that. I have appreciated the Red Hat
kernels ever since I discovered them with release 5.2, and I'm well aware of
the fine tuning and patching work behind them. But I have also never had
problems switching to a vanilla kernel when I needed to (e.g. firewall and
bastion-host kernels with grsec, LSM, StJude, and other hardening patches).

> > When I switched to the 2.4.19 vanilla kernel I started to have problems, first 
> 
> May I ask why you switched?  Although the release number went up, you
> lost a lot of patches that Red Hat had applied.  What problem were you
> trying to solve?
I'm working on debugging the International Crypto Patch (the one from
kerneli.org) and on some applications built around this patch, so I needed a
vanilla kernel that could be patched with the integral version of kerneli.
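
For reference, applying it is nothing exotic; a minimal sketch, assuming a
pristine 2.4.19 tree under /usr/src (the patch file name below is an
example, use whatever kerneli.org actually ships for 2.4.19):

    cd /usr/src/linux-2.4.19
    # apply the International Crypto Patch on top of the vanilla tree
    bzcat ../patch-int-2.4.19.bz2 | patch -p1
    # enable the new "Cryptographic options" section, then build as usual
    make menuconfig
    make dep bzImage modules modules_install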

> My suggestion at this point would be to restore your system back to the
> way it was before you had problems.  Depending on the level of
> corruption you've already created, you may need to re-init the disks and
> restore the files from backups.
I fear this is the only solution.
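
If it comes to that, a minimal sketch of the re-init with raidtools, assuming
/etc/raidtab already describes the three-disk RAID 5 as /dev/md1 (device and
mount names are examples, adjust to the real setup):

    umount /home
    raidstop /dev/md1
    # rebuild the array from scratch; this destroys everything on the members
    mkraid --really-force /dev/md1
    # fresh ext3 filesystem, then restore from the last good backup
    mke2fs -j /dev/md1
    mount /dev/md1 /home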
 
> > There's nothing to do; the problems persist, not only during high I/O
> > throughput but also when I start to store large amounts of data on the
> > partition.
> > I fell back to 2.4.18-18.8.0smp (I didn't install the Promise patch,
> > since it seems to me that it is already included in the Red Hat kernel)
> > without success. It seems that when the data on the disks reaches about
> > 30% of capacity, the troubles and corruptions 
> 
> You should also go back to your original controller.  After all, it
> worked!
Yes, but the Promise controller should also work; I've found lots of people
on the net who are very satisfied with it using the patch released by Promise
(which is already included in the Red Hat kernel :)).
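
Before rebuilding, it's worth ruling out the DMA path itself; a minimal
check with hdparm plus a destructive surface test (the data is lost anyway;
/dev/hde is just an example, use the devices the Promise channels actually
get assigned):

    # confirm the negotiated UDMA mode and that DMA is really enabled
    hdparm -i /dev/hde | grep -i udma
    hdparm -d /dev/hde
    # destructive read/write surface test of the whole disk
    badblocks -svw /dev/hde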
 
> > I'm also thinking about basic design errors, since I have three UATA/133
> > disks attached to a single PCI controller, all with DMA access activated.
> > Can somebody confirm that this could represent a bandwidth problem?
> 
> Even if you have a bandwidth problem, you should not have corruption.
> At worst you should slow down, not corrupt data, unless you have a
> faulty motherboard or controller.
That's the point: lots of people are reporting data corruption with large
disks (120 GB or more) on UATA/100 and UATA/133 interfaces, and I'm trying
to figure out the exact nature of the problem.
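
For what it's worth, a back-of-envelope check with nominal numbers (not
measurements): a 32-bit/33 MHz PCI bus tops out around 133 MB/s, shared by
everything on it, while each DiamondMax Plus 9 sustains something like
40 MB/s, so three disks streaming at once (~120 MB/s) can come close to
saturating the bus. That supports Ed's point: saturation should only cost
throughput; corruption points at the controller, cabling, or driver, not at
bandwidth.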

Greetings
Nicola Ragozzino
----------------------------------------------------------------------
"The true value of a human being can be found in the degree to which he
 has attained liberation from the self"
----------------------------------------------------------------------
GPG/PGP keys available on key-servers
[RSA 2048] PGP Key fingerprint = 82 78 5A 58 8D E0 31 C9  B4 9D 92 04 0D F6 C1 82
[DSA 4096] GPG Key fingerprint = D5 84 BA F3 24 64 7E B6  97 D0 1A 3B F0 40 89 72  E2 CE 1F C5
----------------------------------------------------------------------



-- 
Psyche-list mailing list
Psyche-list@redhat.com
https://listman.redhat.com/mailman/listinfo/psyche-list
