Thank you so much! It took me half a day to go from uber-newbie to being able
to run the new image (I've never done the patch -> rpmbuild thing before).
Strangely enough, though, this patch only dramatically REDUCED the number of
"Buffer I/O error on device hde1, logical block BLA-BLA-BLA" errors; it did
not eliminate them. So now I'm more than tempted to go into nash and turn off
all attempts to access hde, sil60, or whatever I have to in order to get it to
boot quickly and quietly. After all, on this machine I know exactly where the
root filesystem is. Any advice?

Charlweed

________________________________

From: James Olson [mailto:big_spender12@xxxxxxxxx]
Sent: Thursday, October 26, 2006 6:39 PM
To: ATARAID (eg, Promise Fasttrak, Highpoint 370) related discussions
Subject: [RE] Initrd boot phase errors reading bogus partition on ATA raid drive

I had a similar problem some months ago. It was caused by the Red Hat nash
program's mount command in the initrd probing drives when it shouldn't (for
example, when you mount the /proc filesystem). I wrote a patch to the nash
source code to fix it on my system.

# diff -Naur block.c.orig block.c
--- block.c.orig	2006-03-08 11:46:59.000000000 -0800
+++ block.c	2006-03-30 02:49:19.000000000 -0800
@@ -337,6 +337,23 @@
     return NULL;
 }
 
+static char *
+block_populate_cache()
+{
+    bdev_iter biter;
+    bdev dev = NULL;
+    blkid_dev bdev = NULL;
+
+    biter = block_sysfs_iterate_begin("/sys/block");
+    while (block_sysfs_next(biter, &dev) >= 0) {
+
+        bdev = blkid_get_dev(cache, dev->dev_path, BLKID_DEV_FIND);
+    }
+    block_sysfs_iterate_end(&biter);
+
+    return NULL;
+}
+
 char *
 block_find_fs_by_label(const char *label)
 {
@@ -356,7 +373,7 @@
 
     if (!access("/sys/block", F_OK)) {
         /* populate the whole cache */
-        block_find_fs_by_keyvalue("unlikely","unlikely");
+        block_populate_cache();
 
         /* now look our device up */
         bdev = blkid_get_dev(cache, name, BLKID_DEV_NORMAL);
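In case it helps to see why this quiets things down: as far as I can tell, the
old cache-population path ended up opening and reading every device under
/sys/block, while BLKID_DEV_FIND only consults the existing blkid cache and
never touches the disk. Here is a rough standalone sketch using only the
public libblkid calls (blkid_get_cache / blkid_get_dev / blkid_put_cache), not
the nash code itself; the /dev/hde1 name is just the device from your error
messages, and the program is purely for illustration:

/*
 * blkid_demo.c -- illustration only, not part of nash.
 * Shows the difference between BLKID_DEV_FIND (cache lookup only,
 * no device I/O) and BLKID_DEV_NORMAL (create + verify the entry,
 * which opens and reads the device).
 */
#include <stdio.h>
#include <blkid/blkid.h>

int main(int argc, char **argv)
{
    const char *name = (argc > 1) ? argv[1] : "/dev/hde1";
    blkid_cache cache;
    blkid_dev dev;

    /* NULL means use the default cache file */
    if (blkid_get_cache(&cache, NULL) < 0)
        return 1;

    /* FIND: returns NULL unless the device is already cached;
     * it never reads the disk, so no "Buffer I/O error" spew. */
    dev = blkid_get_dev(cache, name, BLKID_DEV_FIND);
    printf("FIND:   %s\n", dev ? "already in cache" : "not cached, not probed");

    /* NORMAL: creates and verifies the entry, which does read the
     * device -- the kind of probe that hits the bogus partition on
     * the raid member. */
    dev = blkid_get_dev(cache, name, BLKID_DEV_NORMAL);
    printf("NORMAL: %s\n", dev ? "probed" : "probe failed");

    blkid_put_cache(cache);
    return 0;
}

Build with something like "gcc -o blkid_demo blkid_demo.c -lblkid" (libblkid
comes from e2fsprogs-devel on FC5, I believe).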
---------[ Received Mail Content ]----------

>Subject : Initrd boot phase errors reading bogus partition on ATA raid drive
>Date : Thu, 26 Oct 2006 15:19:01 -0700
>From : "charlweed"
>To :
>
>Hi gurus!
>
>Booting my Fedora 5 (2.6.18-1.2200.fc5) system takes an extra couple of
>minutes because of disk errors. The system is trying to "do something" with
>a partition on a drive that is part of a raid set, and failing. These errors
>occur during the initrd boot phase. After the system boots, the system is
>apparently OK.
>
>When I boot Linux, I get several screens' worth of the following error:
>
>  Buffer I/O error on device hde1, logical block 625153152
>
>hde is part of a 2-disk striped raid set. I can stop the errors if I add
>"hde=noprobe hdf=noprobe" as a kernel boot parameter, but then I cannot use
>dmraid to access my raid partition, because /dev/hde & /dev/hdf don't exist.
>I tried adding the boot parameter "hde=19457,255,63", but the device remains
>invisible.
>
>My naive guess is that I can solve my problem by either
>
>1) Getting dmraid to see my drives after a boot that uses "noprobe", or
>
>2) Stopping whatever program is trying to access hde1 during the initrd
>boot.
>
>All my attempts at 1) have failed, and I have no idea how to do 2).
>
>My system has:
>
>  An Abit AN7 nforce chipset motherboard with
>    2 ATA onboard channels
>    onboard Si3112 SATA RAID (I use SATA, but not the SATA raid)
>  A SiI0680 (CMD 680) PCI ATA card
>
>The drive layout is:
>
>   hda   onboard ata   [hda1=/boot, ext3 : hda2=unmounted, ntfs]
>   hdc   onboard ata   [hdc1=/, ext3 : hdc2=swap : hdc3=/var, ext3]
>** hde   SiI0680       striped raid set 1
>   hdf   SiI0680       [hdf1=unmounted, ntfs]
>** hdg   SiI0680       striped raid set 1
>   sda   onboard SATA  [sda1=unmounted, ntfs]
>** dm-0  raid set 1    [dm-0p1=unmounted, ntfs]
>
>Thanks!
>
>Charlweed

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list