On Mon, Aug 30, 2010 at 10:46:26PM -0500, Robert wrote:
> On 08/30/2010 09:24 PM, fred smith informed us:
> > <snip>another curious thing I just noticed is this: the list of
> > kernels available at boot time (in the actual grub menu shown at
> > boot) IS NOT THE SAME LIST THAT APPEARS IN GRUB.CONF. in the
> > boot-time menu, the kernel it boots is the most recent one shown,
> > and there are other older ones that do not appear in grub.conf,
> > while in grub.conf there are several newer ones that do not appear
> > on the boot-time grub menu.
> >
> > most strange.
> >
> > BTW, this is a raid-1 array using linux software raid, with two
> > matching drives. Is there possibly some way the two drives could
> > have gotten out of sync such that whichever one is the actual boot
> > device has invalid info in /boot?
> >
> > and while thinking along those lines, I see a number of mails in
> > root's mailbox from "md" notifying us of a degraded array. these
> > all appear to have happened, AFAICT, at system boot, over the last
> > several months.
> >
> > also, /var/log/messages contains a bunch of stuff like the below,
> > also apparently at system boot, and I don't really know what it
> > means, though
> > <snip>
>
> This is not the magic solution that you quite understandably would
> prefer. I hope someone can pinpoint your trouble. UNTIL THEN, I think
> you would be 'way ahead to make a full backup (or 2) to an external
> drive, disconnect that baby, and start troubleshooting, confident
> that you won't lose all your data.
>
> I'll bet that "cat /proc/mdstat" looks really scary.
> Mine looks like this:
>
> [root@madeleine grub]# cat /proc/mdstat
> Personalities : [raid1]
> md0 : active raid1 sdb1[1] sda1[0]
>       409536 blocks [2/2] [UU]
>
> md2 : active raid1 sdb3[1] sda3[0]
>       3903680 blocks [2/2] [UU]
>
> md3 : active raid1 sdb4[1] sda4[0]
>       108502912 blocks [2/2] [UU]
>
> md1 : active raid1 sdb2[1] sda2[0]
>       375567488 blocks [2/2] [UU]
>
> unused devices: <none>
> [root@madeleine grub]#

here's mine (indented for readability):

    cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdb1[1]
          104320 blocks [2/1] [_U]

    md1 : active raid1 sdb2[1]
          312464128 blocks [2/1] [_U]

    unused devices: <none>

> Other than that, the system boots from /boot/grub/grub.conf and that
> should be what you see during the boot process. The other two,
> /etc/grub.conf and /boot/grub/menu.lst, are symlinks to the real deal.

yes, they're all symlinked correctly.

> It might be interesting to have a look at /etc/fstab then issue a
> mount command with no arguments to see if anything is mounted on /boot

hmmmm.... I find this in /etc/fstab:

    /dev/md0    /boot    ext3    defaults    1 2

and this in the output of a bare mount command:

    /dev/md0 on /boot type ext3 (rw)

so those look OK.

> You might find valuable RAID 1 information at:
> http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub-configuration-centos-5.3

I'll take a look at that link. thanks. I'll also dig for the HOWTO I
used when setting it up. As I look at this I recall that I had to tweak
the scripts that create the initrd, so if one of the updates since has
reinstalled those scripts, I may no longer be getting the desired
initrd built. sounds ominous...

Thanks for the info!

-- 
---- Fred Smith -- fredex@xxxxxxxxxxxxxxxxxxxxxx -----------------------------
 "For the word of God is living and active. Sharper than any double-edged
  sword, it penetrates even to dividing soul and spirit, joints and marrow;
  it judges the thoughts and attitudes of the heart."
---------------------------- Hebrews 4:12 (niv) ------------------------------
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
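[Archive note: the `[2/1] [_U]` status in Fred's mdstat means each mirror is
running on one member only, with the sda half dropped out -- which would also
explain why the grub menu (read from whichever disk the BIOS boots) no longer
matches grub.conf on the assembled array. A minimal sketch of spotting the
degraded state, run here against a pasted copy of Fred's output so it is safe
to execute anywhere; the commented mdadm commands are illustrative only, and
the member names /dev/sda1 and /dev/sda2 are assumptions to be verified with
`fdisk -l` before re-adding anything:]

```shell
#!/bin/sh
# A status field like [_U] or [U_] marks a RAID-1 array that is missing
# one mirror member. We scan a pasted sample of Fred's /proc/mdstat;
# on a live box you would read /proc/mdstat itself.
mdstat_sample='Personalities : [raid1]
md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      312464128 blocks [2/1] [_U]

unused devices: <none>'

# Count lines whose status brackets contain an underscore (a hole).
degraded=$(printf '%s\n' "$mdstat_sample" | grep -c '\[U*_U*\]')
echo "degraded arrays: $degraded"

# On the real system (AFTER a full backup, per Robert's advice),
# re-adding the dropped members would look like this -- sda1/sda2 are
# assumed from the [_U] output above, so confirm them first:
#   mdadm --manage /dev/md0 --add /dev/sda1
#   mdadm --manage /dev/md1 --add /dev/sda2
#   watch cat /proc/mdstat      # follow the resync progress
```

Once the resync finishes, both arrays should show `[2/2] [UU]` again, and
reinstalling grub on the second drive keeps the box bootable if either disk
fails.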