Did you rebuild the initrd? During boot, the initrd provides the statically linked tools and kernel modules needed before the root filesystem is available. If LVM is enabled, it runs a vgchange -a y as part of that process. Since LVM is part of the default initrd, it is probably the drivers for your SAN that you need to get in place - and remember to include the high-end Ethernet drivers that your NAS (I presume?) uses as well.

--
Peter H. Larsen
Technical Architect
Enterprise Security Services
Phone: 703 610 6442 (direct)

-----Original Message-----
From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com] On Behalf Of Eugene Vilensky
Sent: Thursday, September 03, 2009 9:27 AM
To: LVM general discussion and development
Subject: vg availability during boot

Greetings,

I have a host that refuses to recognize my SAN-attached logical volumes during boot. When mounting from /etc/fstab, it drops out of boot complaining that the LVs in question have an invalid superblock and need to be fsck-ed; however, because the LV is "mounted", the fsck cannot proceed. I commented out the entries, rebooted, and noticed that the LVs in the VG in question are "NOT available". A vgchange -a y vgname later and everything is working perfectly.

This is my first RHEL5 SAN-attached host; might I have missed some sort of 'vg persistency' setting by doing the same things I've been doing on RHEL4?

Thanks,

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
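
For reference, a minimal sketch of the steps described above, assuming RHEL5's mkinitrd tooling and using qla2xxx purely as an example module name for whichever HBA/SAN driver the host actually needs:

    # Rebuild the initrd for the running kernel, forcing the SAN driver in.
    # (qla2xxx is an example; substitute the real HBA driver module.)
    mkinitrd --with=qla2xxx -f /boot/initrd-$(uname -r).img $(uname -r)

    # Activate the volume group by hand to confirm the LVs become available.
    # (vgname is a placeholder for the affected volume group.)
    vgchange -a y vgname

On RHEL5, mkinitrd also picks up drivers listed as "alias scsi_hostadapterN <module>" lines in /etc/modprobe.conf, so adding the HBA driver there before rebuilding should have the same effect.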