Hi All,

I have what I believe to be a pretty basic LVM & RAID setup on my CentOS 5 machine.

RAID partitions:
  /dev/sda1, /dev/sdb1
  /dev/sda2, /dev/sdb2
  /dev/sda3, /dev/sdb3

During the install I created a RAID 1 volume, md0, out of sda1/sdb1 for the boot partition, and added sda2/sdb2 to a separate RAID 1 volume (md1). I then set up md1 as an LVM physical volume for the volume group 'system', and left the sda3/sdb3 partitions available for future use. Next I created swap, /, /usr, /var, etc. logical volumes in the 'system' volume group and continued with the install as normal. Everything went fine: I was able to use the system, reboot, etc., without problems.

I then discovered that I needed more space in my /var volume than was available in the 'system' volume group. So I created another RAID device, /dev/md2 (using sda3/sdb3), and created an LVM physical volume on top of it. Finally, I extended the 'system' volume group to include this new physical volume and grew the /var logical volume.

That all worked fine, but on reboot I get a ton of errors from LVM saying that the volume with id xxxx-xxxx-xxxx... was not found, and then the system automatically reboots. This seems to happen for all volumes, not just the ones I changed. The error even appears for a separate volume group (called 'extended') that lives on a separate set of disks and existed prior to the CentOS 5 install.

Any idea what step I missed? I know things are still fine on the disks, because when I boot the CentOS DVD with the 'linux rescue' option, all of the RAID & LVM volumes are available for use. From that it seems I need to update some CentOS config file?

Here are some config files: http://pastebin.com/m6d5075dc

Thanks!
Nick
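
P.S. In case it helps, these are roughly the commands I used to add the new array and grow /var (typed from memory, so the size and exact device/LV names may be slightly off):

  # create the new RAID 1 array from the spare partitions
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

  # turn it into an LVM physical volume and add it to the existing volume group
  pvcreate /dev/md2
  vgextend system /dev/md2

  # grow the /var logical volume and its ext3 filesystem
  lvextend -L +20G /dev/system/var
  resize2fs /dev/system/var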
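
P.P.S. My own guess at the missing step (I haven't tried this yet, so please correct me if it's the wrong approach) is that the new md2 array isn't being assembled at boot because /etc/mdadm.conf and the initrd don't know about it. Something like:

  # add the new array to /etc/mdadm.conf (then check/merge the file by hand)
  mdadm --detail --scan >> /etc/mdadm.conf

  # rebuild the initrd so the array gets assembled at boot
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

Does that sound right, or am I off track?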