Isolation of volume groups

Dear list,

I have an LVM (device-mapper) setup with two volume groups, which I'll call "ravol" and "datvol". The ravol VG has two LVs, both carrying XFS filesystems; the datvol VG also has two LVs, one with XFS and one with ReiserFS.
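For concreteness, the layout is roughly as follows (the PV names here are made up; the real disks are different, and datvol spans more than one PV):

    # illustrative layout only, not my actual device names
    PV /dev/sda1 -> VG ravol  -> LVs ravol/a (XFS), ravol/b (XFS)
    PV /dev/sdb1 -> VG datvol -> LVs datvol/a (XFS), datvol/b (ReiserFS)
    PV /dev/sdc1 -> VG datvol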

Last night, one of the PVs in datvol suffered a transient SATA link failure and popped in and out of existence for a while, which caused datvol and the LVs on it to fail. No permanent damage seems to have occurred, though, so I'm not too worried about that. Bringing datvol down and back up restored it, so I guess that part worked as expected.
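(By "bringing it down and up" I mean deactivating and reactivating the whole VG; roughly the following, though I'm typing from memory, so the exact invocation may have differed:

    vgchange -an datvol    # deactivate all LVs in datvol
    vgchange -ay datvol    # reactivate them
)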

What concerns me a little, however, is that ravol was also oddly affected by the failure of datvol. At times, the filesystems on it would hang for seconds at a time, and in between, XFS intermittently logged messages like

May 28 14:58:58 nerv kernel: [30350.996032] xfs_force_shutdown(dm-33,0x1) called from line 335 of file /build/buildd-linux-2.6_2.6.32-38-amd64-bk66e4/linux-2.6-2.6.32/debian/build/source_amd64_none/fs/xfs/xfs_rw.c.  Return address = 0xffffffffa01df02c

or

May 28 14:51:38 nerv kernel: [29911.468028] Filesystem "dm-33": xfs_log_force: error 5 returned.

Once I brought datvol down and back up again, ravol stopped misbehaving, but I don't really understand why this happened in the first place. Why would ravol be affected at all by what happens on datvol? Shouldn't the two volume groups be isolated from each other?
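For what it's worth, as far as I can tell the two VGs share no underlying devices. Something along these lines shows no overlap (the LV name in the last command is just an example):

    pvs -o pv_name,vg_name              # which PVs belong to which VG
    lvs -o vg_name,lv_name,devices      # which devices back each LV
    dmsetup deps /dev/mapper/ravol-a    # dm dependencies for one LV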

--

Fredrik Tolf

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

