Re: dmraid comments and a warning


 



On Mon, 2006-02-06 at 21:02 -0500, Peter Jones wrote:
> On Mon, 2006-02-06 at 13:08 -0700, Dax Kelson wrote:
> > The standard root=LABEL=/ was used on the kernel command line and what
> > happened is that it booted up to one side of the mirror. All the updates
> > and new packages (including a new kernel install which modified the
> > grub.conf) activity just happened on that one side of the mirror.
> 
> This should be fixed in the current rawhide tree.

And now it uses root=/dev/mapper/$DEV ?

> > When I rebooted, GRUB read a garbled grub.conf because at that stage it
> > *is* using an 'activated' RAID (via the RAID BIOS support). I couldn't
> > boot.
> 
> What do you mean by "garbled" here?  From what you've said so far, at
> this point you should have two perfectly coherent filesystems -- which
> just don't match.  Each of them should have a grub.conf, both of which
> should be properly formed -- one of them just doesn't match one disk.

My guess follows:

GRUB always sees the "activated" RAID because of the BIOS RAID driver.
When it reads the "grub.conf" it is interleaving pieces of the two (now
different) grub.conf files and the result most likely has bogus syntax
and content.

> > So I booted to the rescue environment, which did the right thing and
> > activated the RAID and it even mounted the filesystems. When I went and
> > inspected the files though, anything that got touched while it booted to
> > the one side of the mirror was trashed.
> 
> So the Really Important Thing about BIOS-based raids is that if you
> _ever_ get into the situation where one disk has been written and the
> other hasn't, you need to go into your bios and re-sync the disks.  And
> unfortunately, it's very difficult to automatically detect that you're
> in this situation with bios raid.

Indeed.

> > --Event Two--
> > 
> > With the benefit of the experience of event one, I did a new install,
> > but this time I let Anaconda's disk druid do the "auto setup" thing and
> > create an LVM. I figured that LVM using device mapper and dmraid would
> > always "do the right thing" in regards to *always* using the activated
> > RAID partitions as the PVs.
> 
> What distro were you installing?  AFAIK, both this and your previous
> configuration should have worked if you installed on a tree after
> January 9th or so.  That'd mean test2 should have been ok.

Jan 14th 2006 rawhide for event one, and a Jan 14th 2006 initial install
with yum updates every couple of days for event two.

> (I haven't really looked at upgrades yet; hopefully very soon even
> though it's not really possible to be "upgrading" from a fc4 dmraid
> setup.)

I'm not upgrading distros. Just keeping my rawhide install current.

> > On bootup I noticed an error flash by something to the effect of "LVM
> > ignoring duplicate PV".
> 
> Ok, so this means one of several possible things:
> 
> 1) you're using lvm2 < 2.02.01-1

I installed rawhide on Jan 14th and it has been OK (including updates)
until a few days ago.

> 2) there's no entry for the dm device in /etc/blkid.tab
> 3) for some reason, the priority isn't set on the dm device
> in /etc/blkid.tab
> 4) there's no dm rules in your initrd

See below.
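
For what it's worth, the quick checks that would confirm or rule out the
first three are simple -- a sketch of what I'd run from the rescue
environment:

  rpm -q lvm2                      # is it >= 2.02.01-1 ?
  grep '/dev/dm-' /etc/blkid.tab   # are there entries for the dm devices?
  grep 'PRI=' /etc/blkid.tab       # and do those entries carry a priority?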

> I think that's actually the whole list of practical reasons you'd get to
> this point, but it's always possible I've overlooked something.

I booted to the rescue environment with a Jan 14th boot.iso and NFS
tree. The rescue environment properly activated the dmraid, and
"pvdisplay" showed "/dev/mapper/nvidia-foo".

I looked inside the two initrd files I had:

2.6.15-1.1884 = dm commands inside "init"
2.6.15-1.1889 = no dm commands inside "init" -- dated Feb 4th on my box
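
For reference, this is roughly how I looked: initrds of this vintage are
just gzipped cpio archives, so (paths as on my box):

  mkdir /tmp/initrd-1884 && cd /tmp/initrd-1884
  zcat /boot/initrd-2.6.15-1.1884.img | cpio -id --quiet
  grep -i dm init    # 1884 shows the dm setup commands; the same grep
                     # on the unpacked 1889 image comes up empty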

> One interesting note is that given any of these you should be getting
> the same disk mounted each time.  Which means there's a good chance that
> sda and sdb are both fine, one of them just happens to represent your
> machine 3 weeks ago.

It installed OK on Jan 14th, and it was successfully booting and using
the dmraid until (I think) Feb 4th.

> Do you still have this disk set, or have you wiped it and reinstalled
> already?  If you've got it, I'd like to see /etc/blkid.tab from either
> disk (both if possible).

Since the / filesystem is in an LVM LV sitting on top of a dmraid
partition PV, it seems non-trivial to force the LV back onto one half of
the mirror or the other so that the two sets of files can be examined
separately. If you know a way, let me know.
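
(The only approach that comes to mind -- untested, and I'm not sure it's
safe -- would be to boot rescue media without activating the dmraid and
hide one disk from LVM with a device filter in the devices section
of /etc/lvm/lvm.conf, something like:

  filter = [ "r|^/dev/sdb.*|", "a|.*|" ]
  # then: lvm vgscan; lvm vgchange -ay VolGroup00

so the VG comes up using only sda's copy of the PV. But I'd rather hear
the right way to do it.)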

When I booted to the rescue environment it activated the dmraid and LVM
and I was able to get this /etc/blkid.tab:

<device DEVNO="0xfd01" TIME="1139069826" PRI="40" TYPE="swap">/dev/dm-1</device>
<device DEVNO="0xfd05" TIME="1137182541" PRI="40" TYPE="swap">/dev/dm-5</device>
<device DEVNO="0xfd02" TIME="1137182541" PRI="40" TYPE="ntfs">/dev/dm-2</device>
<device DEVNO="0xfd04" TIME="1137182541" PRI="40" UUID="faffb8d3-2562-4489-a1f8-a7e0077e1e6c" SEC_TYPE="ext2" TYPE="ext3">/dev/dm-4</device>
<device DEVNO="0x0801" TIME="1137182541" TYPE="ntfs">/dev/sda1</device>
<device DEVNO="0x0802" TIME="1139162151" LABEL="/boot" UUID="f49b0225-bdd4-430a-a3b0-f0f7c20daaff" SEC_TYPE="ext2" TYPE="ext3">/dev/sda2</device>
<device DEVNO="0x0811" TIME="1137182541" TYPE="ntfs">/dev/sdb1</device>
<device DEVNO="0x0812" TIME="1137182541" LABEL="/boot" UUID="f49b0225-bdd4-430a-a3b0-f0f7c20daaff" SEC_TYPE="ext2" TYPE="ext3">/dev/sdb2</device>
<device DEVNO="0x0813" TIME="1137182541" TYPE="swap">/dev/sdb3</device>
<device DEVNO="0xfd03" TIME="1137182541" TYPE="swap">/dev/dm-3</device>
<device DEVNO="0xfd01" TIME="1139162137" TYPE="swap">/dev/VolGroup00/LogVol01</device>


> > There need to be more checks in place to prevent booting off of one
> > half of the mirror, or at a minimum only allowing a read-only boot on
> > one side of the mirror. Dead systems are no fun. Losing your personal
> > data is hell.
> 
> Well, we should have the appropriate checks there at this point -- so
> I'd be curious to find out exactly which versions you installed with.
> It could be that one of the checks was introduced after you installed,
> and the "yum update" process caused it to believe it was *not* a raid
> system.

As I noted above, I discovered that the 1884 initramfs was OK and had dm
activation commands, but the 1889 initramfs did not. Why the change? I
don't know. I've only run yum on the box and haven't touched the LVM or
device mapper config myself.
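
For anyone else who lands in this state, the obvious thing to try is
regenerating that initrd and seeing whether the dm commands come back
(plain mkinitrd, same version string as above):

  mkinitrd -f /boot/initrd-2.6.15-1.1889.img 2.6.15-1.1889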

> (I haven't been extensively checking to make sure every daily rawhide
> would work perfectly as an update from the previous one, just that
> they'd install if possible...)
> 
> > This isn't purely a Linux problem. Any operating system using fake RAID1
> > needs to be robust in this regard. I saw a Windows box using 'fake'
> > motherboard RAID and the motherboard BIOS got flashed which reset the
> > "Use RAID" setting to 'off'. Then Windows booted off of half the RAID.
> 
> That's interesting.  It means there's some way to query the BIOS to tell
> if it's installed the int13 "raid" hook or not.  I wish I knew what that
> magic is.

Are you sure that's what it means? The motherboard BIOS upgrade turned
off RAID and Windows still booted. That wasn't surprising. The writes to
one side of the mirror and the subsequent re-activation of the mirror
without a proper re-sync in the RAID bios utility caused total foobage.

> > The rules are:
> 
> > 1. Don't boot off half of the RAID1 in read-write mode
> 
> Yeah, we definitely still need some fallback stuff here.

Excellent. We don't want users complaining that FC5 ate their data.

> > 2. If rule 1 is violated, don't ever again boot using the RAID1
> > - If you can abide by rule 2, you can do so indefinitely
> 
> This isn't enforceable in any meaningful way in the software.  In fact,
> it's scarcely even detectable currently :/

Agreed. It is more of an observation.

> > 3. There is no way to recover from a violated rule 1 without
> > reinstalling.
> 
> That's not the case -- you can go into the bios and sync from the
> "newer" disk to the older one.  Or if your bios is total junk, you can
> boot some other media and (carefully) re-sync each partition with "dd".

This should have occurred to me. Since my RAID BIOS utility is rather
limited (junk, as you say), I overlooked this good suggestion (also noted
by Reuben Farrelly).
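
For the archives, this is the sort of dd re-sync Peter describes, using
the partitions from the blkid.tab above and assuming sda turns out to be
the half with the current data -- double-check which disk is which before
running anything like this, from rescue media with the RAID NOT activated
and nothing mounted:

  # copy the current half over the stale half, one partition at a time,
  # and repeat for the remaining partitions
  dd if=/dev/sda1 of=/dev/sdb1 bs=1M
  dd if=/dev/sda2 of=/dev/sdb2 bs=1M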

Dax Kelson
Guru Labs

