Re: FC5 test3 -- dmraid broken?

On Wed, 2006-02-22 at 09:18 -0500, Peter Jones wrote:
> On Tue, 2006-02-21 at 22:55 -0700, Dax Kelson wrote:
> 
> > I added echoes such as "about to dm create" and then some "sleep 5" after
> > each of those commands.
> >
> > There is zero output from mkdmnod on down until the "lvm vgscan" runs.
> 
> Well, that means nothing thinks it's not working.  Not an encouraging
> sign :/

It used to work when I installed rawhide last month.

I guess there is no verbose mode? 
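
For what it's worth, adding that kind of instrumentation is easy, since the
FC5 initrd is just a gzipped cpio archive; something like this works (image
name and paths are illustrative, not necessarily the exact ones used here):

mkdir /tmp/initrd && cd /tmp/initrd
zcat /boot/initrd-$(uname -r).img | cpio -idm
vi init     # add the "echo ..." and "sleep 5" lines around the dm commands
find . | cpio -o -H newc | gzip -9 > /boot/initrd-debug.img

Then point a grub entry at initrd-debug.img and reboot.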

> > It produces this output:
> > 
> > device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@xxxxxxxxxx
> >   Reading all physical volumes. This may take a while...
> >   No volume groups found
> >   Unable to find volume group "VolGroup00"
> > ...
> > 
> > Booting into the rescue environment the dm raid is brought up and LVM
> > activated automatically and correctly.
> 
> Hrm.  If you run "dmsetup table" from this environment, does the output
> match the "dm create" line in the initrd?
> 
> It's almost as if lvm isn't checking the dm volumes, but that shouldn't
> be the case with even remotely recent lvm2.

It does match. Here is the output from dmsetup table inside the rescue
environment.

nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63

As a reference, here is what is in the initramfs init file:

dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
dm partadd nvidia_hcddcidd
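
(Those two nash commands should amount to the same thing as running the
following by hand with dmsetup from the rescue shell; the kpartx line is only
there to mimic "dm partadd" and may or may not be on the rescue image:)

dmsetup create nvidia_hcddcidd --table "0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0"
kpartx -a /dev/mapper/nvidia_hcddcidd
lvm vgscan
lvm vgchange -ay VolGroup00

If that sequence brings VolGroup00 up from the rescue shell but the same
commands in the initrd do not, the table itself is clearly not the problem.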

> > Incidentally, in the rescue environment I chrooted into my root filesystem
> > and brought up my network interface (/etc/init.d/network start), and ran
> > yum -y update.
> > 
> > There were about 40 packages downloaded, but every rpm install attempt
> > puked out with errors from the preinstall scripts. Unsurprisingly,
> > running rpm -Uvh /path/to/yum/cache/kernel*rpm resulted in the same
> > error. :(
> 
> This could be related, but my gut reaction says it's not caused by your
> raid problems.  Obviously it's still bad.

Indeed. And it looks like Jeremy Katz just fixed that.
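
For anyone wanting to try the same thing, something along these lines should
work from the rescue shell (assuming anaconda mounted the installed system at
/mnt/sysimage; the bind mounts may not all be strictly needed):

mount --bind /dev  /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys  /mnt/sysimage/sys
chroot /mnt/sysimage
/etc/init.d/network start
yum -y update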

Now if I can get control-c working and ssh/scp able to grab a terminal in
the rescue environment, my complaints with it will be gone.

Dax Kelson
Guru Labs

-- 
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list
