Re: mdadm fails to start raid in pv fc16 DomU on old host

On Fri, Mar 30, 2012 at 12:05:04PM +1100, Virgil wrote:
> Hi Konrad,
> 
> Thanks for the heads up.
> 
> My main motivation for just plugging the FC15 kernel into the FC14 machine(s) 
> is that 3 of the hosts live in remote geographies. They'll probably live the
> rest of their lives as FC14 machines.
> 
> The graphics adaptor is:
> 05:05.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)

Ah, then you might want to wait until F17 ships with 3.3. In 3.3 I've added
TTM DMA pool code that can work with those 32-bit PCI cards.
> 
> Thanks again.

Sure!
> V
> 
> On Thu, 29 Mar 2012 10:51:14 AM Konrad Rzeszutek Wilk wrote:
> > On Thu, Mar 29, 2012 at 01:38:38PM +1100, Virgil wrote:
> > > Hi Konrad,
> > > 
> > > Firstly, thanks for the reply.
> > > 
> > > What I've ended up doing is blindly forcing the FC15 kernel in.
> > > 
> > > Linux rich.wwrich.xxx 2.6.42.9-2.fc15.x86_64 #1 SMP Mon Mar 5 20:55:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
> > > 
> > > This has worked and resolved my issue, as the host is now advertising flush
> > > diskcache.
> > > 
> > > [    1.051933] blkfront: xvda: flush diskcache: enabled
> > > [    1.252959]  xvda: xvda1 xvda2
> > > [    1.297201] blkfront: xvdb: flush diskcache: enabled
> > > [    1.303110]  xvdb: xvdb1 xvdb2
> > > [    1.565645] md: bind<xvda1>
> > > [    1.668360] md: bind<xvdb1>
> > > [    1.691897] md: raid1 personality registered for level 1
> > > [    1.696683] bio: create slab <bio-1> at 1
> > > [    1.700177] md/raid1:md127: active with 2 out of 2 mirrors
> > > [    1.701113] md127: detected capacity change from 0 to 7516180480
> > > [    1.974317] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)
> > > [    2.065875] dracut: Checking ext4: /dev/disk/by-label/rootfs
> > > [    2.067090] dracut: issuing e2fsck -a  /dev/disk/by-label/rootfs
> > > [    2.147305] dracut: rootfs: clean, 42918/454272 files, 417708/1835005 blocks
> > > [    2.149062] dracut: Remounting /dev/disk/by-label/rootfs with -o ro
> > > [    2.182952] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)
> > > [    2.195483] dracut: Mounted root filesystem /dev/md127
> > > [    2.340884] dracut: Switching root
> > > 
> > > As a side issue, I also recompiled Xen 4.1.2 on fc14 and installed it.
> > > Everything worked *except* libvirtd: no domUs were listed. So I recompiled
> > > FC15's libvirtd too. No go.
> > I think F16 has them fixed.
> > 
> > > I just backed out. Only the FC15 kernel is left. All running well now.
> > > 
> > > Oh, and the kernel also works great on bare metal, except that under Xen
> > > the graphics went nuts. Adding nomodeset overcomes this.
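> > > 
> > > (For completeness, roughly what that looks like in grub legacy's
> > > /boot/grub/grub.conf on the FC14 Dom0 -- the kernel/initrd names and root
> > > device below are only illustrative:)
> > > 
> > >   # Xen entry in /boot/grub/grub.conf; nomodeset goes on the dom0
> > >   # kernel's "module" line, not on the xen.gz line
> > >   kernel /xen.gz
> > >   module /vmlinuz-2.6.42.9-2.fc15.x86_64 ro root=LABEL=rootfs nomodeset
> > >   module /initramfs-2.6.42.9-2.fc15.x86_64.img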
> > 
> > Which graphics is this? Linux 3.3 has some new fixes for this so if you use
> > F17, the issue should disappear.
> > 
> > > The last thing I might try is to recompile the FC15 kernel on an FC14
> > > host to make it easy to install via rpm (assuming it can compile).
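> > > 
> > > (Roughly, assuming the matching src.rpm is pulled from an F15 updates
> > > mirror -- the package name here is just illustrative:)
> > > 
> > >   # on the FC14 host
> > >   yum-builddep kernel-2.6.42.9-2.fc15.src.rpm      # install build dependencies
> > >   rpmbuild --rebuild kernel-2.6.42.9-2.fc15.src.rpm
> > >   # binary rpms typically end up under ~/rpmbuild/RPMS/x86_64/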
> > > 
> > > On Mon, 26 Mar 2012 01:17:49 PM Konrad Rzeszutek Wilk wrote:
> > > > On Mon, Mar 19, 2012 at 01:35:19PM +1100, Virgil wrote:
> > > > > I'm having a problem with mdraid running in a DomU. The issue is that
> > > > > mdraid declares one leg of the raid to have failed (when there's
> > > > > actually nothing wrong).
> > > > > 
> > > > > DomU is fc16 - 3.2.2-1.fc16.x86_64
> > > > > Dom0 is fc14 - 2.6.32.26-174.2.xendom0.fc12.x86_64
> > > > > 
> > > > > The same DomU running on Dom0 fc16 - 3.2.7-1.fc16.x86_64 runs
> > > > > perfectly.
> > > > > 
> > > > > This appears to be a known issue; however, the resolution (which
> > > > > seems to be to disable barriers on the fly) doesn't seem to work in
> > > > > this case.
> > > > Hm, that is true - it wouldn't, as the workarounds are for
> > > > filesystems.
> > > > 
> > > > But perhaps - is there a way to turn barriers off in the raid system?
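> > > > 
> > > > (For reference, the filesystem-level workaround is just an ext4 remount;
> > > > a sketch, with the root-on-md127 layout from this thread assumed:)
> > > > 
> > > >   # ext4 only -- this does not cover md's own superblock writes
> > > >   mount -o remount,nobarrier /
> > > > 
> > > > As far as I know md itself does not expose a comparable knob, which is
> > > > presumably why it still trips over the failed empty barrier write.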
> > > > 
> > > > > My question is: Is it possible to pass a parameter to the blkfront
> > > > > driver to ask it not to enable barriers during initialization? Or is
> > > > > there another workaround?
> > > > 
> > > > Not that I know of. You could back-port the proper fix to 2.6.32
> > > > (or just "Fix" the older 2.6.32 to not advertise feature-barrier).
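> > > > 
> > > > (To check what the old blkback is actually advertising, something along
> > > > these lines from the FC14 Dom0 -- the domain id and virtual device id
> > > > below are made-up examples:)
> > > > 
> > > >   # backend node for DomU id 3, vbd 51712 (xvda)
> > > >   xenstore-ls /local/domain/0/backend/vbd/3/51712
> > > >   # a 2.6.32-era blkback should show "feature-barrier = 1" here, while
> > > >   # the newer kernels publish "feature-flush-cache = 1" instead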
> > > > 
> > > > > [    1.033058] blkfront: xvda: barrier: enabled
> > > > > [    1.099153]  xvda: xvda1 xvda2
> > > > > [    1.102871] blkfront: xvdb: barrier: enabled
> > > > > [    1.130876]  xvdb: xvdb1 xvdb2
> > > > > [    1.292692] md: bind<xvdb1>
> > > > > [    1.413416] md: bind<xvda1>
> > > > > [    1.419411] md: raid1 personality registered for level 1
> > > > > [    1.419836] bio: create slab <bio-1> at 1
> > > > > [    1.419953] md/raid1:md127: active with 2 out of 2 mirrors
> > > > > [    1.419992] md127: detected capacity change from 0 to 7516180480
> > > > > [    1.424562]  md127: unknown partition table
> > > > > [    1.547284] EXT4-fs (md127): barriers disabled
> > > > > [    1.553107] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)
> > > > > [    1.669483] dracut: Checking ext4: /dev/disk/by-label/rootfs
> > > > > [    1.669592] dracut: issuing e2fsck -a  /dev/disk/by-label/rootfs
> > > > > [    1.690595] blkfront: barrier: empty write xvdb op failed
> > > > > [    1.690611] blkfront: xvdb: barrier or flush: disabled
> > > > > [    1.690628] end_request: I/O error, dev xvdb, sector 14682096
> > > > > [    1.690638] end_request: I/O error, dev xvdb, sector 14682096
> > > > > [    1.690646] md: super_written gets error=-5, uptodate=0
> > > > > [    1.690655] md/raid1:md127: Disk failure on xvdb1, disabling device.
> > > > > [    1.690657] md/raid1:md127: Operation continuing on 1 devices.
> > > > > [    1.690677] blkfront: barrier: empty write xvda op failed
> > > > > [    1.690684] blkfront: xvda: barrier or flush: disabled
> > > > > [    1.690696] end_request: I/O error, dev xvda, sector 14682096
> > > > > [    1.690705] end_request: I/O error, dev xvda, sector 14682096
> > > > > [    1.690713] md: super_written gets error=-5, uptodate=0
> > > > > [    1.692991] RAID1 conf printout:
> > > > > [    1.692997]  --- wd:1 rd:2
> > > > > [    1.693002]  disk 0, wo:0, o:1, dev:xvda1
> > > > > [    1.693006]  disk 1, wo:1, o:0, dev:xvdb1
> > > > > [    1.693010] RAID1 conf printout:
> > > > > [    1.693013]  --- wd:1 rd:2
> > > > > [    1.693016]  disk 0, wo:0, o:1, dev:xvda1
> > > > > [    1.702896] dracut: rootfs: clean, 25635/454272 files, 293967/1835005 blocks
> > > > > [    1.703682] dracut: Remounting /dev/disk/by-label/rootfs with -o ro
> > > > > [    1.773347] EXT4-fs (md127): barriers disabled
> > > > > [    1.774552] EXT4-fs (md127): mounted filesystem with ordered data mode. Opts: (null)
> > > > > [    1.797159] dracut: Mounted root filesystem /dev/md127
> > > > > [    1.937620] dracut: Switching root
> > > > > 
--
xen mailing list
xen@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/xen


