Problems installing Fedora 22 on a system with two LVM volume groups

Hi,

I have a system with 4 disks, partitioned as follows:

      # gdisk -l /dev/sda
      GPT fdisk (gdisk) version 1.0.0
      
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sda: 976773168 sectors, 465.8 GiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 43F1E071-26B9-4D53-8BDA-A3D530A2FFDC
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 976773134
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)
      
      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048            4095   1024.0 KiB  EF02  bios
         2            4096        20975615   10.0 GiB    FD00  boot
         3        20975616       976773134   455.8 GiB   FD00  lvm
      
      # gdisk -l /dev/sdb
      GPT fdisk (gdisk) version 1.0.0
      
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdb: 976773168 sectors, 465.8 GiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 2949E789-9EE3-4456-BBCF-604EECD823D3
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 976773134
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)
      
      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048            4095   1024.0 KiB  EF02  bios
         2            4096        20975615   10.0 GiB    FD00  boot
         3        20975616       976773134   455.8 GiB   FD00  lvm
      
      # gdisk -l /dev/sdc
      GPT fdisk (gdisk) version 1.0.0
      
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdc: 11721045168 sectors, 5.5 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): BA3D89D5-BB20-4CA5-9B53-18A1189D825A
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 11721045134
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)
      
      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048     11721045134   5.5 TiB     FD00  vg1
      
      # gdisk -l /dev/sdd
      GPT fdisk (gdisk) version 1.0.0
      
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdd: 11721045168 sectors, 5.5 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): EB695880-E336-4814-87CF-818C37D0939C
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 11721045134
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 2014 sectors (1007.0 KiB)
      
      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048     11721045134   5.5 TiB     FD00  vg1
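
For clarity, the FD00 partitions pair up into three two-disk md arrays, which
the kickstart below reuses (the sdc1/sdd1 pairing is visible in the
'mdadm --assemble' line in program.log further down; the other two pairings
follow from the partition labels and sizes):

      sda2 + sdb2  ->  md array holding /boot (ext4)
      sda3 + sdb3  ->  md array serving as the LVM PV for vg0
      sdc1 + sdd1  ->  md array /dev/md/vg1, serving as the LVM PV for vg1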
      
When I try to reuse this layout with a kickstart like this:

  part raid.0 --noformat --onpart=sda2
  part raid.2 --noformat --onpart=sda3
  part raid.1 --noformat --onpart=sdb2
  part raid.3 --noformat --onpart=sdb3
  part raid.4 --noformat --onpart=sdc1
  part raid.5 --noformat --onpart=sdd1
  raid pv.0 --device=UUID=f7593b3e-6c01-df74-af43-6febfa2a73d7 --noformat
  raid pv.1 --device=UUID=48b2669a-0463-e3ee-4a47-4d3ff89a9662 --noformat
  raid /boot --device=UUID=bdc393c8-22b6-55be-d360-30a7ba44fd0f --fstype=ext4 --label=/boot --useexisting
  volgroup vg0  --noformat
  volgroup vg1  --noformat
  logvol /opt --fstype=ext4 --label=/opt --name=opt --useexisting --vgname=vg0
  logvol / --fstype=ext4 --label=/ --name=root --useexisting --vgname=vg0
  logvol /usr/src --fstype=ext4 --label=/usr/src --name=src --noformat --vgname=vg0
  logvol swap --fstype=swap --name=swap --useexisting --vgname=vg0
  logvol /dvdbackup --fstype=ext4 --label=/dvdbackup --name=dvdbackup --noformat --vgname=vg1
  logvol swap --fstype=swap --name=swap --useexisting --vgname=vg1
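
As a sanity check, the --device=UUID values are just the md array UUIDs with
the grouping changed; e.g. the UUID given for pv.0 matches the array that
mdadm assembles as /dev/md/vg1 in program.log below:

      $ echo f7593b3e-6c01-df74-af43-6febfa2a73d7 | tr -d '-'
      f7593b3e6c01df74af436febfa2a73d7
      $ echo f7593b3e:6c01df74:af436feb:fa2a73d7 | tr -d ':'
      f7593b3e6c01df74af436febfa2a73d7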

Anaconda gets an exception:

      anaconda 22.20.13-1 exception report
      Traceback (most recent call first):
        File "/usr/lib64/python2.7/site-packages/gi/overrides/BlockDev.py", line 384, in wrapped
          raise transform[1](msg)
        File "/usr/lib/python2.7/site-packages/blivet/devices/lvm.py", line 628, in _setup
          blockdev.lvm.lvactivate(self.vg.name, self._name)
        File "/usr/lib/python2.7/site-packages/blivet/devices/storage.py", line 430, in setup
          self._setup(orig=orig)
        File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 661, in execute
          self.device.setup(orig=True)
        File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 362, in processActions
          action.execute(callbacks)
        File "/usr/lib/python2.7/site-packages/blivet/blivet.py", line 162, in doIt
          self.devicetree.processActions(callbacks)
        File "/usr/lib/python2.7/site-packages/blivet/osinstall.py", line 1057, in turnOnFilesystems
          storage.doIt(callbacks)
        File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 196, in doInstall
          turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
        File "/usr/lib64/python2.7/threading.py", line 766, in run
          self.__target(*self.__args, **self.__kwargs)
        File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 244, in run
          threading.Thread.run(self, *args, **kwargs)
      LVMError: Process reported exit code 1280:   Volume group "vg1" not found
        Cannot process volume group vg1
      
      
      Local variables in innermost frame:
      e: g-bd-utils-exec-error-quark: Process reported exit code 1280:   Volume group "vg1" not found
        Cannot process volume group vg1
       (0)
      orig_obj: <function lvm_lvactivate at 0x7fcd8908af50>
      self: <gi.overrides.BlockDev.ErrorProxy object at 0x7fcd890911d0>
      args: ('vg1', 'swap')
      transform: (<class 'GLib.Error'>, <class 'gi.overrides.BlockDev.LVMError'>)
      e_type: <class 'GLib.Error'>
      kwargs: {}
      msg: Process reported exit code 1280:   Volume group "vg1" not found
        Cannot process volume group vg1
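
As an aside, the 1280 looks like a raw wait status rather than an exit code,
i.e. lvm itself exited with status 5:

      $ echo $(( 1280 >> 8 ))
      5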
  
The reason seems to be that anaconda deactivates the second volume group (vg1)
and stops its md array at some point; vg1 is then not visible to 'lvs' until I
run 'pvscan --cache' in tty2. It does show up in program.log, though
(grep 'vg1' /tmp/program.log):

      MD_NAME=lie:vg1
      17:35:32,359 INFO program: stdout[30]: ARRAY /dev/md/vg1  metadata=1.2 UUID=f7593b3e:6c01df74:af436feb:fa2a73d7 name=lie:vg1
      MD_NAME=lie:vg1
      17:35:32,501 INFO program: stdout[34]: ARRAY /dev/md/vg1  metadata=1.2 UUID=f7593b3e:6c01df74:af436feb:fa2a73d7 name=lie:vg1
        LVM2_PV_NAME=/dev/md/vg1 LVM2_PV_UUID=Ick4oK-yuuv-Q0e6-7IMy-haTS-ojMY-kTtiJa LVM2_PE_START=1048576 LVM2_VG_NAME=vg1 LVM2_VG_UUID=GND1Fh-9uOJ-VozS-gZHX-c87h-m3HU-uszs5G LVM2_VG_SIZE=6001038196736 LVM2_VG_FREE=1706066706432 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=1430759 LVM2_VG_FREE_COUNT=406758 LVM2_PV_COUNT=1
        LVM2_VG_NAME=vg1 LVM2_LV_NAME=dvdbackup LVM2_LV_UUID=RK6vlx-XFQb-6Fzz-Webs-jcen-k3HJ-icxdr1 LVM2_LV_SIZE=4294967296000 LVM2_LV_ATTR=-wi-a----- LVM2_SEGTYPE=linear
        LVM2_VG_NAME=vg1 LVM2_LV_NAME=swap LVM2_LV_UUID=rxYQGq-2GLg-b0Go-EqCi-09YJ-ay3z-pMyiCc LVM2_LV_SIZE=4194304 LVM2_LV_ATTR=-wi-a----- LVM2_SEGTYPE=linear
      17:35:35,364 INFO program: Running [50] multipath -c /dev/md/vg1 ...
      17:35:35,370 INFO program: stdout[50]: /dev/md/vg1 is not a valid multipath device path
      17:35:35,567 INFO program: Running [51] multipath -c /dev/mapper/vg1-swap ...
      17:35:35,703 INFO program: Running [52] multipath -c /dev/mapper/vg1-dvdbackup ...
      17:35:35,710 INFO program: Running... e2fsck -f -p -C 0 /dev/mapper/vg1-dvdbackup
      17:35:59,639 INFO program: Running... dumpe2fs -h /dev/mapper/vg1-dvdbackup
      17:35:59,695 INFO program: Running... resize2fs -P /dev/mapper/vg1-dvdbackup
      17:35:59,817 INFO program: Running [53] multipath -c /dev/mapper/vg1-swap ...
      17:35:59,848 INFO program: Running [54] multipath -c /dev/md/vg1 ...
      17:35:59,854 INFO program: stdout[54]: /dev/md/vg1 is not a valid multipath device path
      17:36:01,812 INFO program: Running [61] lvm lvchange -an vg1/swap --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:01,893 INFO program: Running [62] lvm vgchange -an vg1 --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:01,946 INFO program: stdout[62]:   0 logical volume(s) in volume group "vg1" now active
      17:36:01,971 INFO program: Running [63] mdadm --stop /dev/md/vg1 ...
      17:36:02,505 INFO program: stderr[63]: mdadm: stopped /dev/md/vg1
      17:36:08,328 INFO program: Running [81] mdadm --assemble /dev/md/vg1 --run --uuid=f7593b3e:6c01df74:af436feb:fa2a73d7 /dev/sdc1 /dev/sdd1 ...
      17:36:08,443 INFO program: stderr[81]: mdadm: /dev/md/vg1 has been started with 2 drives.
      17:36:08,591 INFO program: Running [82] lvm lvchange -ay vg1/dvdbackup --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:08,635 INFO program: Running... mount -t ext4 -o defaults,ro /dev/mapper/vg1-dvdbackup /mnt/sysimage
      17:36:08,812 INFO program: Running [83] lvm lvchange -an vg1/dvdbackup --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:08,882 INFO program: Running [84] lvm vgchange -an vg1 --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:08,930 INFO program: stdout[84]:   0 logical volume(s) in volume group "vg1" now active
      17:36:08,950 INFO program: Running [85] mdadm --stop /dev/md/vg1 ...
      17:36:09,766 INFO program: stderr[85]: mdadm: stopped /dev/md/vg1
      17:36:32,782 INFO program: Running [86] mdadm --assemble /dev/md/vg1 --run --uuid=f7593b3e:6c01df74:af436feb:fa2a73d7 /dev/sdc1 /dev/sdd1 ...
      17:36:32,818 INFO program: stderr[86]: mdadm: /dev/md/vg1 has been started with 2 drives.
      17:36:32,923 INFO program: Running [87] lvm lvchange -ay vg1/swap --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
      17:36:32,949 INFO program: stderr[87]:   Volume group "vg1" not found
        Cannot process volume group vg1
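
For reference, the recovery in tty2 looks roughly like this (a sketch of what
I type, from memory):

      # on tty2, after anaconda has stopped and re-assembled /dev/md/vg1:
      lvs                # vg1 and its LVs are missing from the output
      pvscan --cache     # re-scan devices and refresh LVM's metadata cache
      lvs                # vg1/dvdbackup and vg1/swap show up again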

      
Does anybody have an idea of how to work around this? I get similar results
when doing an ordinary (interactive) install that reuses the same partitions;
see: https://bugzilla.redhat.com/show_bug.cgi?id=1234994

Regards

Anders Blomdell
  

-- 
Anders Blomdell                  Email: anders.blomdell@xxxxxxxxxxxxxx
Department of Automatic Control
Lund University                  Phone:    +46 46 222 4625
P.O. Box 118                     Fax:      +46 46 138118
SE-221 00 Lund, Sweden



