Re: Installing F26; Question about ks.cfg when raid1 is used

On Sat, 26 Aug 2017 13:06:43 -0500
D&R <dwoody5654@xxxxxxxxx> wrote:

> On Mon, 21 Aug 2017 22:19:05 -0500
> D&R <dwoody5654@xxxxxxxxx> wrote:
> 
> > On Sat, 19 Aug 2017 17:18:53 -0500
> > dwoody5654@xxxxxxxxx wrote:
> >   
> > > On Wed, 9 Aug 2017 14:27:07 -0700
> > > Rick Stevens <ricks@xxxxxxxxxxxxxx> wrote:
> > >     
> > > > On 08/09/2017 12:08 PM, D&R wrote:      
> > > > > On Wed, 9 Aug 2017 12:00:00 -0700
> > > > > Rick Stevens <ricks@xxxxxxxxxxxxxx> wrote:
> > > > >         
> > > > >> On 08/09/2017 11:52 AM, D&R wrote:        
> > > > >>> When I boot into the install there is an error in the destination
> > > > >>> section.
> > > > >>>
> > > > >>> I looked at the debug info in storage.log and there was an
> > > > >>> error saying sdb1 did not exist. But...
> > > > >>>
> > > > >>> When I reboot to F24 then ...
> > > > >>>
> > > > >>> cat /proc/mdstat
> > > > >>>
> > > > >>> md126 : active raid1 sda2[2] sdb2[1]
> > > > >>>       961261568 blocks super 1.2 [2/2] [UU]
> > > > >>>       bitmap: 2/8 pages [8KB], 65536KB chunk
> > > > >>>
> > > > >>> md127 : active raid1 sdb1[1] sda1[2]
> > > > >>>       15368064 blocks super 1.0 [2/2] [UU]
> > > > >>>       bitmap: 1/1 pages [4KB], 65536KB chunk
> > > > >>>
> > > > >>> The section of ks.cfg for hard drive setup is as follows:
> > > > >>>
> > > > >>> ignoredisk --only-use=sda,sdb
> > > > >>> bootloader --location=mbr --boot-drive=sda
> > > > >>>
> > > > >>> # Partition clearing information
> > > > >>> clearpart --none --initlabel
> > > > >>>
> > > > >>> # Disk partitioning information
> > > > >>>
> > > > >>> part raid.6 --fstype=mdmember --noformat --onpart=sda1
> > > > >>> part raid.27 --fstype=mdmember --noformat --onpart=sdb1
> > > > >>> part raid.14 --fstype=mdmember --noformat --onpart=sda2
> > > > >>> part raid.32 --fstype=mdmember --noformat --onpart=sdb2
> > > > >>>
> > > > >>> raid / --device=root --fstype=ext4 --level=raid1 --useexisting
> > > > >>> raid /home --device=home --fstype=ext4 --level=raid1 --noformat --useexisting
> > > > >>>
> > > > >>> I currently have a raid1 setup with 2 drives, sda and sdb.
> > > > >>>
> > > > >>> Since I am using the --useexisting option, do I still need the
> > > > >>> part commands?
> > > > >>>
> > > > >>> The last time I did an upgrade was to F24; I have not found anything
> > > > >>> that says the syntax has changed since then.
> > > > >>>
> > > > >>> Any Ideas?          
> > > > >>
> > > > >> Uhm, when you're booting the install, is it possible that the CD/DVD
> > > > >> you're booting from becomes /dev/sda? If so, then your first hard
> > > > >> drive is /dev/sdb and the second is /dev/sdc and the
> > > > >>
> > > > >> 	ignoredisk --only-use=sda,sdb
> > > > >>
> > > > >> would block using the second hard drive, since it's /dev/sdc at this
> > > > >> time. This is just a wild guess.        
> > > > > 
> > > > > I am booting from an iso file from another computer. As I recall that
> > > > > is what I did when I installed F24 over F22.        
> > > > 
> > > > How are you booting an ISO file from another computer? Is this a
> > > > network kickstart install, where the iso image is located on an NFS or
> > > > CIFS server?
> > > > 
> > > > Whatever it is, can you boot it again without invoking kickstart? If
> > > > you can, open up a command line window and do "fdisk -l", which should
> > > > list the disks the system sees. Verify the devices are the ones you
> > > > think they are. Remember that when you're booting F24 from the hard
> > > > disk, you are absolutely making /dev/sda the first hard drive. When
> > > > booting from the network, a CD/DVD or a bootp server, that may NOT be
> > > > the case and your drive letters may be different, in which case the
> > > > limits in your "ignoredisk" line would prevent finding the second
> > > > drive.
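> > > > To make that concrete, the member devices can be pulled out of
> > > > /proc/mdstat with a little awk. This is just a sketch; the sample
> > > > text below is copied from earlier in the thread, and on a live
> > > > system you would read /proc/mdstat itself:

```shell
# Sample of /proc/mdstat as posted earlier in the thread; on a real
# system use: mdstat="$(cat /proc/mdstat)"
mdstat='md126 : active raid1 sda2[2] sdb2[1]
md127 : active raid1 sdb1[1] sda1[2]'

# Print each array followed by its member partitions, stripping the
# [role] suffix that mdstat appends to each device name.
members="$(printf '%s\n' "$mdstat" | awk '/ : active / {
    printf "%s:", $1
    for (i = 5; i <= NF; i++) { d = $i; sub(/\[[0-9]+\]$/, "", d); printf " %s", d }
    print ""
}')"
printf '%s\n' "$members"
```

> > > > If the names printed in the installer environment do not match the
> > > > sda/sdb names hard-coded in ks.cfg, the "ignoredisk" line will
> > > > exclude the wrong disk.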
> > > 
> > > Sorry it took so long to reply, I was out of town on vacation.
> > > However, I copied the Server ISOs for F24, F25 and F26 to the home
> > > directory on a second computer. The directory listing is:
> > > -rw-r--r--. 1 root root 2401239040 Aug 17 21:33 /home/Fedora-Server-dvd-x86_64-26-1.5.iso
> > > -rw-r--r--. 1 root root 2018508800 Aug 19 14:49 /home/Fedora-Server-dvd-x86_64-25-1.3.iso
> > > -rw-r--r--. 1 root root 1868562432 Aug 19 16:28 /home/Fedora-Server-dvd-x86_64-24-1.2.iso
> > > 
> > > The grub.cfg is set up as:
> > >
> > > menuentry 'Remote Install' {
> > >     load_video
> > >     set gfxpayload=keep
> > >     insmod gzio
> > >     insmod part_msdos
> > >     insmod diskfilter
> > >     insmod mdraid1x
> > >     insmod ext2
> > >     set root='hd0,msdos1'
> > >     echo 'Loading Linux'
> > >     # linux16 /boot/vmlinuz-remote acpi=off audit=0 selinux=0 inst.repo=nfs:10.10.1.2:/home/Fedora-Server-dvd-x86_64-24-1.2.iso ramdisk_size=8192 panic=30
> > >     linux16 /boot/vmlinuz-remote acpi=off audit=0 selinux=0 inst.repo=nfs:10.10.1.2:/home/Fedora-Server-dvd-x86_64-25-1.3.iso ramdisk_size=8192 panic=30
> > >     echo 'Loading initial ramdisk ...'
> > >     initrd16 /boot/initrd-remote.img
> > > }
> > > 
> > > F24 came up in the installer with no error.
> > > F25 came up in the installer with an error 'device already in tree'
> > > F26 came up in the installer with an error 'device already in tree'
> > > 
> > > From an F25 install, fdisk -l shows:
> > > 
> > > Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 4096 bytes
> > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0x0009d086
> > > 
> > > Device     Boot    Start        End    Sectors   Size Id Type
> > > /dev/sda1  *        2048   30738431   30736384  14.7G fd Linux raid autodetect
> > > /dev/sda2       30738432 1953523711 1922785280 916.9G fd Linux raid autodetect
> > > 
> > > 
> > > Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 4096 bytes
> > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0x0009d086
> > > 
> > > Device     Boot    Start        End    Sectors   Size Id Type
> > > /dev/sdb1  *        2048   30738431   30736384  14.7G fd Linux raid autodetect
> > > /dev/sdb2       30738432 1953523711 1922785280 916.9G fd Linux raid autodetect
> > > 
> > > 
> > > Disk /dev/sdc: 7.2 GiB, 7743995904 bytes, 15124992 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0xc3072e18
> > > 
> > > Device     Boot Start      End  Sectors  Size Id Type
> > > /dev/sdc1          16 15124479 15124464  7.2G 83 Linux
> > > 
> > > 
> > > Disk /dev/loop0: 1.9 GiB, 2018508800 bytes, 3942400 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0x50e78d4f
> > > 
> > > Device       Boot Start     End Sectors  Size Id Type
> > > /dev/loop0p1 *        0 3942399 3942400  1.9G  0 Empty
> > > /dev/loop0p2      11236   21875   10640  5.2M ef EFI (FAT-12/16/32)
> > > 
> > > 
> > > Disk /dev/loop1: 405 MiB, 424710144 bytes, 829512 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > 
> > > 
> > > Disk /dev/loop2: 2 GiB, 2147483648 bytes, 4194304 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > 
> > > 
> > > Disk /dev/loop3: 512 MiB, 536870912 bytes, 1048576 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > 
> > > 
> > > Disk /dev/mapper/live-rw: 2 GiB, 2147483648 bytes, 4194304 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > 
> > > 
> > > Disk /dev/mapper/live-base: 2 GiB, 2147483648 bytes, 4194304 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > 
> > > 
> > > Disk /dev/md127: 14.7 GiB, 15736897536 bytes, 30736128 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 4096 bytes
> > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > 
> > > 
> > > Disk /dev/md126: 916.7 GiB, 984331845632 bytes, 1922523136 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 4096 bytes
> > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > 
> > > 
> > > Disk /dev/loop4: 1.9 GiB, 2018508800 bytes, 3942400 sectors
> > > Units: sectors of 1 * 512 = 512 bytes
> > > Sector size (logical/physical): 512 bytes / 512 bytes
> > > I/O size (minimum/optimal): 512 bytes / 512 bytes
> > > Disklabel type: dos
> > > Disk identifier: 0x50e78d4f
> > > 
> > > Device       Boot Start     End Sectors  Size Id Type
> > > /dev/loop4p1 *        0 3942399 3942400  1.9G  0 Empty
> > > /dev/loop4p2      11236   21875   10640  5.2M ef EFI (FAT-12/16/32)
> > > 
> > > David    
> > 
> > Additional info:
> > 
> > Doing some more research I found the following bug report:
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=1225184
> > 
> > I am unsure if it refers to the same problem I am having but I assume they
> > are at least related.
> > 
> > One note: this computer has been running a 32-bit F24 and I was planning to
> > move to 64-bit. I have installed 64-bit F26 on about 10 computers that
> > had 32-bit F24 on them. The installs worked with no problems, but they
> > were also plain single-drive computers.
> > 
> > I did an install using nfs for F24 Server 64 bit with no problem.
> > 
> > I have tried the F25 and F26 versions for Server, netinstall, Workstation.
> > I did this using nfs from another computer and a flash drive. None worked.
> > 
> > At this point I see two options: do a dnf upgrade from F24 to F25 and
> > then from F25 to F26, or install CentOS 7, which I would prefer not to
> > do. Nothing against CentOS; I used it for 4 to 5 years but changed to
> > Fedora because I wanted to deal with incremental changes instead of a
> > flood of changes after running CentOS for 8 to 10 years.
> > 
> > Does anyone have other ideas or workarounds?
> > 
> > How solid is the dnf upgrade process?
> > 
> > Thanks for all the input,
> > 
> > David
> >   
> I did a dnf upgrade from F24 x86_64 to F25. All appeared to go well.
> On reboot it stopped after starting the command scheduler.
> Going to a console (F2) I could log in as my normal user and then run
> startxfce4; from there all looked good.
> But going back to F1, the boot process had not completed.
> I looked at grub.cfg and it looked correct.
> Is there any additional info I need to add?
> 
> Any thoughts?
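> A general way to see what is holding up a stalled boot (a generic systemd
> diagnosis sketch, not specific to this machine) is:

```shell
systemctl list-jobs      # jobs still queued or running at the point of the stall
journalctl -b -p err     # errors logged during the current boot
systemctl get-default    # confirm the expected default target (e.g. graphical.target)
```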
> 
> David

I did a dnf upgrade to F26, with the same result as above. I looked at the
altered config files with rpmconf -a and adjusted as appropriate. On reboot it
came up at the login screen as it should. There were some changes in
/etc/lightdm/lightdm.conf that needed to be merged into the new config
file.
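For anyone following the same path, the supported release-to-release route is
the dnf system-upgrade plugin, one release at a time. A sketch (these commands
must be run as root on the Fedora box itself):

```shell
dnf upgrade --refresh                    # bring the current release fully up to date
dnf install dnf-plugin-system-upgrade    # provides the system-upgrade subcommand
dnf system-upgrade download --releasever=25
dnf system-upgrade reboot                # reboots and applies the downloaded packages
# then repeat the download/reboot steps with --releasever=26
```

Running rpmconf -a afterwards, as described above, catches config files that
changed across the jump.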

David

> 
> > > >       
> > > > > In the setup above it shows raid.<number> (ie. raid.6). Do you know
> > > > > what the number represents? Can it be changed from one install to the
> > > > > next?        
> > > > 
> > > > The "raid" bit of the label simply means they're to be used in a
> > > > software RAID. I have no idea why they're numbered in that manner
> > > > rather than sequentially.
> > > > 
> > > > Right below those "part" definitions, you see "raid" definitions where
> > > > those labels are normally used. In your case,
> > > > 
> > > > 	raid / --device=root --fstype=ext4 --level=raid1 --useexisting
> > > > 
> > > > tells the system to use the first two devices in the "part" section
> > > > (/dev/sda1 and /dev/sdb1) as a RAID1, format it as ext4 and mount it at
> > > > "/". Since no partitions are specified, it uses the first two in the
> > > > "part" section. In reality, that line with all the bits specified would
> > > > be:
> > > > 
> > > > 	raid / --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.27
> > > > 
> > > > If the partitions to use weren't sequential (e.g. you wanted to use the
> > > > first and third partitions), you'd need to specify them explicitly at
> > > > the end of the line:
> > > > 
> > > > 	raid / --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.14
> > > > 
> > > > You should be able to rename the labels in your ks.cfg if you wish, but
> > > > again if your RAID definition doesn't use sequential partitions, make
> > > > sure you specify them appropriately. The labels have no significance
> > > > outside of Anaconda/kickstart as far as I know.
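> > > > Putting that together, the fully explicit form of the RAID section
> > > > of the ks.cfg from this thread would look like this (a sketch; the
> > > > raid.N labels are arbitrary, they just have to match between the
> > > > part and raid lines):

```
part raid.6  --fstype=mdmember --noformat --onpart=sda1
part raid.27 --fstype=mdmember --noformat --onpart=sdb1
part raid.14 --fstype=mdmember --noformat --onpart=sda2
part raid.32 --fstype=mdmember --noformat --onpart=sdb2

# Explicit member lists remove any dependence on ordering
raid /     --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.27
raid /home --device=home --fstype=ext4 --level=raid1 --noformat --useexisting raid.14 raid.32
```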
> > > > ----------------------------------------------------------------------
> > > > - Rick Stevens, Systems Engineer, AllDigital    ricks@xxxxxxxxxxxxxx -
> > > > - AIM/Skype: therps2        ICQ: 226437340           Yahoo: origrps2 -
> > > > -                                                                    -
> > > > -       Charter Member of the International Sarcasm Society          -
> > > > -                "Yeah, like we need YOUR support!"                  -
> > > > ----------------------------------------------------------------------
> > > > _______________________________________________
> > > > users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
> > > > To unsubscribe send an email to
> > > > users-leave@xxxxxxxxxxxxxxxxxxxxxxx      


