On Sun, Jan 1, 2017 at 10:10 AM, Mayavimmer <mayavimmer@xxxxxxxxx> wrote:
> On 01/01/2017 17:23, Matthew Miller wrote:
>> On Sun, Jan 01, 2017 at 10:10:55AM +0100, Mayavimmer wrote:
>>> I tried to do an identical second install on the same machine, but
>>> the installer Anaconda gives an error about being unable to set a
>>> root partition.
>>
>> This isn't _forbidden_, but it also isn't something we test
>> officially -- and in fact I'm not sure if anyone has actually tested
>> it ever. So, while I don't see why it couldn't be made to work, I
>> also am not surprised to hear it doesn't.
>
> I tested about 10 F25 installs yesterday, plus 2 Rosalinux R8 and 2
> Mint 18, on an old server with 2GB RAM and a new laptop with 12GB RAM.
> All 3 OS's had to deal with previously installed versions of the same,
> except a couple of cases where I restarted from an empty disk. Only
> the F25's gave me problems, on both boxes and in different,
> independent ways.
>
> An interesting behavior, as I explained a few posts ago, happens when
> you install a second or a third F25, all in the standard LVM device
> configuration. They seem to work OK, though there is no indication in
> the grub menu of which one you are running. The problem appears when
> you install a new F25 with the /boot partition _inside_ the LVM
> container, which seems to work. Except, upon reboot the others are
> gone!

A possible explanation for this is an old bug: the installer doesn't make
all LVs active, therefore grub2-mkconfig won't find them and won't create
boot entries for them.
https://bugzilla.redhat.com/show_bug.cgi?id=825236

However annoying that is, about as suboptimal is the way grub2-mkconfig
makes generic boot entries for other OS's rather than just pointing to
their "native" grub.cfg using the configfile command. This forwarding
command is a vastly better workflow than the grub.cfg of Distro X
becoming responsible for Distro Y. When Distro Y gets a kernel update,
only Distro Y's grub.cfg is updated; so if you're using a configfile
forwarding workflow (roughly like the sketch further down in this
message), you'll see that new kernel automatically, whereas if you
depend on GRUB as designed (including as it works in Fedora), you're
totally stuffed: Distro X's grub.cfg won't reflect the change until you
run grub2-mkconfig.

> Also I tried my preferred configuration: Btrfs RAID1 over LVM, which
> should give the best of both worlds: awesome scrub autorepair and
> proper pooling of same-disk spare partitions! The installer barks. It
> seems to think that if I want to use Btrfs as a raid fs I also have to
> use it as a volume manager. According to the Fedora info mentioned a
> few posts back, this should only cost a slight performance hit, not a
> consistent one as somebody said. Is it true that the installer cannot
> put a Btrfs fs on an LVM partition? I could have missed something.

The Fedora installer will not put Btrfs on either LVM or md RAID. You
could use blivet-gui to get the layout you want in advance, and the
installer should recognize all of those pieces (blivet-gui and anaconda
both leverage python-blivet and libblockdev to recognize and create
storage stacks) and let you set them up as mount points. For a
pre-created Btrfs, the installer will force the creation of a new Btrfs
subvolume for the "/" mount point; otherwise it will let you reuse
existing subvolumes and file systems. Blivet-gui is supposedly going to
be integrated into the Fedora 26 installer as an advanced partitioning
option.
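To make that concrete, here is a rough sketch (untested against this
exact installer, and every device name and size below is a placeholder)
of pre-creating a Btrfs raid1 over LVM stack from a live image before
running anaconda, assuming /dev/sdX5 and /dev/sdY7 are two spare
partitions on different disks:

  # create the LVM layer across the two spare partitions
  pvcreate /dev/sdX5 /dev/sdY7
  vgcreate vg_spare /dev/sdX5 /dev/sdY7
  # pin one LV to each underlying disk so the Btrfs raid1 actually
  # survives a disk failure
  lvcreate -L 18G -n rootA vg_spare /dev/sdX5
  lvcreate -L 18G -n rootB vg_spare /dev/sdY7
  # Btrfs raid1 (data and metadata) across the two LVs -- the layout
  # the installer won't create by itself
  mkfs.btrfs -m raid1 -d raid1 /dev/vg_spare/rootA /dev/vg_spare/rootB

The installer should then let you assign that Btrfs to "/" (with a new
subvolume), while /boot still has to live somewhere it approves of.

And going back to the configfile idea: the kind of forwarding entry I
mean is roughly the following, dropped into /etc/grub.d/40_custom on the
distro that owns the boot sector and then picked up by grub2-mkconfig.
The title, UUID and module list here are made up; point the search line
at the other install's /boot filesystem. And if that /boot (or root) is
on LVM, you may also need the LVs activated first (vgchange -ay) before
any of the probing or regeneration will see it.

  menuentry 'Fedora 25 (second install)' {
          insmod part_msdos        # or part_gpt on a GPT disk
          insmod ext2
          # filesystem UUID of the other install's /boot (made up)
          search --no-floppy --fs-uuid --set=root 1234abcd-5678-ef90-aaaa-bbbbccccdddd
          # hand over to that install's own grub.cfg (path assumes a
          # separate /boot partition)
          configfile /grub2/grub.cfg
  }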
The installer is supposed to enforce /boot being on a standard partition
or md RAID, and not allow it to be in LVM or Btrfs.

>
>> Can I ask what you are aiming to accomplish with this? There might be
>> a better way -- virtualization or containers, perhaps.
>
> I have a remote customer with an old server with a Rosalinux and Mes5
> on top of a 2x2TB ext4 over raid. I cannot easily access the location
> and need to do most maintenance remotely. They could only be trusted
> to reboot the machine at most, or perhaps select a different boot
> device from the old BIOS. The old OS is failing but cannot suffer
> downtime. I was hoping to install two different F25's in the small
> 20GB partition left unraided on the second disk: /dev/sdb17. Reboot to
> F25. Check everything. Then do the rest of the work slowly, carefully
> and incrementally from remote: slowly copying files, enlarging
> partitions, and finally online-raiding the root partition to the other
> disk, attaining full redundancy. With at most a single remote reboot,
> or possibly none, and no downtime. There is more, but this already can
> only be done, I believe, with Btrfs (ZFS) RAID1 over LVM volumes.

If the OS itself is failing, you have no choice but to accept a moment
of downtime to reboot new binaries. If it were just a case of a hard
drive dying, migration to replacement hardware can be done with either
Btrfs or LVM, independently. For LVM setups it's pvcreate > vgextend >
pvmove > vgreduce. For Btrfs it's either 'btrfs replace', or more
conventionally 'btrfs dev add' followed by 'btrfs dev remove'. The
former requires a replacement at least as large as the original device,
whereas the add/remove method will work even if the replacement is
smaller. The fs is resized automatically in all cases. (Rough command
sketches for both are at the very end of this message.)

--
Chris Murphy
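A minimal sketch of both migration paths, purely for illustration:
/dev/old and /dev/new stand in for the failing and replacement devices,
"myvg" for the volume group and /mnt for the Btrfs mount point, so none
of these names come from the real setup.

  # LVM: everything stays online while extents move to the new PV
  pvcreate /dev/new
  vgextend myvg /dev/new
  pvmove /dev/old              # moves all extents off the old PV
  vgreduce myvg /dev/old

  # Btrfs, one step ('replace' needs a target at least as large):
  btrfs replace start /dev/old /dev/new /mnt

  # Btrfs, add/remove (works even if the new device is smaller):
  btrfs dev add /dev/new /mnt
  btrfs dev remove /dev/old /mnt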