On 01/01/2017 17:23, Matthew Miller wrote:
> On Sun, Jan 01, 2017 at 10:10:55AM +0100, Mayavimmer wrote:
>> I tried to do an identical second install on the same machine, but the
>> installer Anaconda gives an error about being unable to set a root
>> partition.
>
> This isn't _forbidden_, but it also isn't something we test officially —
> and in fact I'm not sure if anyone has actually tested it ever. So,
> while I don't see why it couldn't be made to work, I also am not
> surprised to hear it doesn't.

I tested about 10 F25 installs yesterday, plus 2 Rosalinux R8 and 2 Mint
18, on an old server with 2GB RAM and a new laptop with 12GB RAM. All
three OSes had to deal with previously installed versions of themselves,
except in a couple of cases where I restarted from an empty disk. Only
the F25 installs gave me problems, on both boxes and in different,
independent ways.

An interesting behavior, as I explained a few posts ago, happens when
you install a second or a third F25, all in the standard LVM device
configuration. They seem to work OK, though there is no indication in
the grub menu of which one you are running. The real problem appears
when you install a new F25 with the /boot partition _inside_ the LVM
container, which seems to work. Except, upon reboot, the others are
gone! (What I try in order to tell the installs apart and rebuild the
menu is sketched at the end of this message.)

I also tried my preferred configuration: Btrfs RAID1 over LVM, which
should give the best of both worlds: awesome scrub auto-repair plus
proper pooling of spare partitions on the same disks! The installer
barks. It seems to think that if I want to use Btrfs as a RAID
filesystem I also have to use it as a volume manager. According to the
Fedora info mentioned a few posts back, this should only cost a slight
performance hit, not a substantial one as somebody said. Is it true
that the installer cannot put a Btrfs filesystem on an LVM logical
volume? I could have missed something. (A by-hand sketch of the layout
I mean is also at the end of this message.)

> Can I ask what you are aiming to accomplish with this? There might be a
> better way — virtualization or containers, perhaps.

I have a remote customer with an old server running Rosalinux and Mes5
on top of 2x2TB ext4 over RAID. I cannot easily access the location and
need to do most maintenance remotely. The people there can be trusted
to reboot the machine at most, or perhaps to select a different boot
device from the old BIOS. The old OS is failing but cannot suffer
downtime.

I was hoping to install two different F25s in the small 20GB partition
left unraided on the second disk: /dev/sdb17. Reboot into F25, check
everything, then do the rest of the work slowly, carefully and
incrementally from remote: copying files, enlarging partitions and
finally raiding the root partition online onto the other disk,
attaining full redundancy. All with at most a single remote reboot,
possibly none, and no downtime. (The online conversion step is also
sketched at the end.)

There is more, but I believe this can only be done with Btrfs (or ZFS)
RAID1 over LVM volumes. It's not crazy; I have done similar things in
the past and the customer never complained.

Oh, I would have preferred a more stable environment, like RedHat or
CentOS, but I need a recent kernel and btrfs-tools to do this.

Going for coffee, back in an hour.
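
As promised, the sketches. First, telling the multiple F25s apart and
getting the vanished entries back into the grub menu. This is what I
try from whichever install currently owns /boot; it assumes a BIOS
machine with os-prober installed and the other root filesystems still
intact, and I have not yet verified that it recovers entries whose
/boot lives inside LVM:

  # which root is the running system actually using?
  findmnt /

  # let grub rediscover the other installed systems and rebuild the menu
  os-prober
  grub2-mkconfig -o /boot/grub2/grub.cfg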
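
Second, roughly what I mean by Btrfs RAID1 over LVM, done by hand
rather than through Anaconda. Only a sketch: the device names
(/dev/sda3, /dev/sdb17) and the VG/LV names are placeholders, and I
have not tried getting the F25 installer to accept a layout like this:

  # one PV per disk, both in the same VG
  pvcreate /dev/sda3 /dev/sdb17
  vgcreate vg_pool /dev/sda3 /dev/sdb17

  # pin one LV to each PV, so the two Btrfs mirror legs
  # really sit on different disks
  lvcreate -n leg_a -l 100%PVS vg_pool /dev/sda3
  lvcreate -n leg_b -l 100%PVS vg_pool /dev/sdb17

  # Btrfs RAID1 for both data and metadata across the two LVs
  mkfs.btrfs -d raid1 -m raid1 /dev/vg_pool/leg_a /dev/vg_pool/leg_b
  mount /dev/vg_pool/leg_a /mnt

Pinning each LV to a specific PV is the whole point: if both legs ended
up on the same physical disk, the mirror would be useless.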
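
Third, the no-downtime step: growing a single-device Btrfs root into a
mirror while it stays mounted and in use. Again a sketch, with
/dev/sda3 standing in for whatever partition eventually gets freed on
the first disk:

  # add the freed partition on the other disk to the mounted root fs
  btrfs device add /dev/sda3 /

  # convert data and metadata to RAID1 across both devices, online
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /

  # scrub then verifies (and auto-repairs) both copies
  btrfs scrub start /

That is the full redundancy I was talking about.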