Re: Fedora 33 System-Wide Change proposal: Make btrfs the default file system for desktop variants

On Sat, Jun 27, 2020 at 9:25 PM Gabriel Ramirez <gabrielo@xxxxxxxxxx> wrote:
>
> On 6/27/20 9:06 PM, Chris Murphy wrote:
> > On Sat, Jun 27, 2020 at 7:32 PM Garry T. Williams <gtwilliams@xxxxxxxxx> wrote:
> >> On Saturday, 27 June 2020 17:29:23 EDT Chris Murphy wrote:
> >>> For btrfs, it is either 'single' or 'raid0' profile for data, but
> >>> 'raid1' for metadata (the file system itself).
> >>>
> >>> I need to test it or maybe someone beats me to it by looking at the
> >>> code. But either way it's equal to or better than the current default.
> >> I just did that install (KDE) and it was raid0 for data (raid1 for
> >> metadata).
> >>
> >> I switched to raid1 for data as soon as I noticed what had happened.
> >
> > Just a PSA: btrfs raid1 does not have a concept of automatic degraded
> > mount in the face of a device failure. By default systemd will not
> > even attempt to mount it if devices are missing. And it's not advised
> > to use 'degraded' mount option in fstab. If you do need to mount
> > degraded and later the missing device is found (?) you need to scrub
> > to catch up the formerly missing device to the current state.
> >
> That seems a step back,

Yes. You do gain self-healing and unambiguous scrubs, which apply only
to the used portion of the drives. Three steps forward, half a step
back? The priority would be to replace the failing/failed drive before
a reboot. And yes, btrfs raid1 is weaker in the use case where a drive
dies and an unattended boot is needed.
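
To illustrate the recovery path (device names here are hypothetical,
not from anyone's actual setup):

    # one device missing: a normal mount fails, so mount degraded
    mount -o degraded /dev/sda3 /mnt

    # after the missing device reappears, mount with all devices
    # present and scrub to catch it up to the current state
    btrfs scrub start /mnt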

> in my current scenario almost all my machines
> have 2 disks (not hot-swap) in raid1 with mdadm and the partitions are
> ext4, so when a disk fails:
>
> the machine keeps working until I do a shutdown

Same on btrfs raid1. The gotcha is with the reboot while the device is
still failed and not yet replaced.

It is also valid to do btrfs on mdadm raid1, in which case you get
integrity checking, but lose the self-healing of btrfs native raid1.
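
A rough sketch of the two layouts, with hypothetical device names:

    # btrfs on mdadm raid1: mirror two partitions, put btrfs on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.btrfs /dev/md0

    # btrfs native raid1: btrfs mirrors data and metadata itself,
    # and can repair a bad copy from the good one (self-healing)
    mkfs.btrfs -d raid1 -m raid1 /dev/sda2 /dev/sdb2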

> replace the disk

'btrfs replace' handles this.

There will be a "decoder ring" guide mapping familiar lvm/mdadm
commands to their btrfs equivalents. A super rough draft, more of a
quick and dirty concept, is here:
https://fedoraproject.org/wiki/User:Chrismurphy/lvm2btrfs

And hopefully this will expand into "how do I?" step-by-step use cases.

Of course, in the single-drive case folks aren't expected to know btrfs
commands. If they want more detailed info, however, it's there.
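
As a taste of the decoder ring idea, a pre-emptive mirrored-drive swap
might map roughly like this (illustrative devices, not a full recipe):

    # mdadm raid1: add the new member, then fail and remove the old one
    mdadm /dev/md0 --add /dev/sdc1
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

    # btrfs raid1: one command adds the new device, replicates the
    # block groups, and removes the old device
    btrfs replace start /dev/sdb2 /dev/sdc2 /mnt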

>
> start the machine with all services running
>
> create the disk partitions, if needed reboot again, and add the
> partitions to the raid devices

Offhand I don't think a reboot is needed. You can partition the
replacement drive and then do 'btrfs replace', whether it's pre-emptive
or for a missing device. This command combines mkfs, device add, and
replication of raid1 block groups using a variation on scrub.
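
Sketched out for the missing-device case (hypothetical names; the
devid comes from the filesystem itself):

    # find the devid reported for the missing device
    btrfs filesystem show /mnt

    # partition the new drive, then point replace at the missing devid
    btrfs replace start 2 /dev/sdc2 /mnt
    btrfs replace status /mnt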


> So it seems with btrfs raid, in my situation, more downtime will be
> required. But I always do custom partitioning, so this change doesn't
> impact me. If more people will be using btrfs, though, it needs more
> documentation.

Likely.


-- 
Chris Murphy