Re: if we want to advocate btrfs

On Sat, Jul 18, 2020 at 12:51 PM Andy Mender <andymenderunix@xxxxxxxxx> wrote:
>
> On Sun, 12 Jul 2020 at 18:09, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>>
>> On Sun, Jul 12, 2020 at 5:39 AM Andy Mender <andymenderunix@xxxxxxxxx> wrote:
>> >
>> > On updates, a single automatic corrupted snapshot can
>> > potentially hose the entire snapshotted volume.
>>
>> How do you mean? If this is a sort of superficial corruption like a
>> bad/failed/partial update, inconsistency between package manager and
>> what's installed - this can be self-contained to a specific snapshot.
>> One possible idea for updates is snapshot and do the update out of
>> band (not the current running sysroot) on a snapshot. If the update
>> fails for whatever reason, destroy the snapshot. Corruption that
>> affects multiple subvolumes wouldn't be related to snapshotting, but
>> the shared trees: extent, chunk, csum, uuid, etc. trees.
>
>
> I'm sorry, I should've been a little more specific. What I meant was that a corrupted snapshot can potentially impact the subvolume and put it into a state where simply deleting the latest snapshot either doesn't help or can't easily be done.

It depends on the nature of the corruption. If it's file system
corruption, e.g. due to some kind of hardware problem, then yeah it
could affect any number of things and not be isolated to a particular
subvolume.

Whereas if I make a snapshot and intentionally corrupt it, e.g. by
deleting half the files in it, and truncating the rest - all of those
changes are isolated to the snapshot and in no way affect the original
subvolume.
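The isolation described above can be sketched with plain btrfs commands. The paths (/mnt/root, /mnt/snap) are placeholders, and this assumes a mounted btrfs volume and root privileges - a sketch, not a tested recipe:

```shell
# Take a writable snapshot of the root subvolume.
btrfs subvolume snapshot /mnt/root /mnt/snap

# Vandalize the snapshot: deletions and truncations here only create
# new state in the snapshot's own copy-on-write tree...
find /mnt/snap/etc -type f -name '*.conf' -delete
truncate -s 0 /mnt/snap/etc/motd

# ...while the original subvolume's files are untouched.
cat /mnt/root/etc/motd

# A snapshot that is no longer wanted is simply dropped.
btrfs subvolume delete /mnt/snap
```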

>>
>>
>> > Also, if your system is almost broken after the change,
>> > no snapshot will help.
>>
>> I'm not sure about the nature of the brokenness in your example. Btrfs
>> does have a concept of a volume wide snapshot, which is the seed
>> device. The file system is merely marked read-only, but can have a
>> second device added that accepts all writes. If this two device volume
>> were to become irreversibly confused, it'd still be possible to revert
>> to the read-only device - even temporarily - as a kind of "recovery"
>> boot. With extreme prejudice, a true factory reset is possible by
>> wiping the read-write 2nd device and starting over. It's also possible
>> to use it for replication - by adding a 2nd device and removing the
>> 1st, an exact copy is made. This is a whole separate ball of wax, and
>> while there are ideas how it might be leveraged, there's no plan to do
>> so yet.
>>
> I agree, but it requires adding a second device, and sometimes that's tricky or not possible at all.

Yeah, anyone using that particular feature would have something like
an A/B partition setup as the two devices. The A partition would be
the read-only "recovery" image, and the second device is just a second
partition that accepts the changes from A. And in fact, you can boot
either A or B. Or you could even boot C, made from A and a ramdisk as
a volatile 2nd device.
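The seed-device flow above maps roughly to the following commands. The device names are illustrative, and this is a sketch of the workflow rather than a tested recipe:

```shell
# Mark the 'A' file system as a read-only seed device.
btrfstune -S 1 /dev/sda1

# Mount it (read-only, since it's a seed) and add a writable 'B'
# device; after remounting read-write, all new writes land on B.
mount /dev/sda1 /mnt
btrfs device add /dev/sdb1 /mnt
mount -o remount,rw /mnt

# Factory reset: wipe B and re-add it, reverting to the pristine A image.
# Replication: add a fresh device, then remove A - an exact copy remains.
```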

> I extrapolated a lot, but sometimes btrfs tools are marketed as a "catch-all" that can save the user from accidental installations or updates, and that's not always true.

Do you have an example? I'm not sure I follow.

In my case, I'm not using any automated tools. I don't consistently
take snapshots before updates. If I don't make a snapshot, and foul an
update somehow, Btrfs offers no magic solution for that. If I do make
a snapshot first - I can do (and have done) truly vile and malicious
things like rm -rf / and watch everything pancake; but I can pull the
plug, boot, point to the snapshot as the new root, and the system
boots fine. The messed up original root subvolume can just be deleted.
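The manual rollback described above might look like this. The subvolume names are examples (Fedora's default layout names the root subvolume "root"), and set-default accepting a path assumes reasonably recent btrfs-progs - again a sketch, not a recipe:

```shell
# Before the risky change: keep a snapshot of the root subvolume.
btrfs subvolume snapshot / /pre-update

# After the disaster, from a rescue boot with the top-level volume
# mounted at /mnt: make the snapshot the default subvolume so the
# next boot uses it as the new root (alternatively, boot once with
# rootflags=subvol=pre-update on the kernel command line).
btrfs subvolume set-default /mnt/pre-update

# The wrecked original root can then be deleted at leisure.
btrfs subvolume delete /mnt/root
```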


-- 
Chris Murphy
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



