Re: Btrfs going forward, was: Errors on an SSD drive




On Fri, Aug 11, 2017 at 11:37 AM, hw <hw@xxxxxxxx> wrote:

> I want to know when a drive has failed.  How can I monitor that?  I've begun
> to use btrfs only recently.

Maybe check out epylog and have it monitor for Btrfs messages. That's
your earliest warning, because Btrfs will complain about any csum
mismatch even if the hardware is not reporting problems. For impending
drive failures, your best bet is still smartd, even though the stats
suggest it only predicts drive failures maybe 60% of the time.
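To make that concrete, here's a minimal sketch of the kind of pattern a
log watcher (epylog, or just a cron'd grep) could flag; the sample line
is illustrative of a typical kernel csum complaint, not output from any
real machine:

```shell
# Illustrative Btrfs csum complaint, roughly what the kernel logs on a
# checksum mismatch (the device, inode and csum values are made up):
sample='BTRFS warning (device sda1): csum failed ino 257 off 0 csum 0x98f94189 expected csum 0x2566e550'

# The pattern a watcher could match on:
echo "$sample" | grep -Eq 'BTRFS.*csum failed' && echo 'would alert'

# Against a live system you'd scan the kernel log instead, e.g.:
#   journalctl -k | grep -E 'BTRFS.*csum failed'
```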





>Chris Murphy wrote:
>> There's 1500 to 3000 line changes to Btrfs code per kernel release.
>> There's too much to backport most of it. Serious fixes do get
>> backported by upstream to longterm kernels, but to what degree, you
>> have to check the upstream changelogs to know about it.
>>
>> And right now most backports go to only 4.4 and 4.9. And I can't tell
>> you what kernel-3.10.0-514.10.2.el7.x86_64.rpm translates into, that
>> requires a secret decoder ring near as I can tell as it's a kernel
>> made from multiple branches,  and then also a bunch of separate
>> patches.
>
>
> So these kernels are a mess.  What's the point of backports when they aren't
> done correctly?

*sigh* Can we try to act rationally instead of emotionally?
Backporting is fucking hard. Have you bothered to look at kernel code
and how backporting is done? Or do you just assume it's as trivial as
microwaving a hot pocket? If it were easy, it would be automated. It's
not easy. A human has to look at the new code, new fixes for old
problems, and graft them onto the old way of doing things, and very
often the new code does not apply cleanly to old kernels. That's just
a fact. So now that person has to come up with an equivalent fix using
the old methods. That's a backport.

It is only messy to an outside observer, which includes me. The people
doing the work at Red Hat clearly understand it; the whole point is to
have a thoroughly understood, stable, conservative kernel. They're
very picky about taking on new features, which tend to bring new
regressions with them.



> This puts a big stamp "stay away from" on RHEL/Centos.

It comes down to picking your battles. It is completely legitimate to
use CentOS for stability elsewhere, and run a nearly-upstream kernel
from elrepo.org or Fedora.

Offhand I'm not sure who is building CentOS-compatible kernel packages
based on upstream longterm. A really good compromise right now is the
4.9 series, so if someone has a 4.9.42 kernel somewhere, that'd be
neat. It's not difficult to build one yourself either, for that
matter. I can't advise you on the Nvidia stuff, though.


>Chris Murphy wrote:
>> Red Hat are working on a new user space wrapper and volume format
>> based on md, device mapper, LVM, and XFS.
>> http://stratis-storage.github.io/
>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>>
>> It's an aggressive development schedule and as so much of it is
>> journaling and CoW based I have no way to assess whether it ends up
>
>
> So in another 15 or 20 years, some kind of RH file system might become
> usable.

Lovely, more hyperbole...

Read the document. It talks about an initial production-quality
release in the first half of next year. It admits they're behind,
*and* it also says they can't wait 10 more years. So maybe 3? Maybe 5?
I have no idea. File systems are hard. Backups are good.


>Chris Murphy wrote:
>> tested. But this is by far the most cross platform solution: FreeBSD,
>> Illumos, Linux, macOS. And ZoL has RHEL/CentOS specific packages.
>
>
> That can be an advantage.
>
> What is the state of ZFS for Centos?  I'm going to need it because I have
> data on some disks that were used for ZFS and now need to be read by a
> machine running Centos.
>
> Does it require a particular kernel version?

Well, not to be a jerk but RTFM:
http://zfsonlinux.org/

It's like - I can't answer your question without reading it myself. So
there you go. I think it's DKMS based, so it has some kernel
dependencies, but I think it's quite a bit more tolerant of different
kernel versions while maintaining the same relative ZFS feature/bug
set for a particular release - basically it's decoupled from Linux.



>> But I can't tell you for sure what ZoL's faulty device behavior is
>> either, whether it ejects faulty or flaky devices and when, or if,
>> like Btrfs, it just tolerates them.
>
>
> You can monitor the disks and see when one has failed.


That doesn't tell me anything about how it differs from anything else.
mdadm offers email notifications as an option; LVM has its own
notification system I haven't really looked at, but I don't think it
includes email notifications; smartd can do emails, but also dumps
standard messages to dmesg.
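For comparison, the mdadm and smartd email setups look roughly like
this; the address and device are placeholders, not a recommendation:

```shell
# /etc/mdadm.conf -- mdadm's monitor mode mails this address on array events
MAILADDR admin@example.com

# /etc/smartd.conf -- smartd can mail on SMART trouble (-m) while still
# logging to syslog/dmesg; -a enables the default set of checks
/dev/sda -a -m admin@example.com
```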


>
>> The elrepo.org folks can still sanely set CONFIG_BTRFS_FS=m, but I
>> suspect if RHEL unsets that in RHEL 8 kernels, that CentOS will do the
>> same.
>
>
> Sanely?  With the kernel being such a mess?

I don't speak for elrepo, and I have no idea how their config options
differ from RHEL's or CentOS's. But I do know elrepo offers stable
upstream kernels very soon after kernel.org posts them. It seems
completely reasonable to me for them to include the Btrfs module. If
there's a big regression that bites people in the ass, you can rest
assured you will not be the only person pissed off. Btrfs has had very
few regressions in the kernel for a few years now. The maintainers run
the riskier patches for months, and sometimes even once they're in the
mainline kernel they aren't the default (for example, the v2 space
cache has been in the kernel since 4.5, but is still not the default
in 4.13).
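For anyone curious, the v2 space cache is opt-in via a mount option;
an /etc/fstab entry would look something like this (the UUID and
mountpoint are placeholders):

```shell
# /etc/fstab -- opting in to the v2 free space cache (needs kernel >= 4.5);
# the first mount with this option rebuilds the cache and can take a while
UUID=0123abcd-placeholder  /mnt/data  btrfs  defaults,space_cache=v2  0 0
```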



-- 
Chris Murphy
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos



