1) Topic one - SCTERC on NVMe

I'm in the middle of installing Linux on a new PC. I have a pair of 1TB NVMe drives. All of my non-NVMe drives support "smartctl -l scterc,70,70", but the NVMe drives do not seem to. How can one ensure that SCTERC (or whatever its NVMe equivalent is) is configured properly on an NVMe drive that is part of a software RAID built with mdadm? Has this been asked or addressed before? The searching I did didn't turn anything up. (The closest workaround I know of is the first sketch in the P.S. below.)

2) Topic two - RAID on /boot and /boot/efi

It looks like RHEL 8 and its clones no longer support the installer building LVM on top of RAID as they used to. I suspect the installer would prefer that, if I want LVM, I use the RAID built into LVM at this point. But the mdadm tools still look much more mature to me than LVM's RAID (even though the latter appears to be built on top of the same code base?). This means I have to do that work before running the installer: boot the installer in rescue mode, build the arrays and LVM by hand, then run the installer and "reformat" the partitions I created (second sketch in the P.S.). I haven't gone all the way through this process, but it looks like it works.

It also looks like I may not be able to use the installer to set up RAID mirroring for /boot or /boot/efi; I may have to set that up after the fact. It looks like I have to use metadata format 1.0 for that? (Third sketch in the P.S.)

I'm going to go through a couple of experimental installs to see how it all goes (using wipefs, and so on, between attempts; fourth sketch). I've made a script to do all the work for me so I can experiment. The good thing about this is that it gets me more familiar with the command-line tools before I have an issue, and it forces me to document what I'm doing in order to set it up.

One of my goals for this install is that any single disk can fail, including a disk containing / or /boot or /boot/efi, with a simple recovery process of replacing the failed disk and rebuilding an array, and no unscheduled downtime (fifth sketch). I'm not sure it's possible (with /boot and /boot/efi in particular), but I'm going to find out. All I can tell from research so far is that metadata 1.1 or 1.2 won't work for such partitions.

3) Topic three - WD Red vs Red Plus vs Red Pro

In the Wiki, it might be worth mentioning that while plain WD Red drives are currently shingled (SMR), the Red Plus and Red Pro are not. I can search again and provide links to that information if it would help. I thought I had bought bad drives (Red Pro) but then discovered that the Red Pro is CMR, not SMR. Whew.

While this page is clear about the difference between Red, Red Pro, and Red Plus:

https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

this page is not:

https://raid.wiki.kernel.org/index.php/Drive_Data_Sheets

I would be happy to propose some text changes if that would help.

4) Topic four - Wiki

Would it be worth it if I documented some of the work I've gone through to get this set up? I'm just an enthusiast who works with RHEL at my employer and has been running Red Hat in some form or another at home since 1996, but I'm not a sysadmin. I'm certain it's overkill to try to ensure that every single filesystem is ultimately on some form of mdadm RAID, but I just don't want to deal with unscheduled downtime any longer.

Thanks,
Eddie
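
P.S. Rough sketches of the commands behind the points above, in case they help anyone reproduce this. Device names (/dev/sda, /dev/nvme0n1, and so on) are just examples from my own layout.

First, the timeout question from topic one. The SATA half is what I already do; the NVMe half is only my best guess at a workaround, since I can't find any SCT ERC equivalent for NVMe - raising the driver timeout at least avoids the kernel giving up before the drive does, which is the failure mode the Timeout_Mismatch page warns about:

    # SATA: check and set SCT ERC (values are in tenths of a second)
    smartctl -l scterc /dev/sda
    smartctl -l scterc,70,70 /dev/sda

    # NVMe: no SCT ERC that I can find; the only related knob I know of
    # is the nvme driver's I/O timeout (in seconds, default 30)
    cat /sys/module/nvme_core/parameters/io_timeout
    # to make a change persistent, something like nvme_core.io_timeout=60
    # on the kernel command line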
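
Second, the pre-installer work in rescue mode. This is roughly what my script does for the main array; the array and volume-group names (pv0, vg0) are mine, and the idea is that the installer then sees the existing logical volumes and lets me "reformat" them:

    # main mirror for LVM, built by hand before the installer runs
    mdadm --create /dev/md/pv0 --level=1 --raid-devices=2 \
        --metadata=1.2 /dev/nvme0n1p3 /dev/nvme1n1p3
    pvcreate /dev/md/pv0
    vgcreate vg0 /dev/md/pv0
    lvcreate -n root -L 50G vg0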
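
Third, /boot and /boot/efi. As I understand it, the point of metadata 1.0 is that the superblock lives at the end of the device, so the firmware (and the bootloader) still sees a plain filesystem at the start of each member; 1.1 and 1.2 put the superblock at or near the start, which is why they won't work here. A sketch:

    # EFI system partition as RAID1 with end-of-device metadata
    mdadm --create /dev/md/efi --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/nvme0n1p1 /dev/nvme1n1p1
    mkfs.vfat -F 32 /dev/md/efi

    # same idea for /boot
    mdadm --create /dev/md/boot --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/nvme0n1p2 /dev/nvme1n1p2
    mkfs.ext4 /dev/md/boot

One caveat I've seen mentioned: if the firmware itself ever writes to the ESP, it writes to a single member behind md's back, so the mirror can silently diverge.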
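
Fourth, the teardown my script runs between experimental installs, so each attempt starts clean:

    # stop all running arrays, then scrub old signatures from the members
    mdadm --stop --scan
    wipefs -a /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3
    wipefs -a /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3
    # belt and braces for any md superblocks wipefs might miss
    mdadm --zero-superblock /dev/nvme0n1p3 /dev/nvme1n1p3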
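
Fifth, the recovery drill I'm aiming for when a disk dies (the sgdisk syntax here is from memory, so double-check it before trusting it):

    # mark the dead disk's members failed and remove them (one array shown)
    mdadm /dev/md/boot --fail /dev/nvme0n1p2 --remove /dev/nvme0n1p2
    # ...physically replace the disk...
    # copy the partition table from the surviving disk, then randomize GUIDs
    sgdisk -R /dev/nvme0n1 /dev/nvme1n1
    sgdisk -G /dev/nvme0n1
    # re-add the new members and let the arrays rebuild
    mdadm /dev/md/boot --add /dev/nvme0n1p2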