Re: How does one enable SCTERC on an NVMe drive (and other install questions)

Hi Eddie,

On 6/21/21 1:00 AM, Edward Kuns wrote:
1) Topic one - SCTERC on NVMe

I'm in the middle of installing Linux on a new PC.  I have a pair of
1TB NVMe drives.  All of my non-NVMe drives support "smartctl -l
scterc,70,70" but the NVMe drives do not seem to.  How can one ensure
that SCTERC is configured properly in an NVMe drive that is part of a
software RAID constructed using mdadm?  Is this an issue that has been
solved or asked or addressed before?  The searching I did didn't bring
anything up.

You can't, if the firmware doesn't allow it. Do try reading the values before writing them, though. I've seen SSDs that start up with "40,40" and refuse changes. 40,40 is fine.

Please report here any brands/models that either don't support SCTERC at all or are stuck on disabled, so the rest of us can avoid them.
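For the record, the check-before-set sequence I'd use looks like this. Device names are only examples, substitute your own:

    # Read the current SCT ERC values first:
    smartctl -l scterc /dev/sda

    # If the firmware allows it, set read/write recovery to 7.0 seconds:
    smartctl -l scterc,70,70 /dev/sda

    # The same query against an NVMe namespace won't get you anywhere,
    # since SCT ERC is an ATA command set that NVMe doesn't implement:
    smartctl -l scterc /dev/nvme0n1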

2) Topic two - RAID on /boot and /boot/efi

It looks like RHEL 8 and clones don't support the installer building
LVM on top of RAID as they used to.  I suspect the installer would
prefer that, if I want LVM, I use the RAID built into LVM at this
point.  But it looks to me like the mdadm tools are
still much more mature than the RAID built into LVM.  (Even though it
appears that this is built on top of the same code base?)

This means I have to do that work before running the installer, by
running the installer in rescue mode, then running the installer and
"reformatting" the partitions I have created by hand.  I haven't gone all
the way through this process but it looks like it works.  It also
looks like maybe I cannot use the installer to set up RAID mirroring
for /boot or /boot/efi.  I may have to set that up after the fact.  It
looks like I have to use metadata format 1.0 for that?  I'm going to
go through a couple experimental installs to see how it all goes
(using wipefs, and so on, between attempts).  I've made a script to do
all the work for me so I can experiment.

The good thing about this is it gets me more familiar with the
command-line tools before I have an issue, and it forces me to
document what I'm doing in order to set it up.  One of my goals for
this install is that any single disk can fail, including a disk
containing / or /boot or /boot/efi, with a simple recovery process of
replacing the failed disk and rebuilding an array and no unscheduled
downtime.  I'm not sure it's possible (with /boot and /boot/efi in
particular) but I'm going to find out.  All I can tell from research
so far is that metadata 1.1 or 1.2 won't work for such partitions.

I don't use CentOS, but you seem to be headed down the same path I would follow. And yes, use metadata v1.0 so the BIOS can treat /boot/efi as a normal partition. You may be able to use other metadata on /boot itself if the grub shim in /boot/efi supports it. (Not sure on that.)
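If it helps, here's a rough sketch of how I'd create the /boot/efi mirror by hand before the installer runs. The device and array names are just placeholders for whatever your layout ends up being:

    # v1.0 puts the md superblock at the end of the partition, so the
    # firmware still sees an ordinary FAT filesystem at the start:
    mdadm --create /dev/md/efi --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/nvme0n1p1 /dev/nvme1n1p1

    # Format it as the ESP; the installer can then reuse or reformat it:
    mkfs.fat -F32 /dev/md/efi

The same approach applies to /boot if it turns out you need v1.0 there as well.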

[trimmed what Wol responded to]

Regards,

Phil


