Re: Data migration for replacing HDD with SSD - suggestions?


 



On Sun, Dec 17, 2017 at 2:55 PM, Tony Nelson
<tonynelson@xxxxxxxxxxxxxxxxx> wrote:
> On 17-12-17 14:38:02, Chris Murphy wrote:
>  ...
>>
>> smartctl -l scterc /dev/
>>
>> This will reveal if this drive supports SCT ERC. If it does, and it's
>> not enabled, it can be set to an obscenely high value, and then a
>> matching SCSI block device command timer might permit deep recovery by
>> the drive's firmware. And then you can just rsync the data from one
>> drive to another. I would not depend on SMB for this.
>
>  ...
>
> Was it you who told us of a script to cope with drives that don't
> support SCT ERC?
>
> https://raid.wiki.kernel.org/index.php/Timeout_Mismatch


That's for multiple-device setups not using hardware RAID (mdadm, LVM,
or Btrfs raids and concats). Basically you need the drive to time out
before the SCSI command timer does, hence you want the drive to have a
short recovery time; that page suggests 70 deciseconds (7 seconds).
The drive times out, i.e. stops trying to read the bad sector,
produces a discrete read error with a sector address, and then md,
LVM, or Btrfs can get a copy from some other device and do a repair.
If the drive does not support SCT ERC, then increasing the block layer
command timer to something extreme ensures the drive can produce this
read error before the command expires. The kernel tracks every SCSI
command sent to a drive (SCSI, SATA, and USB block devices) and puts a
timer on it; by default this timer is 30 seconds. If the command
hasn't completed correctly, nor produced a discrete error, by the time
the timer expires, the kernel will reset the device - which is pretty
bad behavior almost always.
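For the RAID case, the short recovery time that wiki page recommends can
be set like this. This is only a sketch: /dev/sdX is a placeholder for
your actual device, the commands need root, and the drive must support
SCT ERC for the set to take effect. Note the setting typically does not
survive a power cycle, so it's usually reapplied from a udev rule or
boot script.

```shell
# Set SCT ERC read and write timeouts to 70 deciseconds (7 seconds),
# so the drive gives up well before the kernel's default 30 second
# command timer and the RAID layer can repair from another copy.
smartctl -l scterc,70,70 /dev/sdX

# Verify the setting took effect.
smartctl -l scterc /dev/sdX
```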

Anyway, in the single-device scenario, there are drives floating
around that support SCT ERC but have it disabled, and therefore it's
unknown what their timeout is. So I'd set the drive's SCT ERC to 180
seconds (1800 deciseconds, which is the unit used by smartctl -l
scterc), *and* also increase the SCSI command timer for that device to
180 seconds as well. If it hasn't recovered in 3 minutes, it's not
recoverable.
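A sketch of that single-device recovery setup, again with /dev/sdX and
sdX as placeholders for your actual device (both values need root to
set, and neither persists across reboot):

```shell
# Allow the drive up to 180 seconds (1800 deciseconds) of internal
# error recovery per command.
smartctl -l scterc,1800,1800 /dev/sdX

# Raise the kernel's SCSI command timer to match, so the kernel
# doesn't reset the link while the drive is still retrying.
echo 180 > /sys/block/sdX/device/timeout
```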




>
> OP: generally, if a drive can't read a sector in a few seconds, it
> won't ever be able to read that sector.

You'd think so, but there are cases where the deep recoveries are well
over a minute. It's hard to imagine why, but I've seen it happen.
For RAID setups you obviously do want it to error out fast, because
you have another copy on another device, so it's best if the drive
gives up quickly; that might be what you're used to. There's a lot of
data about drives accumulating marginally readable sectors (hard
drives anyway), and the manifestation of this is a sluggish system,
really sluggish. You'll see Windows boards full of this, and people
are invariably told to just reinstall Windows, and it fixes the
problem, leading to this myth that it's a "crusty file system" with
too much junk. It's bull. It's just that the Windows kernel has very
high command timeouts, so it will wait for the drive to return the
requested sector for a really long time. On Linux by default this
turns into piles of link reset errors, because the kernel gives up
well before the drive produces a discrete read error.

And yeah, reinstalling fixes it because the sectors are being
overwritten. So those sectors get a clean, clear signal, and any
sector that fails to write is removed from use, with its LBA remapped
to a reserve sector.



> Possibly more data can be
> recovered (with some holes) using GNU ddrescue, or the alternative
> dd_rescue with dd_rhelp.  Note that either would be used to copy a
> whole partition or disk.


For several reasons I don't recommend imaging file systems, other than
to have a backup to work on and recover data from. But if you need to
get data off a volume and put it onto another device for production
use, use rsync or a file system specific cloning tool like xfs_copy or
a Btrfs seed device (or subvolume send/receive).
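A sketch of the rsync approach, assuming the old and new file systems
are mounted at the placeholder paths below (the trailing slashes
matter to rsync: they mean "copy the contents of", not the directory
itself):

```shell
# Copy everything, preserving permissions, times, owners, ACLs (-A),
# extended attributes (-X), and hard links (-H), with overall progress.
rsync -aAXH --info=progress2 /mnt/old-hdd/ /mnt/new-ssd/
```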

-- 
Chris Murphy
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx


