thank you, very valuable!
On 09.04.23 at 20:53, Roger Heflin wrote:
On Sun, Apr 9, 2023 at 1:21 PM Roland <devzero@xxxxxx> wrote:
Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs. You can only shrink fs'es (the ones that can be shrunk)
by trimming off the end and making them smaller.
yes, that's clear to me.
It makes zero sense to be able to remove a block in the middle of an LV,
because just about everything that uses LVs has no support for a block
disappearing from the middle.
yes, that criticism is totally valid. from a fs point of view you completely
corrupt the volume, that's clear to me.
What is your use case that you believe removing a block in the middle
of an LV needs to work?
my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks that can't be used otherwise...
my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.
whenever more sectors get broken, i'd like to remove the broken ones to have
a usable lv without broken sectors.
since you need to rebuild your data anyway for that disk, you can also
recreate the whole logical volume.
my question and my project are a little bit academic. i simply want to try
out how much use you can get from some dying disks which are trash otherwise...
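to sketch what i mean (device name, vg name and extent numbers are all made
up here): if a scan showed that physical extents 120-129 of /dev/sdb1 sit on
bad sectors, the lv could be allocated around them by listing explicit PE
ranges:

    # allocate 1000 extents, skipping PEs 120-129 of /dev/sdb1
    lvcreate -n lv_salvage -l 1000 vg_salvage /dev/sdb1:0-119 /dev/sdb1:130-1009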
the manpage says this:

    Resize an LV by specified PV extents.

    lvresize LV PV ...
        [ -r|--resizefs ]
        [ COMMON_OPTIONS ]

so that sounds as if i can resize in either direction by specifying extents.
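(as far as i understand it, though, the PV arguments only control where new
extents get allocated when *growing* -- something like, with made-up names:

    # extend vg/lv by 100 extents taken from a specific PE range of /dev/sdc1
    lvresize -l +100 vg/lv /dev/sdc1:0-99

when shrinking, extents are always taken from the logical end of the LV,
regardless of any PV arguments -- which matches what you said above.)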
Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.
yes, pvmove is the other approach for that.
but will pvmove carry on and finish no matter what when moving extents
located on a bad sector?
the data may be corrupted anyway, so i thought it's better to skip it.
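(for reference, the specific-extent form of pvmove looks like this -- device
names and extent numbers made up:

    # move only physical extents 120-129 off /dev/sdb1 onto /dev/sdc1
    pvmove /dev/sdb1:120-129 /dev/sdc1

my worry is exactly that pvmove has to read those extents in order to copy
them, so extents sitting on unreadable sectors may stall or abort the move.)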
what i'm really after is some "remap a physical extent to a healthy/reserved
section and let zfs self-healing do the rest". just like "dismiss the problematic
extents and replace them with healthy extents".
i'd prefer remapping over removing a PE, as removing will invalidate
the whole LV....
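(a sketch of how that remap might look with stock lvm tools, assuming a healthy
area of the same PV has been kept unallocated as a reserve -- numbers made up:

    # relocate suspect PEs 120-129 into reserved PEs 9000-9009 on the same PV
    pvmove --alloc anywhere /dev/sdb1:120-129 /dev/sdb1:9000-9009

zfs would then see the same LV, with the bad area now backed by healthy
extents, and a scrub/self-heal could repair whatever data got lost.)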
roland
Create an LV per device, and when a device is replaced, lvremove that
device's LV. Once a sector/area is bad I would not trust those
sectors until you replace the device. You may be able to try the
pvmove multiple times and the disk may eventually be able to rebuild
the data.
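(A sketch of that scheme, with made-up names -- one LV pinned to each PV, so a
failing disk can be dropped wholesale:

    # one LV per disk, each confined to its own PV
    lvcreate -n lv_sdb -l 100%PVS vg /dev/sdb1
    lvcreate -n lv_sdc -l 100%PVS vg /dev/sdc1
    # when /dev/sdb1 dies, discard only its LV
    lvremove vg/lv_sdb)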
My experience with bad sectors is that once a sector reports bad, the disk
will often rewrite it at the same location and call it "good" when it is
going to report bad again almost immediately, or be a uselessly slow
sector. Sometimes the disk will remap the sector on a
re-write/successful read, but that seems unreliable.
On non-zfs fs'es I have found the "bad" file, renamed it to
badfile.#### and put it in a dir called badblocks. So long as the bad
block is in the file data, you can contain the badblock by
containing the bad file. And since most of the disk will be file
data, that is also a management scheme that does not require a fs
rebuild.
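(On ext2/3/4 the bad file can be located with debugfs -- the fs block and
inode numbers here are made up, and the bad sector's LBA has to be converted
to a fs block number first (sector * 512 / fs block size, minus the fs start
offset):

    # which inode owns fs block 123456?
    debugfs -R "icheck 123456" /dev/vg/lv
    # which path does inode 7890 map to?
    debugfs -R "ncheck 7890" /dev/vg/lv)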
The re-written sector may also be "slow", and it might be wise to treat
those sectors as bad; in the "slow" sector case pvmove should
actually work. For that you would need a badblocks that "timed" the
reads from disk and treats any sector taking longer than, say, 0.25
seconds as slow/bad. At 5400 rpm one revolution takes about 11 ms, so
0.25 s (250 ms) translates to around 22 failed re-read tries. If you time
it you may have to do some follow-up testing, re-reading the whole group
in smaller aligned reads, to figure out which sector in the main read was
bad. If you scanned often enough for slow sectors you might catch them
before they go completely bad. Technically the disk is supposed to do that
in its own surface scans, but even when I have turned the scans up to
daily it does not seem to act right.
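(A minimal sketch of such a timed scanner, assuming bash, dd, blockdev and
GNU date; the device name and the 0.25 s threshold are illustrative:

    #!/bin/bash
    # scan a disk in aligned 32 KiB chunks and flag any chunk whose
    # read takes longer than 0.25 s as slow/bad
    DEV=/dev/sdb
    CHUNK=64                                  # 64 x 512-byte sectors = 32 KiB
    SECTORS=$(blockdev --getsz "$DEV")        # size in 512-byte sectors
    for ((s = 0; s < SECTORS; s += CHUNK)); do
        t0=$(date +%s%N)
        dd if="$DEV" of=/dev/null bs=512 skip="$s" count="$CHUNK" \
           iflag=direct 2>/dev/null
        t1=$(date +%s%N)
        if (( t1 - t0 > 250000000 )); then    # 0.25 s in nanoseconds
            echo "slow/bad chunk starting at sector $s"
        fi
    done

A slow chunk could then be re-read sector by sector to pin down the bad
sector within it, as described above.)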
And I have usually found that the bad "units" are 8 groups of 8
512-byte sectors, for a total of around 32k (aligned on the disk).
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/