Re: Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

On Tue, Mar 5, 2019 at 12:45 AM Cesare Leonardi <celeonar@xxxxxxxxx> wrote:
On 02/03/19 21:25, Nir Soffer wrote:
> # mkfs.xfs /dev/test/lv1
> meta-data=/dev/test/lv1      isize=512    agcount=4, agsize=25600 blks
>           =                       sectsz=512   attr=2, projid32bit=1
>           =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
> data     =                       bsize=4096   blocks=102400, imaxpct=25
>           =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=855, version=2
>           =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

Does the problem here have the same root cause as for ext4? I guess
sectsz should be >= 4096 to avoid trouble, shouldn't it?
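
For xfs the sector size can both be inspected and forced at mkfs time.
A minimal sketch, reusing /dev/test/lv1 from the example above (note
that running xfs_info directly on an unmounted device needs a
reasonably recent xfsprogs, and that mkfs.xfs recreates the filesystem,
destroying its contents):

xfs_info /dev/test/lv1                   # look for "sectsz=" in the output
mkfs.xfs -f -s size=4096 /dev/test/lv1   # recreate with 4096-byte sectors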

Just to draw some conclusions: could we say that currently, if we are
going to move data around with LVM, it's better to check that the
filesystem is using a block size >= "blockdev --getbsz
DESTINATIONDEVICE"? At least with ext4 and xfs.

That said, this may not hold for really small devices (< 500 MB),
where mkfs typically defaults to a smaller block size.
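
As a concrete version of that check, a sketch with /dev/sdb standing
in for the hypothetical pvmove destination and /dev/test/lv1 for the
LV holding an ext4 filesystem:

blockdev --getbsz /dev/sdb                     # kernel block size of destination
blockdev --getss /dev/sdb                      # logical sector size
blockdev --getpbsz /dev/sdb                    # physical sector size
tune2fs -l /dev/test/lv1 | grep 'Block size'   # ext4 filesystem block size

If the filesystem reports a smaller block size than the destination
device, that is the situation this thread is about.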

Is there already an open bug regarding the problem discussed in this thread?

There is this bug about lvextend:

And this old bug from 2011, discussing mixing PVs with different block
sizes. Comment 2 is very clear about this issue:
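
A quick way to spot mixed sector sizes before extending a VG or
running pvmove is to compare the PVs directly. A sketch, with
hypothetical device names (the LOG-SEC/PHY-SEC columns need a recent
util-linux):

pvs -o pv_name,vg_name                            # which PVs belong to which VG
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda /dev/sdb   # compare sector sizes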

Nir
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
