Re: lvresize and XFS, was: default file system

On Feb 27, 2014, at 3:32 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:

> 
> On Feb 27, 2014, at 3:02 PM, Jochen Schmitt <Jochen@xxxxxxxxxxxxxxx> wrote:
> 
>> On Thu, Feb 27, 2014 at 04:08:46PM -0500, James Wilson Harshaw IV wrote:
>>> A question I have is: is XFS worth it?
>> 
>> I have done some testing with RHEL 7 Beta, which uses XFS as the default file system.
>> 
>> I have to say that the -r switch of the lvresize command doesn't cooperate
>> with XFS the way it does with ext4.
> 
> Were you growing or shrinking the fs, and was it mounted at the time, and what error did you get? XFS doesn't support shrinking, and can only be grown online. I'm pretty sure lvresize -r supports xfs_growfs via fsadm.

worksforme

Starting with a 10T XFS LV in a VG of five 5T disks.
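
For anyone who wants to reproduce this, the setup was roughly along these
lines (a sketch; the device names here are made up):

# pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# vgcreate VG /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# lvcreate -L 10T -n LV VG
# mkfs.xfs /dev/VG/LV
# mount /dev/VG/LV /mnt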


# lvresize -r -v --size 15T VG/LV
    Finding volume group VG
    Executing: fsadm --verbose check /dev/VG/LV
fsadm: "xfs" filesystem found on "/dev/mapper/VG-LV"
fsadm: Skipping filesystem check for device "/dev/mapper/VG-LV" as the filesystem is mounted on /mnt
    fsadm failed: 3
    Archiving volume group "VG" metadata (seqno 2).
  Extending logical volume LV to 15.00 TiB
    Loading VG-LV table (253:0)
    Suspending VG-LV (253:0) with device flush
    Resuming VG-LV (253:0)
    Creating volume group backup "/etc/lvm/backup/VG" (seqno 3).
  Logical volume LV successfully resized
    Executing: fsadm --verbose resize /dev/VG/LV 16106127360K
fsadm: "xfs" filesystem found on "/dev/mapper/VG-LV"
fsadm: Device "/dev/mapper/VG-LV" size is 16492674416640 bytes
fsadm: Parsing xfs_info "/mnt"
fsadm: Resizing Xfs mounted on "/mnt" to fill device "/dev/mapper/VG-LV"
fsadm: Executing xfs_growfs /mnt
meta-data=/dev/mapper/VG-LV      isize=256    agcount=10, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2684354550, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2684354550 to 4026531825
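
The numbers line up: 15 TiB = 16492674416640 bytes = 16106127360K, and with
4096-byte blocks and agsize=268435455 (one block shy of 1 TiB per AG), going
from 10 to 15 allocation groups gives 15 x 268435455 = 4026531825 data
blocks, which matches the last line above.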

# df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/VG-LV   15T   33M   15T   1% /mnt
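
For comparison, the same grow without the -r/fsadm step would be roughly
this (a sketch, assuming the fs stays mounted on /mnt):

# lvextend -L 15T VG/LV
# xfs_growfs /mnt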


However, I don't know what "fsadm failed: 3" means.
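
fsadm is a shell script, so one way to chase that down (just a rough
pointer) would be something like:

# grep -n "cleanup\|exit" /usr/sbin/fsadm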


Chris Murphy
-- 
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct




