RE: noob question

Yeah, I walked into this one. Today is my seventh day at a new job; this system was already built when I got here, and now the business wants to expand the file system that houses pictures served up on the web. I'm pretty sure I wouldn't have chosen XFS for this app; it's overkill.

It is a VMware VM, so we do have snapshots. The first thing I did when I inherited this was to test them.

Everyone here is calling this thing an "appliance", and the version of Debian doesn't have parted or gdisk.

I'm thinking I'll go into the meeting I have on this in 40 minutes and ask for a vendor contact, and work with them to rebuild the whole thing with better planning.

-----Original Message-----
From: Stan Hoeppner [mailto:stan@xxxxxxxxxxxxxxxxx]
Sent: Monday, January 06, 2014 8:59 PM
To: Fitzgerald, Dan; xfs@xxxxxxxxxxx
Subject: Re: noob question

On 1/6/2014 4:11 PM, Fitzgerald, Dan wrote:
> I had our VMWare admin extend the file system on which /space is
> mounted (/dev/sda8).

Repeat at least three times:  Storage requires planning.

xfs_growfs only works on contiguous LBA sectors.  The free space to be grown into must begin one LBA sector after the last LBA sector of the current XFS filesystem.  If XFS resides on a partition, then the partition itself must be expanded into the free space before XFS can be grown into the newly expanded partition.  This seems to be your situation.  Resizing partitions is not a fun exercise, and if not done properly you can lose everything, literally.
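
A minimal sketch of that sequence, assuming a tool such as parted were available (yours apparently is not), that the free space sits immediately after /dev/sda8, and that /space is the mount point -- all placeholder names:

    parted /dev/sda unit s print        # confirm where partition 8 ends
    parted /dev/sda resizepart 8 100%   # grow partition 8 into the free space
    partprobe /dev/sda                  # have the kernel re-read the table
    xfs_growfs /space                   # grow XFS to fill the partition

Take a snapshot or backup first; a mistake at the resizepart step can destroy the partition table.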

If a block device is directly formatted with XFS things are easy.
Simply expand the block device capacity, then run xfs_growfs.
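
A rough illustration of that case, assuming the data disk shows up in the guest as /dev/sdb, is formatted directly with XFS, and is mounted at /space (names are illustrative only):

    # after the virtual disk has been enlarged on the VMware side
    echo 1 > /sys/class/block/sdb/device/rescan   # let the guest see the new size
    xfs_growfs /space                             # XFS grows to the end of the device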

Because of the limitations above, those wishing to add capacity in an ad hoc manner, as you seem to want here, put their block device space in LVM volumes instead of using partitions.  LVM can create a linear LBA address space from little pieces of capacity strewn all over the place on many different storage devices, regardless of their native LBA addressing.  I don't really care for this method either.
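
For illustration only, assuming a hypothetical volume group vg_data, a logical volume lv_space mounted at /space, and a newly added disk /dev/sdc:

    pvcreate /dev/sdc                        # initialize the new disk for LVM
    vgextend vg_data /dev/sdc                # add it to the volume group
    lvextend -L +50G /dev/vg_data/lv_space   # grow the logical volume
    xfs_growfs /space                        # then grow XFS into the new space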

I've worked with ESX and bare metal hosts on FC SANs fairly extensively.  For each of my guests/hosts I assign a 10GB LUN for the guest's boot/root filesystems, and separate LUN(s), sized appropriately, for its data volume(s).  I directly format each LUN, no partitions.  In the event I need to expand a LUN to increase capacity, xfs_growfs simply works, the way it should, with no hoop jumping.

If you need PIT snapshot capability you must use LVM, unless your storage has a PIT snapshot facility, or you use VMware's snapshot utility.
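
As a rough sketch of the LVM case, with the same hypothetical names as above, a point-in-time snapshot would look something like:

    lvcreate -s -n space_snap -L 5G /dev/vg_data/lv_space   # create the snapshot
    mount -o ro,nouuid /dev/vg_data/space_snap /mnt/snap    # XFS needs nouuid to mount a clone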

--
Stan






