Re: Question regarding XFS on LVM over hardware RAID.

Quoting Stan Hoeppner (2014-02-18 18:07:24)
> Create each LV starting on a stripe boundary.  There will be some
> unallocated space between LVs.  Use the mkfs.xfs -d size= option to
> create your filesystems inside of each LV such that the filesystem total
> size is evenly divisible by the stripe width.  This results in an
> additional small amount of unallocated space within, and at the end of,
> each LV.

Of course, this occurred to me just after sending the message... ;)
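
For the archives, I assume the recipe is roughly this (device names
and sizes are invented, and a 4096k full stripe is assumed):

  # Align PV data and use a 4M extent size so every LV starts and
  # ends on a stripe boundary.
  pvcreate --dataalignment 4096k /dev/sdb
  vgcreate --physicalextentsize 4096k archive /dev/sdb

  # Make the LV, then size the filesystem itself with -d size= to an
  # exact multiple of the stripe width (10239g = 2621184 * 4096k),
  # leaving a sliver of slack at the end of the LV.
  lvcreate -L 10t -n store1 archive
  mkfs.xfs -d size=10239g /dev/archive/store1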

> It's nice if you can line everything up, but when using RAID6 and one or
> two bays for hot spares, one rarely ends up with 8 or 16 data spindles.
> 
> > If not, I'll tweak things to ensure my stripe width is a power of 2.
> 
> That's not possible with 12 data spindles per RAID, not possible with 42
> drives in 3 chassis.  Not without a bunch of idle drives.

The closest I can come is 4 RAID 6 arrays of 10 disks each (8 data
spindles apiece), then striped over:

8 data disks * 128k chunk = 1024k stripe width per array
1024k * 4 arrays = 4096k total stripe width

Which leaves me with 5 disks unused.  I might be able to live with that
if it makes things work better.  Sounds like I won't have to.
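
If I did go that route, I believe the alignment hints at mkfs time
would be the following (untested; the device name is invented):

  # 128k chunk, 4 arrays * 8 data spindles = 32 stripe units, for a
  # 4096k full stripe width.
  mkfs.xfs -d su=128k,sw=32 /dev/mapper/raid60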


> I still don't understand why you believe you need LVM in the mix, and
> more than one filesystem.
>
> Backup software is unaware of mount points.  It uses paths just like
> every other program.  The number of XFS filesystems is irrelevant to
> "minimizing the effects of the archive maintenance jobs".  You cannot
> bog down XFS.  You will bog down the drives no matter how many
> filesystems when using RAID60.

A limitation of the software in question is that placing multiple
archive paths onto a single filesystem is a bit ugly: the software does
not let you specify a maximum size for the archive paths, and so will
think all of them are the size of the filesystem.  This isn't an issue
in isolation, but we need to make use of a data-balancing feature the
software has, which will not work if we place multiple archive paths on
a single filesystem.  It's a stupid issue to have, but it is what it is.

> Here is what you should do:
> 
> Format the RAID60 directly with XFS.  Create 3 or 4 directories for
> CrashPlan to use as its "store points".  If you need to expand in the
> future, as I said previously, simply add another 14 drive RAID6 chassis,
> format it directly with XFS, mount it at an appropriate place in the
> directory tree and give that path to CrashPlan.  Does it have a limit on
> the number of "store points"?

Yes, this is what I *want* to do.  There's a limit to the number of
store points, but it's large, so this would work fine if not for the
multiple-stores-on-one-filesystem issue.  Which is frustrating.
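
(For reference, I take it the grow-by-chassis step would be no more
than the below, with invented device and paths, and assuming a 128k
chunk on a 14-drive RAID6, i.e. 12 data spindles:)

  mkfs.xfs -d su=128k,sw=12 /dev/sdc
  mkdir /srv/archive/store4
  mount /dev/sdc /srv/archive/store4
  # ...then hand /srv/archive/store4 to the software as a new store.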

The *only* reason for LVM in the middle is to allow some flexibility in
sizing without dealing with the annoyances of a partition table.
I want to intentionally under-provision to start with, because we are
using a small corner of this storage for a separate purpose but do not
yet know precisely how much of it we will need.  LVM lets me leave, say,
10TB empty until I know exactly how big things are going to be.
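
(When we do know, growing in place should just be the usual pair; the
mount point is hypothetical, and xfs_growfs takes -D <blocks> if I
want the grown size to stay an exact stripe multiple:)

  lvextend -L +10t /dev/archive/store1
  xfs_growfs /srv/archive/store1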

It's a pile of little annoyances, but so it goes with these kinds of things.

It sounds like the little-empty-spots method will be fine, though.

Thanks, yet again, for all your help.
--
Morgan Hamill
