Re: raid5/lvm setup questions

On Sat, Aug 05, 2006 at 06:31:37PM +0100, David Greaves wrote:
> > Say going from 300gbx4 to 500gbx4.  Can one replace them
> > one at a time, going through fail/rebuild as appropriate
> > and then expand the array into the unused space
> Yes.

I didn't see anything in the mdadm manual on this.  Would
one just do a --grow /dev/md0 once the disks were changed
out?  It looks like --grow is used to change the number of
devices in the array but not the device size itself.
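
For the record, here's roughly the procedure I had in mind.
This is just a sketch, and I'm assuming --grow really does
accept --size=max for growing into the larger devices
(please correct me if not):

  # repeat for each member disk, waiting for the rebuild
  # to finish before touching the next one
  mdadm /dev/md0 --fail /dev/sda1
  mdadm /dev/md0 --remove /dev/sda1
  # swap in the 500gb disk, partition it, then re-add it
  mdadm /dev/md0 --add /dev/sda1

  # once all four disks are replaced, grow the array into
  # the newly available space
  mdadm --grow /dev/md0 --size=max

  # then grow whatever sits on top of md0
  pvresize /dev/md0      # if it's an LVM physical volume
  resize2fs /dev/md0     # if ext3 sits directly on the array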

> That's not to say don't do it - but you certainly don't
> *need* to do it.

Well, the reason I was looking at LVM is that since this
is a fairly big array, I didn't want to lose a bunch of
space to ext3 inodes.  For example, the PostgreSQL
partition could allocate fewer inodes, say one for every
1mb, whereas a partition with many tiny files could go
with a 1k FS block size so as not to waste too much disk
space.  Just a way to tune the FS storage per major
application, really.  I haven't done performance tests
for layering LVM over md but I'm sure others have.  Will
search the archives on that.
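
To make that concrete, something along these lines is what
I was picturing (the VG/LV names are just placeholders, and
the mke2fs options are from memory, so check the man page
before trusting them):

  # per-application logical volumes carved out of the
  # md-backed volume group
  lvcreate -L 200G -n pgsql vg0
  lvcreate -L 20G -n smallfiles vg0

  # PostgreSQL: few large files, so roughly one inode per 1mb
  mke2fs -j -i 1048576 /dev/vg0/pgsql

  # lots of tiny files: 1k blocks to cut the wasted slack space
  mke2fs -j -b 1024 /dev/vg0/smallfiles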

Thanks for the info though, it was very helpful.

Shane


-- 
http://www.cm.nu/~shane/