Moving external storage between bricks

Excellent and clearly explained. Thanks Carl!

James Burnash, Unix Engineering
T. 201-239-2248 
jburnash at knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Craig Carl
Sent: Wednesday, December 01, 2010 12:19 PM
To: gluster-users at gluster.org
Subject: Re: Moving external storage between bricks

James -
    The setup you've described is pretty standard. If we assume that you 
are going to mount each array at /mnt/array{1-8}, that your volume will 
be called vol1, and that your servers are named server{1-4}, your 
gluster volume create command would be -

Without replicas -

#gluster volume create vol1 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 512TB NFS mount.
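If it helps, the 32-brick argument list above doesn't have to be typed by hand. Here's a sketch of a small shell loop that generates it in the same order (it assumes the server{1-4} and /mnt/array{1-8} names used in this thread):

```shell
# Build the brick list in the same order as the command above:
# for each array, list it on server1 through server4 before
# moving on to the next array.
bricks=""
for array in 1 2 3 4 5 6 7 8; do
    for server in 1 2 3 4; do
        bricks="$bricks server$server:/mnt/array$array"
    done
done
# $bricks starts with a space, so this prints a well-formed command.
echo "gluster volume create vol1 transport tcp$bricks"
```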

With replica 2 -

#gluster volume create vol1 replica 2 transport tcp server1:/mnt/array1 
server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1 
server1:/mnt/array2 server2:/mnt/array2 server3:/mnt/array2 
server4:/mnt/array2 server1:/mnt/array3 server2:/mnt/array3 
server3:/mnt/array3 server4:/mnt/array3 server1:/mnt/array4 
server2:/mnt/array4 server3:/mnt/array4 server4:/mnt/array4 
server1:/mnt/array5 server2:/mnt/array5 server3:/mnt/array5 
server4:/mnt/array5 server1:/mnt/array6 server2:/mnt/array6 
server3:/mnt/array6 server4:/mnt/array6 server1:/mnt/array7 
server2:/mnt/array7 server3:/mnt/array7 server4:/mnt/array7 
server1:/mnt/array8 server2:/mnt/array8 server3:/mnt/array8 
server4:/mnt/array8
This would get you a single 256TB HA NFS mount.
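One thing worth noting with the replica 2 layout: GlusterFS groups consecutive bricks in the create command into replica pairs, which is why the ordering above alternates servers before reusing an array. Here is a sketch of a sanity check (server/mount names are just the ones from this thread) that flags any pair whose two copies would land on the same server:

```shell
# Assumption: with "replica 2", consecutive bricks in the create
# command form a replica pair, so each pair should span two
# different servers; otherwise one server failure takes both
# copies of that pair offline.
bricks="server1:/mnt/array1 server2:/mnt/array1 server3:/mnt/array1 server4:/mnt/array1"
set -- $bricks
while [ "$#" -ge 2 ]; do
    host1=${1%%:*}
    host2=${2%%:*}
    if [ "$host1" = "$host2" ]; then
        echo "WARNING: pair $1 $2 is on a single server"
    fi
    shift 2
done
```

Run against the brick order from the command above, this prints nothing, which is what you want.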

Gluster specifically doesn't care about LUN/brick size; the ability to 
create smaller LUNs without affecting the presentation of that space is 
a positive side effect of using Gluster. Smaller LUNs are useful in 
several ways: fscks on an individual LUN, if one is ever required, 
finish faster, and since there is a minor performance hit to running 
bricks of different sizes in the same volume, small LUNs make it easier 
to keep brick sizes uniform.


Thanks,

Craig

-->
Craig Carl
Senior Systems Engineer
Gluster

On 12/01/2010 08:29 AM, Burnash, James wrote:
> Hello.
>
> So, here's my problem.
>
> I have 4 storage servers that will be configured as replicate + distribute, each of which has two external storage arrays, each with its own controller. Those external arrays will be used to store large (10GB) archived files that will be read-only after their initial copy to the glusterfs storage.
>
> Currently, the external arrays are the items of interest. What I'd like to do is this:
>
> - Create multiple hardware RAID 5 arrays on each storage server, which would present to the OS as approximately eight 16TB physical drives.
> - Create an ext3 file system on each of those devices (I'm using CentOS 5.5, so ext4 is still not really an option for me).
> - Mount those multiple file systems on the storage server, and then aggregate them all under Gluster to export under a single namespace to NFS and the Gluster client.
>
> How do I aggregate those multiple file systems without involving LVM in some way?
>
> I've read that Glusterfs likes "small" bricks, though I haven't really been able to track down why. Any pointers to good technical info on this subject would also be greatly appreciated.
>
> Thanks,
>
> James Burnash, Unix Engineering
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

