System Requirements (and mild rant re: lack-of-documentation)


 



Jonathan - 
I think this might be the first time I've ever been called a "critter"! :) All of your points are completely valid; thanks for the feedback. I believe we have addressed the Glossary and Technical Concepts requirements with the Introduction to Gluster document. I will update the wiki to include a link to the document; please give it a look and let me know what you think. 
You would need to add a Linux server in front of your storage devices. You could add a single server and attach all 4 storage devices to it, but the performance wouldn't be great; that would be considered a single storage server. If you attached each storage device (1 SATAbeast, 1 SATAboy, 2 FalconStors) to its own Linux server, that would count as 4 storage servers. Gluster would then present all of the blocks from all of the devices as a single 84TB NAS filesystem (NFS, CIFS, DAV, GlusterFS, etc.). A toy sketch of that aggregation follows. 
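For illustration only, here is a minimal Python sketch of the idea: each file lives whole on exactly one storage server, while the namespace and capacity aggregate across all four. The brick paths and the hash below are made up for the example; Gluster's real placement uses its elastic hashing translator, not Python's crc32.

import zlib

# Hypothetical brick paths, one Linux server per storage device.
bricks = [
    "server1:/export/satabeast",
    "server2:/export/sataboy",
    "server3:/export/falconstor1",
    "server4:/export/falconstor2",
]

def place(filename):
    # Simplified stand-in for Gluster's elastic hash: every file is
    # assigned, whole, to exactly one brick.
    return bricks[zlib.crc32(filename.encode()) % len(bricks)]

for name in ["genome.dat", "results.tar", "notes.txt"]:
    print(name, "->", place(name))

Clients see one filesystem; under the hood each file's I/O goes to the single server that holds it, which is also why a single file's write speed is bounded by one server.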



Please let me know if you have any other questions. 

Thanks, 

Craig 

-- 
Craig Carl 

Sales Engineer, Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Office - (408) 770-1884 
Gtalk - craig.carl at gmail.com 
Twitter - @gluster 
Installing Gluster Storage Platform, the movie! 
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/ 




From: "Jonathan B. Horen" <jbhoren at alaska.edu> 
To: "Craig Carl" <craig at gluster.com> 
Sent: Monday, September 27, 2010 12:30:32 PM 
Subject: Re: System Requirements (and mild rant re: lack-of-documentation) 

Craig, Shalom! 

Thank you very much for replying -- "Sales Critter" ID notwithstanding, I appreciate your taking the time, and I'm grateful for the information you've shared with me. I have some questions, which I'll ask in the context of your reply. 

Thanks in advance for your assistance. 

--Jonathan B. Horen 


On Sat, Sep 25, 2010 at 6:38 PM, Craig Carl <craig at gluster.com> wrote: 




Jonathan - 
We are actively working on the documentation; an entirely new set will be released with Gluster 3.1, currently in beta. In the meantime you can find the system requirements here - 
http://www.gluster.com/community/documentation/index.php/Storage_Server_Installation_and_Configuration#Technical_Requirements 


A Glossary would be very welcome. For example, I was going nuts over the abbreviation "AFR" -- it's used frequently, and until I ran a Google search on "AFR site:gluster.org", I didn't know what it meant. Glossaries are, by definition, user-friendly and an important pre-sales tool. 

Also, a Technical Concepts document is vital. It could/should be hyperlinked to the Glossary, and ought to contain lots of block diagrams, flow charts, etc. -- visual aids reinforce and "anchor" text-based descriptions/discussions. 




I'd also suggest the Introduction to Gluster document - 
http://ftp.gluster.com/pub/gluster/documentation/Introduction_to_Gluster.pdf 
We also have a Gluster Architecture document - 
http://ftp.gluster.com/pub/gluster/documentation/Gluster_Architecture.pdf 

Gluster has standardized on some language recently: a 'node' or a 'brick' is a single host with block storage that Gluster will add to a cluster and export via a NAS protocol. We call it a 'storage server'. 


OK. Let's say I've got a NexSan SATAbeast storage unit, with 32T of disk space, configured as a single RAID-6 volume (or, alternatively, as four 8T volumes). In the first instance, is it a single "brick"? In the second instance, is it four separate "bricks"? 

We're using a SATAbeast and SATAboy (10T) via iSCSI over 1Ge -- how do we integrate these SAN storage units into a Gluster setup? Can we? The same question for two FalconStor storage units (32T each). 

Given that you refer to a host+block_storage as a "storage server", how could (would?) these storage units be used? 




You will need some number of 'storage servers' to meet your performance requirements. The general guidelines for determining the performance of a single storage server are (a rough worked example follows the list): 

1. For files > 64KB, Gluster will run at hardware speeds. 
2. In 90% of Gluster environments, the hardware limitation is network I/O. 
3. For files < 64KB, CPU speed becomes more relevant, and performance may be reduced to ~60% of hardware performance due to context switching. This is a general NAS issue, not specifically a Gluster problem. 
4. If you use the GlusterFS client and set up a mirror, write performance will be reduced by ~50%. 
5. Write performance of a single file is limited to the performance of the storage server on which the file physically exists. 
6. Memory is used as read cache. 
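As a rough worked example in Python -- every number below is an assumption for illustration, not a measurement -- here is how those guidelines combine for a hypothetical 4-server cluster on gigabit Ethernet:

# All inputs are assumed values, not benchmarks.
nic_mb_s = 110.0          # ~1GbE wire speed per storage server (point 2)
servers = 4               # storage servers in the cluster
small_file_factor = 0.60  # files < 64KB: ~60% of hardware speed (point 3)
mirror_factor = 0.50      # mirrored writes via the GlusterFS client (point 4)

print("large-file reads  :", nic_mb_s * servers, "MB/s aggregate")
print("small-file I/O    :", nic_mb_s * servers * small_file_factor, "MB/s aggregate")
print("mirrored writes   :", nic_mb_s * servers * mirror_factor, "MB/s aggregate")
print("single-file writes:", nic_mb_s, "MB/s max, one server (point 5)")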

We generally test the theoretical maximum speeds of a storage server by running iozone locally on the server without using Gluster, and then over the network in clustered mode from at least 4 clients. While the iozone tests are running, you should monitor resources on the storage server to find your bottleneck. If the performance of the second test matches the performance of the first, you can begin to scale up the number of storage servers to meet your total performance requirements. 
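If it helps, here is one way that comparison could be scripted in Python. The iozone flags, file size, and paths below are assumptions to adapt to your environment; it assumes iozone is installed on the server and the clients.

import subprocess

def iozone_run(path):
    # Sequential write (-i 0) and read (-i 1) on a single test file.
    cmd = ["iozone", "-i", "0", "-i", "1", "-r", "128k", "-s", "4g", "-f", path]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# 1. Baseline: directly against the brick filesystem on the storage server.
local = iozone_run("/export/brick1/iozone.tmp")

# 2. Repeat from (at least 4) clients against the Gluster mount.
remote = iozone_run("/mnt/gluster/iozone.tmp")

# Compare the KB/s columns; if the networked numbers match the local
# ones, the server is not the bottleneck and you can scale out.
print(local)
print(remote)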

I hope I've answered your questions; please let me know if you have more, I'm happy to help. We also offer a 60-day POC that includes support; please let me know if you think it might be helpful for you - http://www.gluster.com/products/poc.php 



Thanks, 

Craig 

-- 
Craig Carl 

Sales Engineer, Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Office - (408) 770-1884 
Gtalk - craig.carl at gmail.com 
Twitter - @gluster 
Installing Gluster Storage Platform, the movie! 
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/ 



From: "Jonathan B. Horen" < jbhoren at alaska.edu > 
To: gluster-users at gluster.org 
Sent: Friday, September 24, 2010 2:04:39 PM 
Subject: System Requirements (and mild rant re: lack-of-documentation) 




What little documentation there is for Gluster doesn't include anything 
regarding system requirements -- minimum or recommended. 

We're trying to spec out a setup using four SAN storage units: two NexSans 
(30T SATAbeast and 10T SATAboy) and two FalconStors (32T NSS650 and 32T 
NSS620), all connected via iSCSI. These supply storage for two RHEL5 compute 
clusters (17-node Penguin and 3-node IBM x3950m2) and a 300+ user secure file 
share. 

Without "real" documentation -- conceptual and otherwise -- it's damn 
difficult to decide. Is Gluster cpu-intensive, memory heavy, or a mix of the 
two? And bricks! Is a 10T SAN unit a "brick", or a "block" (or several 
"bricks")? and does it matter? How many bricks can/could/should a Gluster 
server node serve? 

I could spend weeks searching through the mailing-list archives, but that's 
what documentation is for. Frankly, we're at a loss. 

Is there information to be found, and, if so, where is it? 

-- 
JONATHAN B. HOREN 
Systems Administrator 
UAF Life Science Informatics 
Center for Research Services 
(907) 474-2742 
jbhoren at alaska.edu 
http://biotech.inbre.alaska.edu 

_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 



-- 
JONATHAN B. HOREN 
Systems Administrator 
UAF Life Science Informatics 
Center for Research Services 
(907) 474-2742 
jbhoren at alaska.edu 
http://biotech.inbre.alaska.edu 

