Hi Amar,

>> 1. While learning GlusterFS, I was wondering if it's possible to add
>> server volumes to increase the space capacity of my cluster "on the fly"?
>> I mean, a hot upgrade.
>
> Hope you checked the Roadmap, we have it coming in the 1.5 release (planned
> to be ready by year-end).

Didn't check it yet. I will.

>> 2. Second, when using a "file replication strategy" (scheduler), is it
>> possible to remove a server node without stopping the whole cluster
>> (e.g. for hardware maintenance reasons, adding more disk/RAM to the node ...)?
>
> Scheduler is not the term you may be looking for... check AFR (automatic file
> replication), and yes, if you are using it, you can take out the node while
> the cluster is active.

Oops. You're right. AFR is better.

>> 3. Finally, what are the best volume specifications for writing a huge
>> number (hundreds of thousands) of big files (1 GB in size on average) on
>> my cluster?
>> I have an application which produces these big files at a regular rate
>> (let's say, 1 file per minute) and I'd like to know if a good volume
>> specification can handle this amount of data?
>
> Well, at the rate of 1GB per minute, you will hit the disk speed, hence
> stripe is the better option. But with the Stripe translator you will not be
> able to take out the nodes.

My process is creating N (big) files per minute, no more (N is usually equal
to 2 or 4). I'm not considering the Stripe translator; AFR will do the job
for me, I think.

What I'd like to set up is a specification which will replicate 3 copies of
these big files as fast as possible. So, is it possible to have a mixed
strategy (AFR translator, ALU scheduler and unify) which spreads exactly 3
copies of each file over K servers (K=10, but could be more)?

I saw an AFR translator option in the wiki to specify the number of copies:

  option replicate *:a_number

Is it correct to use this option when the number of servers K (i.e. 10) is
greater than the number of copies (i.e. 3)? Or is this option deprecated in
newer versions (> 1.3.7)? If so, how can I achieve this, please? (I've
sketched the kind of spec I had in mind in a P.S. below.)

Thanks again,
cheers,
F.
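
P.S. To make the question concrete, here is roughly the client-side spec I
had in mind. This is only a sketch based on the 1.3 wiki examples; the
translator and option names (cluster/afr, "option replicate", the server
names, the "brick" subvolume), and the unify/ALU layer I left out entirely,
may well be wrong or deprecated, which is exactly what I'm asking about.

  # One protocol/client volume per storage server.
  # client2 .. client10 are defined the same way, pointing at server2 .. server10.
  volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1        # hostname of the first storage server
    option remote-subvolume brick     # whatever volume that server exports
  end-volume

  # AFR over all K=10 servers, hoping that "replicate *:3" keeps exactly
  # 3 copies of every file instead of replicating to all 10 subvolumes.
  volume afr0
    type cluster/afr
    option replicate *:3              # the wiki option I am asking about
    subvolumes client1 client2 client3 client4 client5 client6 client7 client8 client9 client10
  end-volume

If this pattern-based replicate option is gone in current releases, is the
right approach instead to build several smaller AFR groups of 3 servers each
and unify them with the ALU scheduler on top?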