Recommendations for busy static web server replacement

Hi all,

after being a silent reader for some time, and having had little success getting 
good performance out of our test set-up, I'm finally coming to the list with 
questions.

Right now, we are operating a web server serving 4MB files for a distributed 
computing project. Data is requested from all over the world at a rate of about 
650k to 800k downloads a day. Each data file is usually read only 2-3 times and 
is deleted again after some time. Typical data rates are therefore about 30MB/s, 
day in and day out, with a constant influx of new data at about half that rate. 
File deletion is also an ever-ongoing process.

Currently, a single machine with a hardware RAID6 and a 10TB xfs volume is 
serving these needs. But even after optimizing the IO scheduler we are hitting a 
limit here. Going to RAID10 would obviously be faster, but we are now aiming 
for a more scalable solution, e.g. glusterfs.

Our idea is to have a web server at the front which processes the requests and 
fetches the data from a storage pool in the background. If need be, we could 
then add more storage bricks for better scalability.
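Concretely (hostnames, volume and mount point below are placeholders, not our 
actual configuration), the front end would mount the volume via FUSE and serve 
straight from it:

```shell
# Placeholder names throughout; shown only to illustrate the intended layout.
# Mount the Gluster volume on the front-end web server via the FUSE client:
mount -t glusterfs gluster01:/test-volume /mnt/gluster

# nginx would then serve the 4MB data files directly from the mount, e.g.
#     location / { root /mnt/gluster; }
```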

Our current plan is a distributed/replicated set-up with 2n servers. For 
testing, I have the following systems available:

up to 10 servers with 12 SATA data disks each and md software RAID (the OS is 
on a SATA DoM), plus one "web server". All of these are connected via 10GbE. 
Each server has 16 or 24GB of RAM (currently limited to 500MB, as I don't want 
to test caching yet) and multiple cores (at least 4 cores @ 2GHz, plus HT).

For stress testing I can use a large number of computers which, e.g., use curl 
to download the files to /dev/null.
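For reference, the per-client download-and-verify step can be sketched like 
this (the server URL and file names are placeholders, not our actual hosts):

```shell
# Hypothetical client sketch: URL and file names are placeholders for
# illustration only.

# Verify a downloaded file against its md5 side file ("<hash>  <name>" format,
# as produced by md5sum); returns non-zero on mismatch.
verify_md5() {
    local dir="$1" name="$2"
    ( cd "$dir" && md5sum -c --status "$name.md5" )
}

# Fetch one data file plus its checksum file, then validate:
# curl -s -o /tmp/dl/file0001     http://server.example/data/file0001
# curl -s -o /tmp/dl/file0001.md5 http://server.example/data/file0001.md5
# verify_md5 /tmp/dl file0001
```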

I did a few tests, mostly to get used to glusterfs first, but so far 
performance has not been too good.

(1) Two servers, each with a RAID0 over all 12 disks and serving as a single 
storage brick in a simple replicated setup. The "web server" node mounted this 
via FUSE, and I created several thousand files at a rate of about 115MB/s. 
nginx then served files to 50 clients; each client downloaded 100 4MB files 
plus 100 small files holding the md5sums of the other files, for validation.

On average each client took about 7.25 minutes, and on the web server I only 
saw ~46MB/s throughput.

(2) The same two servers, now each exporting every disk on its own, i.e.

gluster volume create test-volume replica 2 transport tcp \
    $(for i in b c d e f g h i j k l m; do
          for n in 1 2; do echo -n "gluster0$n:/data-$i "; done
      done)

As expected, the overhead here is larger: initial file creation started slowly 
at 45MB/s and peaked around 105MB/s, and the 50 clients saw a total bandwidth 
of about 41MB/s.

(3) I ran other tests across all 10 backend bricks, but never got beyond 
~80MB/s, even when using a 4-disk RAID0 on each of the 10 servers.

All tests were run with glusterfs 3.2.5 on Debian Squeeze, md software RAID, 
and the xfs file system, with "default" settings for all of these except for 
using the deadline IO scheduler.
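For completeness, the scheduler was switched per data disk like this (the 
device name below is a placeholder):

```shell
# Device name is a placeholder; repeat for each of the 12 data disks.
echo deadline > /sys/block/sdb/queue/scheduler

# The currently active scheduler is shown in brackets:
cat /sys/block/sdb/queue/scheduler
```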

Now the big question: how can we get (much) better performance out of our 
next-generation production system? Since neither the disks nor the network is 
anywhere near saturated, and the bricks are pretty idle, I currently suspect 
that glusterfs needs some serious tuning here. On the web server I do see a 
single thread pinned at about 80% user CPU which never goes beyond that, plus 
about 10-15% system CPU usage. But so far, blindly poking around, I have yet 
to hit the right setting.
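The kind of knobs I have been turning look like this; the option names come 
from `gluster volume set help`, but the values shown are just examples, not 
settings I can recommend:

```shell
# Illustrative values only; I have not found a combination that helps yet.
gluster volume set test-volume performance.io-thread-count 16
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
```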

Ideally, I'd like a set-up where multiple relatively cheap computers with, say, 
4 disks each (in RAID0, RAID10, or no RAID at all) export their storage via 
glusterfs to our web server. Gluster's replication would serve as a kind of 
fail-safe net, and data redistribution would help when we later add more 
similar machines to counter increased usage.

Thanks a lot in advance for at least reaching the end of my email; any help is 
appreciated.

Cheers

Carsten

-- 
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6
CaCert Assurer | Get free certificates from http://www.cacert.org/

