Re: Performance problems in our web server setup

Actually, having io-threads as the 'bottommost' translator is the preferred mode.
io-threads is meant to push blocking operations (disk read, disk write,
network write) into a separate thread, so that the 'logic' translators (read-ahead,
io-cache, write-behind) work over a non-blocking base. A thread-count of 16
might be a bit high, though.
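
For illustration, a minimal sketch of that ordering against the client spec below (same volume names as your posted config; the thread-count of 4 is only an example value, not a tuned recommendation):

volume iothreads
  type performance/io-threads
  option thread-count 4        # example value; 16 is likely more than needed
  subvolumes afrbricks         # bottommost: sits directly over AFR
end-volume

volume writeback               # the 'logic' translators stack on top
  type performance/write-behind
  option aggregate-size 0
  subvolumes iothreads
end-volume

volume bricks
  type performance/read-ahead
  option page-size 65536
  option page-count 16
  subvolumes writeback
end-volume

This is essentially the layout you already have, just with a smaller thread pool.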

thanks,
avati

2007/7/24, Harris Landgarten <harrisl@xxxxxxxxxxxxx>:

Bernhard,

Try moving io-threads to the end of the chain in your client spec. Gluster
reads the spec bottom-up, so that will put io-threads at the top. You
should also test with and without read-ahead; as it has been explained to me,
read-ahead only helps on InfiniBand or other very fast connections that can swamp the
processor. In my app, read-ahead's overhead was actually slowing things down.
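
As a sketch of what I mean, using the volume names from your posted client spec (io-threads defined last so it ends up topmost; rewiring the subvolumes this way is my reading of the intent, not a tested config):

volume writeback
  type performance/write-behind
  option aggregate-size 0
  subvolumes afrbricks         # write-behind now sits directly over AFR
end-volume

volume bricks
  type performance/read-ahead
  option page-size 65536       # unit in bytes
  option page-count 16         # cache per file = (page-count x page-size)
  subvolumes writeback
end-volume

volume iothreads               # defined last in the spec => top of the chain
  type performance/io-threads
  option thread-count 8
  subvolumes bricks
end-volume

To test without read-ahead, point the iothreads subvolumes line at writeback instead and drop the bricks volume.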

I have been on 2.5 for a while, so I am not sure whether all of this translates
to 2.4, but if you are going to stay on 2.4 you should update to patch-184.
Many bugs were fixed, and if I remember correctly, quite a few of them were
io-threads fixes.

Harris

----- Original Message -----
From: "Bernhard J. M. Grün" <bernhard.gruen@xxxxxxxxxxxxxx>
To: gluster-devel@xxxxxxxxxx
Sent: Tuesday, July 24, 2007 10:22:36 AM (GMT-0500) America/New_York
Subject: Performance problems in our web server setup

Hello!

We are experiencing some performance problems with our setup at the moment,
and we would be happy if one of you could help us out.
This is our setup:
Two clients connect to two servers that share the same data via AFR.
The two servers hold about 13,000,000 small image files that are
served to the web through the two clients.
First I'll show you the configuration of the servers:
volume brick
  type storage/posix                   # POSIX FS translator
  option directory /media/storage       # Export this directory
end-volume

volume iothreads    #iothreads can give performance a boost
   type performance/io-threads
   option thread-count 16
   subvolumes brick
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option listen-port 6996              # Default is 6996
  option client-volume-filename /opt/glusterfs/etc/glusterfs/client.vol
  subvolumes iothreads
  option auth.ip.iothreads.allow * # Allow access to "iothreads" volume
end-volume

Now the configuration of the clients:
### Add client feature and attach to remote subvolume
volume client1
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.1.1.13     # IP address of the remote brick
  option remote-port 6996              # default server port is 6996
  option remote-subvolume iothreads        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume
volume client2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.1.1.14     # IP address of the remote brick
  option remote-port 6996              # default server port is 6996
  option remote-subvolume iothreads        # name of the remote volume
end-volume

volume afrbricks
  type cluster/afr
  subvolumes client1 client2
  option replicate *:2
end-volume

volume iothreads    #iothreads can give performance a boost
   type performance/io-threads
   option thread-count 8
   subvolumes afrbricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 0  # unit in bytes
  subvolumes iothreads
end-volume

### Add readahead feature
volume bricks
  type performance/read-ahead
  option page-size 65536     # unit in bytes
  option page-count 16       # cache per file  = (page-count x page-size)
  subvolumes writeback
end-volume

We use Lighttpd as the web server to handle the traffic, and image
loading seems quite slow. The bandwidth used between one client and its
corresponding AFR server is also low: about 12 Mbit/s over a
1 Gbit/s line. So there must be a bottleneck in our configuration.
Maybe you can help us.
We are currently running 1.3.0 (mainline--2.4 patch-131). We can't
easily switch to mainline--2.5 right now because the servers are
under high load.

We have also seen that each client uses only one connection to each
server. In my opinion this means the io-threads subvolume on the
client is (nearly) useless. Wouldn't it be better to establish more
than one connection to each server?

Many thanks in advance

Bernhard J. M. Grün


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel





--
Anand V. Avati

