Re: nodes don't use swap

Hi,

Thanks.

I'm running a mail system in which Dovecot and Postfix servers share storage over GlusterFS, with 6 nodes running the shared storage backend. The FUSE version is fuse-2.7.2glfs8 and the GlusterFS version is mainline--2.5, patch 690.

The config file on the storage nodes is:

**********
**********

volume esp
   type storage/posix
   option directory /mnt/compartit
end-volume

volume espa
   type features/posix-locks
   subvolumes esp
end-volume

volume espai
  type performance/io-threads
  option thread-count 15
  option cache-size 512MB
  subvolumes espa
end-volume

volume nm
   type storage/posix
   option directory /mnt/namespace
end-volume

volume ultim
   type protocol/server
   subvolumes espai nm
   option transport-type tcp/server
   option auth.ip.espai.allow *
   option auth.ip.nm.allow *
end-volume

**********
**********

The Dovecot servers have:

*********
*********

volume espai1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume espai
end-volume

volume espai2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume espai
end-volume

volume espai3
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.206
   option remote-subvolume espai
end-volume

volume espai4
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.207
   option remote-subvolume espai
end-volume

volume espai5
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.213
   option remote-subvolume espai
end-volume

volume espai6
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.214
   option remote-subvolume espai
end-volume

volume namespace1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume nm
end-volume

volume namespace2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume nm
end-volume

volume gru1
   type cluster/afr
   subvolumes espai1 espai2
end-volume

volume grup1
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru1
end-volume

volume gru2
   type cluster/afr
   subvolumes espai3 espai4
end-volume

volume grup2
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru2
end-volume

volume gru3
   type cluster/afr
   subvolumes espai5 espai6
end-volume

volume grup3
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru3
end-volume

volume nm
   type cluster/afr
   subvolumes namespace1 namespace2
end-volume

volume ultim
   type cluster/unify
   subvolumes grup1 grup2 grup3
   option scheduler rr
   option namespace nm
end-volume


*********
*********
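
As a side note on memory: the caches configured in these spec files already add up to a fair share of each node's 2GB. A rough tally (assuming each cache-size option acts as an upper bound per translator instance, not a preallocation):

```shell
# Back-of-envelope tally of the cache ceilings configured above
# (assumption: cache-size is an upper bound per translator instance).
client_io_cache_mb=$((3 * 64))   # grup1..grup3: three io-cache volumes, 64MB each
server_io_threads_mb=512         # io-threads cache-size on every storage node
echo "client-side io-cache ceiling: ${client_io_cache_mb} MB"
echo "server-side io-threads ceiling per node: ${server_io_threads_mb} MB"
```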

and finally the Postfix servers have:

********
********

volume espai1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume espai
end-volume

volume espai2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume espai
end-volume

volume espai3
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.206
   option remote-subvolume espai
end-volume

volume espai4
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.207
   option remote-subvolume espai
end-volume

volume espai5
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.213
   option remote-subvolume espai
end-volume

volume espai6
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.214
   option remote-subvolume espai
end-volume

volume namespace1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume nm
end-volume

volume namespace2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume nm
end-volume

volume gru1
   type cluster/afr
   subvolumes espai1 espai2
end-volume

volume grup1
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru1
end-volume

volume gru2
   type cluster/afr
   subvolumes espai3 espai4
end-volume

volume grup2
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru2
end-volume

volume gru3
   type cluster/afr
   subvolumes espai5 espai6
end-volume

volume grup3
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru3
end-volume

volume nm
   type cluster/afr
   subvolumes namespace1 namespace2
end-volume

volume ultim
   type cluster/unify
   subvolumes grup1 grup2 grup3
   option scheduler rr
   option namespace nm
end-volume

********
********

All machines are virtual, running under Xen 3.2.0.

I'm now running some tests to see where the bottlenecks are. For example, I've tried sending an email to the system every second while also checking some mailboxes every minute, and that works just fine. I think I could still improve performance, but in that scenario no part of Gluster ever runs out of memory or CPU.

However, when I run a disk benchmark such as ddt, postmark or bonnie, some nodes simply run out of everything: CPU, memory, etc. Each node has 2 GB of RAM and 4 GB of swap, but the swap is never used. It also looks like the nodes only use one of their two CPUs, and I don't think this is a Xen problem, because I've used this type of setup with Xen before and some software does use more than one CPU.
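
In case it helps, this is roughly what I sample on a node while a benchmark runs. It's only a sketch: the process name glusterfsd and the /proc layout are assumptions about these particular boxes.

```shell
# Sample swap usage and glusterfs memory while a benchmark runs.
# Assumptions: Linux /proc layout; the daemon runs as "glusterfsd".

swap_used_kb() {
    # SwapTotal minus SwapFree from /proc/meminfo, in kB
    awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' /proc/meminfo
}

gluster_rss_kb() {
    # resident set size summed over all glusterfsd processes (0 if none)
    ps -C glusterfsd -o rss= 2>/dev/null | awk '{s += $1} END {print s + 0}'
}

cat /proc/sys/vm/swappiness   # a value of 0 tells the kernel to avoid swap
echo "swap used:      $(swap_used_kb) kB"
echo "glusterfsd RSS: $(gluster_rss_kb) kB"
```

A swappiness of 0 would at least explain the kernel never touching swap; on these nodes it is at the default, though.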

If you need any further information, just let me know.

Thanks.

Basavanagowda Kanur wrote:
Jordi,
  We would like to know more details about your setup to understand what is
causing the bottleneck.

Please post the spec files that you are using.

--
Gowda

On Tue, Mar 11, 2008 at 5:22 PM, Jordi Moles <jordi@xxxxxxxxx> wrote:

hi everyone,

i'm stress-testing a glusterfs system i've set up. i've given 2GB of RAM
to every node and 4GB of swap. now i've got the system totally
stressed :) but the nodes don't seem to be able to use swap. is that
normal? can i change anything to make gluster use swap?

I've tried ddt, postmark and bonnie to create thousands of files and see
how the system reacts, and the bottleneck so far is the nodes' RAM.
They eat the 2GB they have and don't seem to be able to use swap.

The nodes also have two processors, and i would like to know whether
gluster can make use of both or is limited to one cpu.

thanks.


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
