Performance problem with 2.0.0rc1?

There seems to be a performance problem with GlusterFS 2.0.0rc1.

Please see the results of the following simple test:

$ dd if=/dev/zero of=hundred-meg-file count=100000 bs=1000

for 2.0.0rc1:
100000+0 records in
100000+0 records out
100000000 bytes (100 MB) copied, 283.346 s, 353 kB/s

for 1.3.12:
100000+0 records in
100000+0 records out
100000000 bytes (100 MB) copied, 3.16986 s, 31.5 MB/s

Both tests were run on the same pair of systems (2x Dell R200, dual-core
E3110 CPU, 2 GB RAM) linked via Gigabit Ethernet.
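
As a quick cross-check (a hypothetical follow-up, not part of the original
test), repeating the copy with a larger block size would show whether the
slowdown is specific to small writes, which the write-behind translator in
the client configuration below is meant to aggregate:

$ dd if=/dev/zero of=hundred-meg-file count=800 bs=128k

(128 KB x 800 is roughly the same 100 MB payload, but issued as 800 large
writes instead of 100,000 small ones.)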

2.0.0rc1 configuration (identical on both machines):
----- server ----------
volume posix
  type storage/posix
  option directory /glusterfs
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
-------------------------
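
For reference, the server was presumably started along these lines; the
volfile path is an assumption, not from the original report:

$ glusterfsd -f /etc/glusterfs/glusterfs-server.vol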

----- client ----------
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.3.1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.3.2
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
-------------------------
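
The client side would then be mounted with something like the following
(mount point and volfile path are again assumptions):

$ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs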


1.3.12 configuration (identical on both machines):
----- server ----------
volume posix
  type storage/posix
  option directory /glusterfs
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *
  subvolumes brick
end-volume
-------------------------

----- client ----------
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.2
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/afr
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
-------------------------
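
For completeness, 1.3.12 accepts the same -f flag (there pointing at the
spec file rather than a volfile), so both versions can be started and
mounted the same way (paths assumed as above):

$ glusterfsd -f /etc/glusterfs/glusterfs-server.vol
$ glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs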


