Re: how best to set up for performance?

On Mar 16, 2008, at 12:46 PM, Anand Avati wrote:
> You might want to try specifying 'option window-size 2097152' in all
> your protocol (client and server) volumes.
>
> You might also want read-ahead on the server (over io-threads), with
> the same page-size as the client but a higher page-count.

Thanks for all the help. I turned on the window-size option, but specifying it in the client volumes slowed things right down; setting it only in the server volumes gave a speedup.
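
In translator terms, Anand's suggestion stacks up roughly like this (a minimal sketch, not my exact spec - the volume names 'ra' and 'iot' are illustrative, and the page-count is just the value I settled on below):

volume ra
  type performance/read-ahead
  option page-size 1MB        # same page-size as the client
  option page-count 512       # but a higher page-count than the client
  subvolumes iot              # 'iot' = an io-threads volume over storage/posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.ra.allow *
  option window-size 2097152  # server side only, per the results above
  subvolumes ra
end-volume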

I also discovered that in striped mode I was running at essentially the same speed as distinct GlusterFS volumes on each storage server. So, splitting things into two mount points on my client, each against one of the servers, doubled my aggregate performance to around 650 MB/s. I estimate I could double performance again if I could work out how all these settings interact.
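
Concretely, the two-mount-point setup is just two glusterfs client processes, one per spec file (paths are illustrative):

  glusterfs -f /etc/glusterfs/client-jr1.vol /mnt/jr1
  glusterfs -f /etc/glusterfs/client-jr2.vol /mnt/jr2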

Any other suggestions are appreciated - or if someone from zresearch would like to log in and take a look, I'd be delighted to set that up.

niall

server-jr1
------------
volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  option cache-size 12288MB
  subvolumes posix
end-volume

volume readahead-brick
  type performance/read-ahead
  option page-size 1MB
  option page-count 512
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server          # For TCP/IP transport
  option auth.ip.readahead-brick.allow *    # authorize the volume clients attach to
  option window-size 2097152
  subvolumes readahead-brick                # export the read-ahead stack, not the bare brick
end-volume


server-jr2
------------
volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  option cache-size 12288MB
  subvolumes posix
end-volume

volume readahead-brick
  type performance/read-ahead
  option page-size 1MB
  option page-count 512
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server          # For TCP/IP transport
  option auth.ip.readahead-brick.allow *    # authorize the volume clients attach to
  option window-size 2097152
  subvolumes readahead-brick                # export the read-ahead stack, not the bare brick
end-volume
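
Each server spec is loaded with glusterfsd (again, spec-file paths are illustrative):

  glusterfsd -f /etc/glusterfs/server-jr1.vol     # on jr1
  glusterfsd -f /etc/glusterfs/server-jr2.vol     # on jr2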


client-jr1
------------
volume jr1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.2
  option remote-subvolume readahead-brick   # attach to the server's read-ahead stack
end-volume

volume readahead-jr1
  type performance/read-ahead
  option page-size 1MB
  option page-count 64
  subvolumes jr1
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 1024MB
  subvolumes readahead-jr1
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iot
end-volume

volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 64
  subvolumes writebehind
end-volume

client-jr2
------------
volume jr2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.2.2
  option remote-subvolume readahead-brick   # attach to the server's read-ahead stack
end-volume

volume readahead-jr2
  type performance/read-ahead
  option page-size 1MB
  option page-count 64
  subvolumes jr2
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 1024MB
  subvolumes readahead-jr2
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iot
end-volume

volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 64
  subvolumes writebehind
end-volume






