Re: how best to set up for performance?

Hi Amar,

This is certainly an improvement -  thank you for the suggestions.


> Can you try with these spec files and let me know the results? Also, my doubt is: if you have 10Gig/E, how will you get 1.5GB/s from a single client?


I have two distinct 10GigE interfaces in the client machine, one for each storage server. Traffic over one should, in theory, have no effect on traffic over the other, though of course there could be some interference somewhere within the OS on the client machine (a quick route check is shown after the readahead definitions below). To your definitions I needed to add volumes for readahead-jr1 and readahead-jr2, so I used the same settings
as you had used for the readahead volume.

volume readahead-jr1
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes jr1
end-volume

volume readahead-jr2
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes jr2
end-volume
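
To confirm that each server really is being reached over its own interface, a quick check along these lines should do (the exact interface names will of course differ from machine to machine):

root@caneland:~# ip route get 192.168.3.2   # should report the NIC dedicated to server 1
root@caneland:~# ip route get 192.168.2.2   # should report the NIC dedicated to server 2
root@caneland:~# ip -s link                 # per-interface byte counters, handy during a test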

This gives much more respectable performance:

root@caneland:/etc/glusterfs# dd if=/dev/zero of=/mnt/stripe/big.file bs=1M count=80000
80000+0 records in
80000+0 records out
83886080000 bytes (84 GB) copied, 210.761 seconds, 398 MB/s

root@caneland:/etc/glusterfs# dd if=/mnt/stripe/big.file of=/dev/null bs=1M
80000+0 records in
80000+0 records out
83886080000 bytes (84 GB) copied, 206.691 seconds, 406 MB/s

There should still be plenty of headroom here - this works out to roughly 200MB/s per server out of perhaps 700MB/s. Pardon the repetition, but I include the full specs below for completeness.
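
To see where the remaining bottleneck is, watching per-interface and per-disk throughput while the dd runs should help (this assumes the sysstat tools are installed on both client and servers):

root@caneland:~# sar -n DEV 1    # per-interface rx/tx throughput, 1-second samples
root@caneland:~# iostat -x 1     # per-disk throughput and utilisation, run on each server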

niall


server 1:

volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume


server 2:


volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume


client:

volume jr1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.2
  option remote-subvolume brick
end-volume

volume jr2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.2.2
  option remote-subvolume brick
end-volume

volume readahead-jr1
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes jr1
end-volume

volume readahead-jr2
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes jr2
end-volume

volume stripe0
  type cluster/stripe
  option block-size *:1MB
  subvolumes readahead-jr1 readahead-jr2
end-volume

volume iot
  type performance/io-threads
  subvolumes stripe0
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iot
end-volume

volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes writebehind
end-volume
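
For the record, these specs are loaded in the usual way - something along these lines, with the spec-file names being whatever you have saved them as:

# on each server
glusterfsd -f /etc/glusterfs/glusterfs-server.vol

# on the client
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/stripe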




