Re: GlusterFS Performance tuning

Hello,

Is the workload mostly lots of small files, or mainly big ones?

When mounting from the client, I add direct-io-mode=disable in fstab.
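
For example, an fstab entry could look like this (server name, volume, and mount point here are illustrative):

storage1:/gtower  /mnt/gtower  glusterfs  defaults,_netdev,direct-io-mode=disable  0 0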

sysctl settings I use on both clients and servers (I don't have a 10G network, so these numbers may not be big enough for one):
vm.swappiness=0
net.core.rmem_max=67108864
net.core.wmem_max=67108864
# increase Linux autotuning TCP buffer limit to 32MB
net.ipv4.tcp_rmem=4096 87380 33554432
net.ipv4.tcp_wmem=4096 65536 33554432
# increase the length of the processor input queue
net.core.netdev_max_backlog=30000
# recommended default congestion control is htcp
net.ipv4.tcp_congestion_control=htcp
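
To make these persistent, I put them in a file under /etc/sysctl.d/ and reload (the file name is just an example; htcp usually needs its kernel module loaded first):

modprobe tcp_htcp                          # provides the htcp congestion control
sysctl -p /etc/sysctl.d/90-gluster.conf    # file containing the settings above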

Options I have reconfigured for small-file tuning:
performance.cache-size: 1GB
nfs.disable: on
performance.client-io-threads: on
performance.io-cache: on
performance.io-thread-count: 16
performance.readdir-ahead: enable
performance.read-ahead: disable
server.allow-insecure: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
cluster.readdir-optimize: on
performance.write-behind-window-size: 1MB
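
These are applied per volume with the gluster CLI, for example (using your volume name from below; repeat for each option):

gluster volume set gtower performance.cache-size 1GB
gluster volume set gtower client.event-threads 4
gluster volume set gtower performance.io-thread-count 16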

Again, you should experiment, including with bigger numbers, depending on your hardware specs.

Did you set the MTU to 9000 (jumbo frames)?
Did you set the BIOS power profile to high/static performance mode (the name depends on the vendor)?
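
For the MTU, you can set and verify jumbo frames like this (interface name and peer IP are placeholders; the switch ports must also allow 9000):

ip link set eth0 mtu 9000
ping -M do -s 8972 192.0.2.1   # 8972 + 28 bytes of headers = 9000; must pass without fragmenting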



Regards,
Mathieu CHATEAU
http://www.lotp.fr

2015-11-25 15:44 GMT+01:00 Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] <uthra.r.rao@xxxxxxxx>:

Thank you all for taking the time to reply to my email:

 

Here is some more information on our setup:

- Number of Nodes -> 2 Gluster servers and 1 client for testing. After testing we will mount the GlusterFS volume on 3 clients.

- CPU & RAM on Each Node -> 2 CPUs at 3.4GHz, 384GB RAM on each Gluster server

- What else is running on the nodes -> Nothing; it is only our data server

- Number of bricks -> Two

- output of "gluster  volume info" & "gluster volume status"

 

Storage server1:

# gluster  volume info gtower

Volume Name: gtower

Type: Replicate

Volume ID: 838ab806-06d9-45c5-8d88-2a905c167dba

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: storage1.sci.gsfc.nasa.gov:/tower7/gluster1/brick

Brick2: storage2.sci.gsfc.nasa.gov:/tower8/gluster2/brick

Options Reconfigured:

nfs.export-volumes: off

nfs.addr-namelookup: off

performance.readdir-ahead: on

performance.cache-size: 2GB

 

-----------------------------------------------------------

 

Storage server 2:

# gluster  volume info gtower

Volume Name: gtower

Type: Replicate

Volume ID: 838ab806-06d9-45c5-8d88-2a905c167dba

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: storage1.sci.gsfc.nasa.gov:/tower7/gluster1/brick

Brick2: storage2.sci.gsfc.nasa.gov:/tower8/gluster2/brick

Options Reconfigured:

nfs.export-volumes: off

nfs.addr-namelookup: off

performance.readdir-ahead: on

performance.cache-size: 2GB

 

-------------------------------------------------------------------------

 

We have made a pool of 6 raidz3 vdevs, each consisting of 12 (6TB) drives, and assigned one 200GB SSD drive for ZFS caching.

 

Our attached storage has 60 (6TB) drives, for which I have set up multipathing. We are also using 12 drives in the server, for which I have set up vdevs. So we are using 60+12 = 72 drives for ZFS (raidz3).
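
Roughly, the pool layout looks like this (pool and device names here are placeholders; there are six such 12-drive raidz3 vdevs in total):

zpool create tower7 \
    raidz3 mpatha mpathb mpathc mpathd mpathe mpathf mpathg mpathh mpathi mpathj mpathk mpathl \
    cache sdx   # the 200GB SSD used for ZFS caching (L2ARC)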

 

 

If you have any other suggestions based on our configuration please let me know.

 

Thank you.

Uthra

 



 

 

 

From: Gmail [mailto:b.s.mikhael@xxxxxxxxx]
Sent: Tuesday, November 24, 2015 4:50 PM
To: Pierre MGP Pro
Cc: Lindsay Mathieson; Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]; gluster-users@xxxxxxxxxxx
Subject: Re: GlusterFS Performance tuning

 

you can do the following:

 

# gluster volume set $vol performance.io-thread-count 64

Today's CPUs are powerful enough to handle 64 threads per volume.

 

# gluster volume set $vol client.event-threads XX

XX depends on the number of connections from the FUSE client to the server; you can get this number by running netstat, grepping for the server IP, and counting the connections.
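
For example, on the client (the server IP is a placeholder):

netstat -tn | grep 192.0.2.10 | grep ESTABLISHED | wc -l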

 

# gluster volume set $vol server.event-threads XX

 

XX depends on the number of connections from the server to the client(s); you can get this number by running netstat, grepping for "gluster", and counting the connections.
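
For example, on the server:

netstat -tnp | grep gluster | grep ESTABLISHED | wc -l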

 

Also, you can follow the instructions on the following page:

 

 

-Bishoy

On Nov 24, 2015, at 1:31 PM, Pierre MGP Pro <pierre-mgp-jouy.inra@xxxxxxxxx> wrote:

 

Hi Lindsay Mathieson and all,

On 24/11/2015 21:09, Lindsay Mathieson wrote:

More details on your setup would be useful:

- Number of Nodes

- CPU & RAM on Each Node

- What else is running on the nodes

- Number of bricks

- output of "gluster  volume info" & "gluster volume status"


- ZFS config for each Node

  * number of disks and raid arrangement

  * log and cache SSD?

  * zpool status

OK, I have tested that kind of configuration, and the result depends on what you expect:

  • zfsonlinux is now efficient, but you will not have access to ACLs;
  • on a volume with seven disks we get the maximum PCI Express bandwidth;
  • so you can build a distributed gluster volume across your zfsonlinux nodes. The bandwidth will depend on the kind of glusterfs volume you build: distributed, striped, or replicated;
    • replicated: bad, because of the synchronous writes needed for file replication;
    • striped is the best, because it gives you a consistent bandwidth on a file whichever node you read/write it from;
  • the last point, for me, is the Ethernet link between the nodes. If you only have 1Gb, go back to your sandbox; these days you need 10Gb/s, and since the minimum is two Ethernet ports, you should bond them (a minimal sketch follows this list);
  • use a 10Gb/s Ethernet switch;
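
A minimal bonding sketch with iproute2 (interface names are assumed; 802.3ad/LACP also needs matching switch configuration):

ip link add bond0 type bond mode 802.3ad
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up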

Those are the requirements for current and future needs.

Sincerely,

Pierre Léonard

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users

 


