Re: Expected performance for WORM scenario

On Tue, Mar 13, 2018 at 2:42 PM, Ondrej Valousek <Ondrej.Valousek@xxxxxxxxxxx> wrote:

Yes, I already had this in place (well, except for the negative cache, but enabling that did not have much effect).

To me, this is no surprise – nothing can match NFS performance for small files, for obvious reasons:


Could you share the profile info for the runs you did with and without nl-cache? Please also provide your volume info output.
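(For reference, that information can be collected with the standard CLI along these lines; <volname> is a placeholder:)

# gluster volume profile <volname> start
# gluster volume profile <volname> info
# gluster volume info <volname>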
 

1. A single server does not have to deal with distributed locks.

2. AFAIK, Gluster does not support read/write delegations the same way NFS does.

3. GlusterFS is FUSE-based.

GlusterFS supports NFS/SMB/FUSE access.

4. GlusterFS does not support async writes.

It does support async writes: the write-behind feature provides exactly that.
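(Write-behind is a regular volume option; a quick sketch of enabling and tuning it – the window size value here is illustrative:)

# gluster volume set <volname> performance.write-behind on
# gluster volume set <volname> performance.write-behind-window-size 1MB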

 

Summary: if you do not need to scale out, stick with a single server (optionally with DRBD for HA); it will give you the best performance.

 

Ondrej

 

 

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
Sent: Tuesday, March 13, 2018 9:10 AM
To: Ondrej Valousek <Ondrej.Valousek@xxxxxxxxxxx>
Cc: Andreas Ericsson <andreas.ericsson@xxxxxxxxxxx>; Gluster-users@xxxxxxxxxxx
Subject: Re: Expected performance for WORM scenario

 

 

 

On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <Ondrej.Valousek@xxxxxxxxxxx> wrote:

Well, it might be close to _synchronous_ NFS, but it is still well behind asynchronous NFS performance.

A simple script (a bit extreme, I know, but it helps draw the picture):

 

#!/bin/csh
# Create 7000 small files, then remove them - a per-file create latency test.

set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
   echo ahoj > test.$HOSTNAME.$j
   @ j++
end
rm -rf test.$HOSTNAME.*

 

 

It takes 9 seconds to execute on the NFS share, but 90 seconds on GlusterFS – i.e. 10 times slower.
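(Back-of-the-envelope: 9 s / 7000 creates ≈ 1.3 ms per file over NFS, versus 90 s / 7000 ≈ 13 ms per file over GlusterFS – which is what you would expect if each create pays for one or more extra network round trips through the FUSE mount.)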

 

Do you have the new features enabled?

 

performance.stat-prefetch=on
performance.cache-invalidation=on
performance.md-cache-timeout=600
performance.nl-cache=on
performance.nl-cache-timeout=600
network.inode-lru-limit=50000
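(If any of these are missing, they can be set per volume with the usual CLI, e.g.:)

# gluster volume set <volname> performance.stat-prefetch on
# gluster volume set <volname> performance.cache-invalidation on
# gluster volume set <volname> performance.md-cache-timeout 600
# gluster volume set <volname> performance.nl-cache on
# gluster volume set <volname> performance.nl-cache-timeout 600
# gluster volume set <volname> network.inode-lru-limit 50000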

 

 

Ondrej

 

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
Sent: Tuesday, March 13, 2018 8:28 AM
To: Ondrej Valousek <Ondrej.Valousek@xxxxxxxxxxx>
Cc: Andreas Ericsson <andreas.ericsson@xxxxxxxxxxx>; Gluster-users@xxxxxxxxxxx
Subject: Re: Expected performance for WORM scenario

 

 

 

On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <Ondrej.Valousek@xxxxxxxxxxx> wrote:

Hi,

Gluster will never perform well for small files.

I believe there is nothing you can do about this.

 

It is bad compared to a local disk filesystem, but I believe it is much closer to NFS now.

 

Andreas,

Looking at your workload, I suspect there are a lot of LOOKUPs, which reduce performance. Is it possible to do the following?

 

# gluster volume profile <volname> info incremental
# ... execute your workload ...
# gluster volume profile <volname> info incremental > /path/to/file/that/you/need/to/send/us

 

If the top entry in that output is LOOKUP, we most likely need to enable the nl-cache feature and see how it performs.
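(With the option names listed above, enabling it would look like:)

# gluster volume set <volname> performance.nl-cache on
# gluster volume set <volname> performance.nl-cache-timeout 600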

 

Ondrej

 

From: gluster-users-bounces@gluster.org [mailto:gluster-users-bounces@gluster.org] On Behalf Of Andreas Ericsson
Sent: Monday, March 12, 2018 1:47 PM
To: Gluster-users@xxxxxxxxxxx
Subject: Expected performance for WORM scenario

 

Heya fellas.

 

I've been struggling quite a lot to get GlusterFS to perform even half-decently with a write-intensive workload. Test numbers are from Gluster 3.10.7.

 

We store a bunch of small files in a two-tier SHA-1 hash fanout directory structure. The directories themselves aren't overly full. Most of the data we write to Gluster is "write once, read probably never", so 99% of all operations are of the write variety.
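(For context, a two-tier SHA-1 fanout derives the path from the leading hash characters; a hypothetical bash sketch, not our actual code:)

h=$(sha1sum < "$file" | awk '{print $1}')   # SHA-1 of the file contents
dest="${h:0:2}/${h:2:2}/${h}"               # e.g. da/39/da39a3ee...
mkdir -p "$(dirname "$dest")" && cp "$file" "$dest"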

 

The network between the servers is sound: 10 Gb network cards running over a 10 Gb (doh) switch. iperf reports 9.86 Gbit/s; ping reports a latency of 0.1 - 0.2 ms. There is no firewall, no packet inspection, nothing at all between the servers, and the 10 Gb switch is the only path between the two machines, so traffic isn't going over some 2 Mbit Wi-Fi by accident.

 

Our main storage has always been really slow (a write speed of roughly 1.5 MiB/s), but I had long attributed that to the extremely slow disks backing it. Now that we're expanding, I set up a new Gluster cluster with state-of-the-art NVMe SSDs to boost performance. However, performance only hopped up to around 2.1 MiB/s. Perplexed, I then tried a 3-node cluster using 2 GB ramdisks, which got me up to 2.4 MiB/s. My last resort was a single node running on a ramdisk, just to 100% exclude any network shenanigans, but write performance stayed at an absolutely abysmal 3 MiB/s.

 

Writing straight to (the same) ramdisk gives me "normal" ramdisk speed (I don't actually remember the numbers, but the test that took 2 minutes with Gluster completed before I had time to blink). Writing straight to the backing SSDs gives me a throughput of 96 MiB/s.

 

The test itself writes 8494 files taken randomly from our production environment, comprising a total of 63.4 MiB (so the average file size is just under 8 KiB; most are actually close to 4 KiB, with the occasional 2-or-so MB file in there).
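(Back-of-the-envelope: 63.4 MiB / 8494 files ≈ 7.6 KiB per file, so at these sizes throughput is governed by per-file operation latency rather than raw bandwidth.)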

 

I have googled and read a *lot* of performance-tuning guides, but 3 MiB/s on a single-node ramdisk seems far beyond the crippling one can cause by misconfiguring a single system.

 

With this in mind: what sort of write performance can one reasonably hope to get with Gluster? Assume a 3-node cluster running on top of (small) ramdisks on a fast and stable network. Is it just a bad fit for our workload?

 

/Andreas







--

Pranith





--

Pranith




--
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
