Re: [External] Re: file metadata operations performance - gluster 4.1

To enable nl-cache, please use the group option instead of a single volume set:

# gluster vol set VOLNAME group nl-cache

This sets a few other things as well, including timeout, invalidation, etc.

To enable the option Raghavendra mentioned, you'll have to execute it explicitly, as it's not part of the group option yet:

# gluster vol set VOLNAME performance.nl-cache-positive-entry on

Also, from past experience, setting the below option has helped performance:

# gluster vol set VOLNAME network.inode-lru-limit 200000
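For convenience, the three settings above can be applied in one pass. A minimal sketch (myvol is a placeholder volume name; the commands are echoed rather than executed so it can be tried without a live cluster; drop the echo on a real node):

```shell
VOLNAME=myvol
n=0
for cmd in \
    "volume set $VOLNAME group nl-cache" \
    "volume set $VOLNAME performance.nl-cache-positive-entry on" \
    "volume set $VOLNAME network.inode-lru-limit 200000"; do
  # On a real gluster node, replace the echo with the command itself.
  echo "gluster $cmd"
  n=$((n + 1))
done
```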

Regards,
Poornima


On Thu, Aug 30, 2018, 8:49 PM Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Thu, Aug 30, 2018 at 8:38 PM, Davide Obbi <davide.obbi@xxxxxxxxxxx> wrote:
Yes, "performance.parallel-readdir on" and 1x3 replica.

That's surprising. I thought performance.parallel-readdir would help only when the distribute count is fairly high. This is something worth investigating further.


On Thu, Aug 30, 2018 at 5:00 PM Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi <davide.obbi@xxxxxxxxxxx> wrote:
Thanks Amar,

I have enabled the negative lookup cache on the volume:

I think enabling nl-cache-positive-entry might help for untarring or git clone into glusterfs. It's disabled by default. Can you let us know the results?
 
Option: performance.nl-cache-positive-entry
Default Value: (null)
Description: enable/disable storing of entries that were lookedup and found to be present in the volume, thus lookup on non existent file is served from the cache


Deflating an uncompressed tar archive of 1.3GB takes approx. 9 minutes, which is a slight improvement over the previous 12-15, but still not fast enough compared to local disk. The tar is present on the gluster share/volume and is deflated inside the same folder structure.
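The timing above can be reproduced with a plain shell harness like the one below. It builds a tiny throwaway archive under a temp dir so the sketch is self-contained; to measure the real case, point WORKDIR at the gluster mount (all paths here are illustrative):

```shell
WORKDIR=$(mktemp -d)

# Build a tiny sample archive (stand-in for the real 1.3GB tar).
mkdir -p "$WORKDIR/src/dir"
echo hello > "$WORKDIR/src/dir/file.txt"
tar -C "$WORKDIR/src" -cf "$WORKDIR/archive.tar" dir

# Time the extraction; `date +%s` keeps this portable to plain sh.
mkdir -p "$WORKDIR/out"
start=$(date +%s)
tar -C "$WORKDIR/out" -xf "$WORKDIR/archive.tar"
end=$(date +%s)
echo "untar took $((end - start))s"
```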

I am assuming this is with parallel-readdir enabled, right?


Running the operation twice (without removing the already-deflated files) also did not reduce the time spent.

Running the operation with the tar archive on local disk made no difference.

What really made a huge difference while git cloning was setting "performance.parallel-readdir on". During the "Receiving objects" phase, as soon as I enabled the xlator, the rate jumped from 3-4MB/s to 27MB/s.

What is the distribute count? Is it 1x3 replica?
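For reference, the distribute count can be read off the "Number of Bricks: D x R = N" line of `gluster volume info`. A sketch that parses a hypothetical sample of that output (on a live cluster, pipe the real command output into the same awk instead):

```shell
# Hypothetical sample of `gluster volume info VOLNAME` output for a 1x3 volume.
sample_info='Volume Name: testvol
Type: Replicate
Number of Bricks: 1 x 3 = 3
Transport-type: tcp'

# In "Number of Bricks: D x R = N", D (field 4) is the distribute count
# and R (field 6) is the replica count.
distribute_count=$(printf '%s\n' "$sample_info" | awk '/Number of Bricks/ {print $4}')
replica_count=$(printf '%s\n' "$sample_info" | awk '/Number of Bricks/ {print $6}')
echo "distribute=$distribute_count replica=$replica_count"
```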


So, in conclusion: I'm trying to get the untar operation to an acceptable level. I'm not expecting local-disk speed, but it should at least be within 4 minutes.

I have attached the profiles collected at the end of the untar operations, with the archive on the mount and outside it.

thanks
Davide


On Tue, Aug 28, 2018 at 8:41 AM Amar Tumballi <atumball@xxxxxxxxxx> wrote:
One of the observations we had with git-clone-like workloads was that nl-cache (negative-lookup cache) helps here.

Try 'gluster volume set $volume-name nl-cache enable'.

Also, sharing the 'profile info' during these performance observations helps us narrow down the situation.

More on how to capture profile info @ https://hackmd.io/PhhT5jPdQIKxzfeLQmnjJQ?view
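The usual capture sequence is roughly the following (myvol is a placeholder volume name, and the commands are echoed so the sketch runs without a live cluster; see the link above for the full procedure):

```shell
VOLNAME=myvol
start_cmd="gluster volume profile $VOLNAME start"
info_cmd="gluster volume profile $VOLNAME info"
stop_cmd="gluster volume profile $VOLNAME stop"

# On a real node, run these directly instead of echoing them.
echo "$start_cmd"
echo "# ... run the workload (git clone / untar) here ..."
echo "$info_cmd   # redirect this output to a file and attach it"
echo "$stop_cmd"
```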

-Amar


On Thu, Aug 23, 2018 at 7:11 PM, Davide Obbi <davide.obbi@xxxxxxxxxxx> wrote:
Hello,

Did anyone ever manage to achieve reasonable waiting times while performing metadata-intensive operations such as git clone, untar, etc.? Is this a feasible workload, or will it never be in scope for glusterfs?

I'd like to know, if possible, which options affect performance for such a volume.
Although I managed to achieve decent git status/git grep times (3 and 30 seconds respectively), git clone and untarring a file from/to the same share take ages, for a git repo of approx. 6GB.

I'm running a test environment with a 3-way replica, 128GB RAM, 24 cores at 2.40GHz, one internal SSD dedicated to the volume brick, and a 10Gb network.

The options set so far that affect volume performance are:
performance.readdir-ahead: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.md-cache-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.parallel-readdir: on
network.inode-lru-limit: 900000
performance.io-thread-count: 32
performance.cache-size: 10GB
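A hypothetical helper to apply that option set in one loop (myvol is a placeholder volume name; the commands are echoed instead of executed so the sketch runs without a cluster):

```shell
VOLNAME=myvol
applied=0
while read -r opt val; do
  # On a real node, drop the echo to actually set the option.
  echo "gluster volume set $VOLNAME $opt $val"
  applied=$((applied + 1))
done <<'EOF'
performance.readdir-ahead on
features.cache-invalidation-timeout 600
features.cache-invalidation on
performance.md-cache-timeout 600
performance.stat-prefetch on
performance.cache-invalidation on
performance.parallel-readdir on
network.inode-lru-limit 900000
performance.io-thread-count 32
performance.cache-size 10GB
EOF
echo "$applied options applied"
```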

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



--
Amar Tumballi (amarts)


--
Davide Obbi
System Administrator

Booking.com B.V.
Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
Direct +31207031558
Booking.com
The world's #1 accommodation site 
43 languages, 198+ offices worldwide, 120,000+ global destinations, 1,550,000+ room nights booked every day 
No booking fees, best price always guaranteed 
Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)





