Re: md-cache changes and impact on tiering


 



> ----- Original Message -----
> > From: "Poornima Gurusiddaiah" <pgurusid@xxxxxxxxxx>
> > To: "Dan Lambright" <dlambrig@xxxxxxxxxx>, "Nithya Balachandran"
> > <nbalacha@xxxxxxxxxx>
> > Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> > Sent: Tuesday, August 23, 2016 12:56:38 AM
> > Subject: md-cache changes and impact on tiering
> > 
> > Hi,
> > 
> > The basic patches for md-cache and integrating it with cache-invalidation
> > is
> > merged in master. You could try master build and enable the following
> > settings, to see if there is any impact on tiering performance at all:
> > 
> > # gluster volume set <volname> performance.stat-prefetch on
> > # gluster volume set <volname> features.cache-invalidation on
> > # gluster volume set <volname> performance.cache-samba-metadata on
> > # gluster volume set <volname> performance.md-cache-timeout 600
> > # gluster volume set <volname> features.cache-invalidation-timeout 600

To follow up on our discussions at the Berlin Gluster conference, I'll add one more important tunable to the above list:

# gluster v set vol1 network.inode-lru-limit <large number>

In my case this was needed because the default limit was too small for my workload. I'll also note that gluster profile now exposes an "inode forget" counter, which makes it much easier to track cache utilization.
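For anyone who wants to watch that counter, here is a rough sketch of how I check it; this is a CLI fragment that needs a running volume, and the exact output format of the forget counter may vary by gluster version:

```shell
# Start profiling on the volume (vol1 is just the example name from above)
gluster volume profile vol1 start

# Look for the "forget" fop counter; it climbs as cached inodes are
# evicted, which is a rough signal that network.inode-lru-limit is too small
gluster volume profile vol1 info | grep -i forget
```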

With this set of tunables, I see more consistent improvements on small-file workloads with tiering. I would imagine md-cache+upcall will also help many scenarios where "lookup amplification" acts as a drag. I saw some encouraging results testing this with RDMA.

Some caveats to acknowledge:
 
- client caching takes resources from the end user's machine

- the md-cache timeout does not yet have an "infinity" setting; entries still age out artificially

- I am running a fairly artificial workload using our "smallfile" workload generator [1]. It does open/read/close over a large set of files; I've not exercised other file operations.
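For concreteness, the kind of run I'm doing looks roughly like the sketch below. This is a hypothetical invocation from my recollection of the tool; flag names and the mount path are illustrative and may differ by smallfile version, so check the README at [1]:

```shell
# Illustrative smallfile run: 8 threads reading 10000 small (4 KB) files
# under a gluster mount; --top and all values here are example assumptions
python smallfile_cli.py --top /mnt/glustervol/smf \
    --threads 8 --files 10000 --file-size 4 \
    --operation read
```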

All that said, it sure seems like a big step forward to me. 

Great to see small file performance improvements with gluster! :)

[1]
https://github.com/bengland2/smallfile


> 
> 
> > 
> > Note: It has to be executed in the same order.
> > 
> > Tracker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1211863
> > Patches:
> > http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1211863
> > 
> > Thanks,
> > Poornima
> > 
> 
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


