On 06/08/2015 05:21 PM, Jeff Darcy wrote:
Every resource (threads, mem pools) is associated with a
glusterfs_ctx, hence as the number of ctxs in a process grows, the
resource utilization grows with it (most of it unused). This is
mostly an issue for libgfapi applications: USS, NFS-Ganesha, Samba,
vdsm, qemu. It is normal for a libgfapi application to have multiple
mounts (ctxs) in the same process, and we have seen the number of
threads scale from the 10s into the 100s in these applications.
Solution:
=========
Have a shared resource pool (threads and mem pools). Since these are
shared across ctxs, resource consumption no longer grows with the
number of mounts in the process.
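
For illustration, a minimal sketch of the pattern described above,
assuming placeholder volume names and a local volfile server; each
glfs_new()/glfs_init() pair brings up its own glusterfs_ctx with its
own threads and mem pools.

    /* Minimal sketch: each glfs instance below allocates its own
     * glusterfs_ctx, and each ctx starts its own set of threads
     * (timer, syncenv, epoll, ...) and mem pools, so the footprint
     * scales with the number of mounts.  Names are placeholders. */
    #include <glusterfs/api/glfs.h>
    #include <stdio.h>

    int
    main (void)
    {
            const char *vols[] = { "vol0", "vol1", "vol2" };
            glfs_t     *fs[3];

            for (int i = 0; i < 3; i++) {
                    fs[i] = glfs_new (vols[i]);     /* new ctx per call */
                    if (!fs[i])
                            return 1;
                    glfs_set_volfile_server (fs[i], "tcp", "localhost", 24007);
                    if (glfs_init (fs[i]) != 0)     /* ctx threads start here */
                            return 1;
            }

            printf ("3 mounts => 3 ctxs, 3x the threads and mem pools\n");
            return 0;
    }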
Looking at it from a different perspective...
As I understand it, the purpose of glusterfs_ctx is to be a container
for these resources. Therefore, the problem is not that the resources
aren't shared within a context but that the contexts aren't shared
among glfs objects. This happens because we unconditionally call
glusterfs_ctx_new from within glfs_new. To be honest, this looks a
bit like rushed code to me - a TODO in early development that never
got DONE later. Perhaps the right thing to do is to let glfs_new
share an existing glusterfs_ctx instead of always creating a new one.
It would even be possible to make this the default behavior (so that
existing apps can benefit without change), but it might be better for
it to be a new call. As a potential future enhancement, we could
provide granular control over which resources are shared and which
are private, much like clone(2) does with threads.
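
To make the suggestion concrete, a rough sketch of what such a call
could look like; glfs_new_shared() is a hypothetical name here, not an
existing gfapi entry point.

    /* Hypothetical sketch only: glfs_new_shared() does not exist today.
     * It stands in for a new call that would reuse the glusterfs_ctx of
     * an existing glfs object instead of calling glusterfs_ctx_new()
     * again inside glfs_new(). */
    #include <glusterfs/api/glfs.h>

    /* Hypothetical prototype for the proposed call. */
    glfs_t *glfs_new_shared (const char *volname, glfs_t *share_ctx_from);

    int
    mount_two_volumes (void)
    {
            /* First instance behaves exactly as today and owns the ctx. */
            glfs_t *fs1 = glfs_new ("vol1");
            if (!fs1)
                    return -1;
            glfs_set_volfile_server (fs1, "tcp", "server.example.com", 24007);
            if (glfs_init (fs1) != 0)
                    return -1;

            /* Second instance shares fs1's ctx (threads, mem pools,
             * event loop) rather than allocating a fresh one. */
            glfs_t *fs2 = glfs_new_shared ("vol2", fs1);
            if (!fs2)
                    return -1;
            glfs_set_volfile_server (fs2, "tcp", "server.example.com", 24007);
            if (glfs_init (fs2) != 0)
                    return -1;

            return 0;
    }

Keeping it as a separate call leaves the current default untouched,
and the granular sharing controls mentioned above could later be added
as an extra argument, much like the flag word clone(2) takes.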
+1. In the pre-gfapi days, ctx was intended to be a global resource -
one per process, available to all translators. It makes sense to
retain the same behavior in gfapi by having a single ctx that can be
shared across multiple glfs instances.
-Vijay
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel