Re: Shared resource pool for libgfapi

> Every resource (threads, mem pools) is associated with a
> glusterfs_ctx, so as the number of ctxs in a process grows, resource
> utilization grows with it (most of it unused).  This is an issue for
> essentially any libgfapi application: USS, NFS Ganesha, Samba, vdsm,
> qemu.  It is normal for a libgfapi application to have multiple
> mounts (ctxs) in the same process, and we have seen thread counts
> scale from the 10s to the 100s in these applications.

> Solution:
> =========
> Have a shared resource pool (threads and mem pools). Since they are
> shared

Looking at it from a different perspective...

As I understand it, the purpose of glusterfs_ctx is to be a container
for these resources.  Therefore, the problem is not that the resources
aren't shared within a context but that the contexts aren't shared
among glfs objects.  This happens because we unconditionally call
glusterfs_ctx_new from within glfs_new.  To be honest, this looks a
bit like rushed code to me - a TODO in early development that never
got DONE later.  Perhaps the right thing to do is to let glfs_new
share an existing glusterfs_ctx instead of always creating a new one.
It would even be possible to make this the default behavior (so that
existing apps can benefit without change) but it might be better for
it to be a new call.  As a potential future enhancement, we could
provide granular control over which resources are shared and which
are private, much like clone(2) does with threads.
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
