Re: Shared resource pool for libgfapi

> 
> Initially ctx had a one-to-one mapping with the process and the
> volume/mount, but with libgfapi and libgfchangelog, ctx has lost that
> one-to-one association with the process. The question is: do we want to
> retain the one-to-one mapping between ctx and the process, or between
> ctx and the volume?

Yes and no. The problem is with viewing 'ctx' as a panacea. We need to break it
into objects that have their own independent per-volume and per-process
relationships.

ctx as it stands abstracts the following.

- Process identity information - process_uuid, cmd_args, etc.

- Volume-specific information  - graph_id, list of graphs of a given volume,
  connection to the volfile server, client_table, etc.

- Aggregate of resources       - memory pools, event pool, syncenv (for
  synctasks), etc.
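
Concretely, one way to split that up might look like the sketch below. This is
illustrative only; the type and field names (glfs_process_info_t,
glfs_volume_ctx_t, glfs_resource_pool_t) are hypothetical, and the members are
reduced to opaque pointers rather than the actual glusterfs definitions.

/* Illustrative only: hypothetical structures showing one possible split of
 * today's ctx into per-process, per-volume and shared-resource objects. */

typedef struct glfs_process_info {
        char *process_uuid;      /* identity of this process              */
        void *cmd_args;          /* process-wide command-line arguments   */
} glfs_process_info_t;

typedef struct glfs_resource_pool {
        void *mem_pools;         /* memory pools                          */
        void *event_pool;        /* event/poller pool                     */
        void *syncenv;           /* syncenv for synctasks                 */
        int   refcount;          /* shared across volumes, refcounted     */
} glfs_resource_pool_t;

typedef struct glfs_volume_ctx {
        int                   graph_id;       /* active graph of this volume   */
        void                 *graphs;         /* list of graphs of the volume  */
        void                 *volfile_server; /* connection to volfile server  */
        void                 *client_table;   /* per-volume client table       */
        glfs_process_info_t  *process;        /* shared process identity       */
        glfs_resource_pool_t *resources;      /* shared (or private) resources */
} glfs_volume_ctx_t;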

This proposal already does part of the above, i.e. it breaks the abstraction of
an aggregate of resources into a resource-pool structure. I wouldn't decide on
the approach based on the number of code sites the change would impact; more
change is not necessarily bad. I would decide based on the ease of abstraction
and extension it brings, for the kind of features that are to come with 4.0.
For example, imagine how this would pave the way for glusterfs to support
multiple (sub-)volumes being served from a single (virtual) brick process. That
is crudely similar to the multiple-'ctx' problem gfapi is trying to solve now.
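
As a rough illustration of the resource-pool side of that: a process-wide pool
could be reference-counted and handed to each volume context a gfapi client
creates, instead of every glfs_new() building its own pools inside a private
ctx. The functions below (resource_pool_ref/resource_pool_unref) are a minimal
sketch under that assumption, reusing the hypothetical glfs_resource_pool_t
from the earlier sketch; they are not existing glusterfs APIs.

#include <pthread.h>
#include <stdlib.h>

/* glfs_resource_pool_t as sketched earlier in this mail (hypothetical). */

static glfs_resource_pool_t *global_pool;   /* one shared pool per process */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

glfs_resource_pool_t *
resource_pool_ref (void)
{
        pthread_mutex_lock (&pool_lock);
        if (!global_pool) {
                global_pool = calloc (1, sizeof (*global_pool));
                /* allocate mem pools, event pool, syncenv here */
        }
        if (global_pool)
                global_pool->refcount++;
        pthread_mutex_unlock (&pool_lock);
        return global_pool;
}

void
resource_pool_unref (glfs_resource_pool_t *pool)
{
        pthread_mutex_lock (&pool_lock);
        if (pool && --pool->refcount == 0) {
                /* tear down mem pools, event pool, syncenv here */
                free (pool);
                global_pool = NULL;
        }
        pthread_mutex_unlock (&pool_lock);
}

Each per-volume init would then take a reference with resource_pool_ref() and
the corresponding fini would drop it with resource_pool_unref(), so the last
volume torn down releases the shared resources.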
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



