gfproxy

Hi,

This mail is regarding the gfproxy feature; please go through it and let us know your thoughts.

About the gfproxy feature:
-----------------------------------
As per the current architecture of Gluster, the client is the intelligent component and has all the clustering logic. This approach has its own pros and cons. In several use cases, e.g. Samba, QEMU, block device export etc., it is desirable to have all this clustering logic on the server side and have as thin a client as possible. This makes upgrades easier, and is more scalable, as the resources consumed by a thin client are much lower than those of a normal client.

Approach:
Client volfile is split into two volfiles:
1. Thin client volfile: master (gfapi/FUSE) followed by protocol/client
2. gfproxyd volfile: protocol/server on top, followed by performance xlators, cluster xlators and protocol/client xlators (one per brick).
With this model, the thin client connects to gfproxyd and to glusterd (as always), and gfproxyd connects to all the bricks; a sketch of the two volfiles follows below. The major problem with this approach is performance when the client and gfproxyd are not co-located.
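
As an illustration, here is a rough sketch of what the two volfiles could look like for a 2-brick distribute volume named testvol. The host names, brick paths and the exact set of xlators below are assumptions for the example, not the actual volgen output.

Thin client volfile (sketch):

    # Single protocol/client pointing at gfproxyd instead of the bricks;
    # the master (FUSE/gfapi) sits above this.
    volume testvol-thin-client
        type protocol/client
        option remote-host gfproxy-node.example.com   # node running gfproxyd (assumed)
        option remote-subvolume gfproxyd-testvol
        option transport-type tcp
    end-volume

gfproxyd volfile (sketch):

    # One protocol/client per brick, as in a regular client graph.
    volume testvol-client-0
        type protocol/client
        option remote-host server1.example.com
        option remote-subvolume /bricks/testvol/brick0
        option transport-type tcp
    end-volume

    volume testvol-client-1
        type protocol/client
        option remote-host server2.example.com
        option remote-subvolume /bricks/testvol/brick1
        option transport-type tcp
    end-volume

    # Cluster xlators (DHT here; AFR/EC would appear for replicate/disperse).
    volume testvol-dht
        type cluster/distribute
        subvolumes testvol-client-0 testvol-client-1
    end-volume

    # Performance xlators (only io-cache shown, for brevity).
    volume testvol-io-cache
        type performance/io-cache
        subvolumes testvol-dht
    end-volume

    # protocol/server on top, which the thin clients connect to.
    volume gfproxyd-testvol
        type protocol/server
        option transport-type tcp
        subvolumes testvol-io-cache
    end-volume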


What is already done by Facebook:
---------------------------------------------
1. Volgen code for generating the thin client volfile and the gfproxyd daemon volfile.
2. AHA translator on the thin client, so that on a gfproxyd restart or a network disruption between the thin client and gfproxyd, fops are retried and the client doesn't become inaccessible (a placement sketch follows below).
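
To show where AHA would sit, here is a sketch of the thin client graph with AHA stacked directly above protocol/client, so that fops interrupted by a disconnect are held and retried once the connection to gfproxyd is back. The translator type name below is an assumption for the sketch:

    volume testvol-aha
        type cluster/aha              # translator category/name is an assumption
        subvolumes testvol-thin-client
    end-volume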


What remains to be done:
---------------------------------
1. Glusterd managing the gfproxyd
    Currently the gfproxy daemon listens on port 40000; glusterd needs to handle port allocation if we want to run multiple gfproxyd instances (one per volume).
2. Redo the volgen and daemon management in glusterd2
    - Ability to run daemons on a subset of cluster nodes
    - SSL
    - Validation with other features like snapshot, tier, etc.
3. Graph switch for the gfproxyd
4. Failover from one gfproxyd to another
5. Less resource consumption on the thin client - memory and threads
6. Performance analysis

Issue: https://github.com/gluster/glusterfs/issues/242

Regards,
Poornima
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
