On Thu, Dec 8, 2016 at 9:13 AM, jin deng <cheneydeng88@xxxxxxxxx> wrote:
The good news is that we are working on brick multiplexing,
which should address the issues you highlighted and is expected to
land in the 3.10 release (around end of February/early March). For more
details, please refer to [1].
[1] https://github.com/gluster/glusterfs-specs/blob/master/under_review/multiplexing.md
Hello,

We are building our public cloud storage service on GlusterFS, exposed over the NFS protocol, with GlusterFS as the storage layer. Our development is based on GlusterFS version 3.6.9. As a public cloud, our users may create a large number of volumes. GlusterFS starts one "glusterfsd" process per volume, which leaves our servers running far too many processes, most of which receive little traffic. These processes consume a large share of server resources and may become the bottleneck of our service.
I see two ways to solve this problem:

1) Instead of creating a volume per user as requested, create just one "basic" volume and export each user's volume as a sub-directory within that basic volume. However, this solution sacrifices the ability to migrate/heal data at volume granularity, which seems unacceptable.

2) Modify the protocol/server xlator so that it can handle multiple subvolumes in one process. After reading the protocol/server code, I think the biggest problem is configuration: the configuration and the glusterfsd process currently correspond one-to-one.

I would like some guidance from you on whether this modification is feasible without taking too much work, and whether there are other problems that would make this plan unworkable. I hope to hear your thoughts on solving our problem. Thanks in advance.
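For reference, approach 1 can already be approximated with stock GlusterFS using its per-sub-directory NFS export options. The sketch below is illustrative only: the volume name, brick paths, and user directories are made up, and the exact export-option syntax (including per-client ACLs) varies between 3.x releases, so please check it against your version.

```shell
# Sketch of approach 1: one "basic" volume, per-user sub-directory NFS exports.
# Names (basevol, server1/server2, user1/user2) are hypothetical.

# Create and start a single backing volume instead of one volume per user.
gluster volume create basevol replica 2 \
    server1:/bricks/basevol server2:/bricks/basevol
gluster volume start basevol

# Create one directory per user on a FUSE mount of the volume.
mount -t glusterfs server1:/basevol /mnt/basevol
mkdir -p /mnt/basevol/user1 /mnt/basevol/user2

# Export only the per-user sub-directories over Gluster's built-in NFS
# server, not the whole volume (option names as in the 3.x series).
gluster volume set basevol nfs.export-volumes off
gluster volume set basevol nfs.export-dirs on
gluster volume set basevol nfs.export-dir "/user1,/user2"
```

This keeps a single glusterfsd per brick regardless of user count, which is exactly the trade-off described above: the process count stays flat, but migration and self-heal still operate on the whole basic volume, not per user.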
--
Sincerely,
DengJin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
--
~ Atin (atinm)