Hi Riya,
Thanks for writing to us. A few questions before we get started:
* Where can we see your work on modifying the fuse module to cache these calls? A reference would help us give more specific pointers (or ask better questions).
* If the caching happens in the fuse kernel module and it expects the regular FUSE request arguments, then there may be no work required in glusterfs at all, since our client works on the low-level fuse API.
* Also, how would a userspace program invalidate those caches? GlusterFS volumes can be accessed from multiple clients, so invalidation becomes an important piece to have; a rough sketch of the existing notification path follows this list.
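On that invalidation point: the kernel fuse module already lets the userspace daemon push invalidations, and fuse-bridge.c uses that path today. Here is a minimal, self-contained sketch (not the actual glusterfs code; the function name and error handling are illustrative) of writing a FUSE_NOTIFY_INVAL_ENTRY message to /dev/fuse, using the structures from the kernel's <linux/fuse.h>:

/*
 * Minimal sketch, not the actual fuse-bridge.c code: ask the kernel fuse
 * module to drop a cached dentry by writing a FUSE_NOTIFY_INVAL_ENTRY
 * notification to /dev/fuse.  Structures come from <linux/fuse.h>
 * (notification support is in Linux 2.6.36 and later).
 */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>
#include <linux/fuse.h>

/* fd: the /dev/fuse descriptor the daemon already holds;
 * parent: FUSE nodeid of the directory; name: entry to invalidate. */
static int
notify_inval_entry (int fd, uint64_t parent, const char *name)
{
        size_t namelen = strlen (name);
        struct fuse_out_header fouh = {0, };
        struct fuse_notify_inval_entry_out fnieo = {0, };
        struct iovec iov[3];

        fouh.unique = 0;                       /* notifications carry no request id */
        fouh.error  = FUSE_NOTIFY_INVAL_ENTRY; /* notification code travels in 'error' */
        fouh.len    = sizeof (fouh) + sizeof (fnieo) + namelen + 1;

        fnieo.parent  = parent;
        fnieo.namelen = namelen;

        iov[0].iov_base = &fouh;         iov[0].iov_len = sizeof (fouh);
        iov[1].iov_base = &fnieo;        iov[1].iov_len = sizeof (fnieo);
        iov[2].iov_base = (void *) name; iov[2].iov_len = namelen + 1;

        if (writev (fd, iov, 3) < 0)
                return -errno;
        return 0;
}

The same scheme works with FUSE_NOTIFY_INVAL_INODE when only attributes or data need to be dropped. Whatever kernel-side cache your fast path adds would need an equivalent hook, because another gluster client can change a file behind this mount's back.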
For pointers into the codebase on how glusterfs integrates with the fuse module, please look at the directory 'xlators/mount/fuse/src/', mostly the file 'fuse-bridge.c'. A stripped-down sketch of what that code does is below.
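To expand on the "low-level fuse API" point above: fuse-bridge.c does not go through libfuse; it reads raw requests from /dev/fuse and dispatches them by opcode into the translator graph. A heavily simplified sketch (names and buffer size are illustrative, not the real code), using the definitions from <linux/fuse.h>:

/*
 * Heavily simplified sketch of the request loop fuse-bridge.c implements
 * (the real code uses readv, iobufs, and winds calls into the xlator graph).
 */
#include <unistd.h>
#include <linux/fuse.h>

#define FUSE_READ_BUF_SIZE (1 << 20)  /* illustrative; must fit max_write + header */

static void
fuse_request_loop (int fuse_fd)
{
        static char buf[FUSE_READ_BUF_SIZE];

        for (;;) {
                ssize_t res = read (fuse_fd, buf, sizeof (buf));
                if (res < (ssize_t) sizeof (struct fuse_in_header))
                        break;  /* EOF, short read, or ENODEV on unmount */

                struct fuse_in_header *finh = (struct fuse_in_header *) buf;

                switch (finh->opcode) {
                case FUSE_LOOKUP:
                        /* request body (the entry name) follows the header;
                         * glusterfs winds a LOOKUP through the xlator graph */
                        break;
                case FUSE_GETATTR:
                        /* one of the calls a kernel-side cache would absorb */
                        break;
                default:
                        /* every other opcode has its own handler in fuse-bridge.c */
                        break;
                }
        }
}

So if your modified fuse module answers getattr/lookup from a kernel cache, those requests would simply never reach this loop, which is why question two above matters: the integration cost on the glusterfs side may be close to zero.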
Thanks for your interest in the project. It would be great to collaborate on this effort, as it could improve glusterfs performance in many use cases.
Regards,
Amar
On Mon, Apr 2, 2018 at 6:34 AM, riya khanna <riyakhanna1983@xxxxxxxxx> wrote:
Hi, I've modified the FUSE framework to take a part of the user-space daemon code and move it into the kernel fuse driver, to minimize user-kernel-user switches during file system operations. An example would be caching getattr/getxattr/lookup/security checks etc. This design therefore creates a fast (served directly from the kernel) and a slow (regular FUSE) execution path. The fast and slow paths can also communicate with each other using shared memory. I was wondering if it is possible to accelerate glusterfs using this design. What pieces could (should) be easily moved to kernel space? Any pointers would be highly appreciated. Thanks! -Riya
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel