Re: [Gluster-users] Need a way to display and flush gluster cache ?

On Thu, Jul 28, 2016 at 05:58:15PM +0530, Mohammed Rafi K C wrote:
> 
> 
> On 07/27/2016 04:33 PM, Raghavendra G wrote:
> >
> >
> > On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C
> > <rkavunga@xxxxxxxxxx> wrote:
> >
> >     Thanks for your feedback.
> >
> >     In fact, the meta xlator is loaded only on the fuse mount. Is there
> >     any particular reason not to use the meta-autoload xlator for the
> >     NFS server and libgfapi?
> >
> >
> > I think it's because of a lack of resources. I am not aware of any
> > technical reason for not using it on the NFSv3 server and gfapi.
> 
> Cool. I will try to see how we can implement the meta-autoload feature
> for the nfs-server and libgfapi. Once we have the feature in place, I
> will implement the cache memory display/flush feature using the meta
> xlator.

In case you plan to have this ready in a month (before the end of
August), you should propose it as a 3.9 feature. Click the "Edit this
page on GitHub" link at the bottom of
https://www.gluster.org/community/roadmap/3.9/ :)

Thanks,
Niels


> 
> Thanks for your valuable feedback.
> Rafi KC
> 
> >  
> >
> >     Regards
> >
> >     Rafi KC
> >
> >     On 07/26/2016 04:05 PM, Niels de Vos wrote:
> >>     On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> >>>     On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai <ppai@xxxxxxxxxx> wrote:
> >>>>     +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
> >>>>
> >>>>      -Prashanth Pai
> >>>>
> >>>>     ----- Original Message -----
> >>>>>     From: "Mohammed Rafi K C" <rkavunga@xxxxxxxxxx>
> >>>>>     To: "gluster-users" <gluster-users@xxxxxxxxxxx>, "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> >>>>>     Sent: Tuesday, 26 July, 2016 10:44:15 AM
> >>>>>     Subject:  Need a way to display and flush gluster cache ?
> >>>>>
> >>>>>     Hi,
> >>>>>
> >>>>>     The Gluster stack has its own caching mechanisms, mostly on the
> >>>>>     client side. But there is no concrete method to see how much
> >>>>>     memory Gluster is consuming for caching, and no way to flush that
> >>>>>     cache memory when needed.
> >>>>>
> >>>>>     So my first question is: do we need to implement these two
> >>>>>     features for the Gluster cache?
> >>>>>
> >>>>>
> >>>>>     If so I would like to discuss some of our thoughts towards it.
> >>>>>
> >>>>>     (If you are not interested in the implementation discussion, you
> >>>>>     can skip this part :)
> >>>>>
> >>>>>     1) Implement a virtual xattr on the root: a setxattr flushes all
> >>>>>     the caches, and a getxattr prints the aggregated cache size.
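The semantics of option (1) could be sketched with a small model like the one below; the xattr names `glusterfs.cache.size` and `glusterfs.cache.flush`, and the per-xlator sizes, are purely illustrative and do not exist in Gluster today:

```python
# Toy model of option (1): a virtual xattr on the root of the mount.
# The xattr names below are hypothetical, chosen only to illustrate the
# proposed getxattr/setxattr semantics.

class VolumeRoot:
    def __init__(self):
        # pretend per-xlator client-side cache usage, in bytes
        self.cache_bytes = {"io-cache": 4096, "quick-read": 1024}

    def getxattr(self, name):
        # getxattr on the virtual key reports the aggregated cache size
        if name == "glusterfs.cache.size":
            return str(sum(self.cache_bytes.values()))
        raise KeyError(name)

    def setxattr(self, name, value):
        # setxattr on the flush key drops every cache
        if name == "glusterfs.cache.flush":
            for xlator in self.cache_bytes:
                self.cache_bytes[xlator] = 0
        else:
            raise KeyError(name)

root = VolumeRoot()
print(root.getxattr("glusterfs.cache.size"))  # aggregated size: 5120
root.setxattr("glusterfs.cache.flush", "1")
print(root.getxattr("glusterfs.cache.size"))  # 0 after the flush
```

On a real mount the equivalent operations would presumably be driven with `getfattr -n <name>` and `setfattr -n <name> -v 1` against the mount root.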
> >>>>>
> >>>>>     2) Currently the Gluster native client supports a .meta virtual
> >>>>>     directory to get metadata information, analogous to /proc. We can
> >>>>>     implement a virtual file inside the .meta directory to read the
> >>>>>     cache size. We can also flush the cache using a special write into
> >>>>>     that file (similar to echoing into a proc file). This approach may
> >>>>>     be difficult to implement in other clients.
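The proc-style read/write behaviour option (2) describes could look roughly like this; the virtual file name and its accepted payload are hypothetical, not an existing `.meta` entry:

```python
# Toy model of option (2): a proc-style virtual file under .meta.
# A read returns the current cache size; writing "1" flushes the cache,
# much like "echo 3 > /proc/sys/vm/drop_caches" drops the kernel caches.
# The file and its protocol are illustrative only.

class MetaCacheFile:
    def __init__(self):
        self.cache_bytes = 8192  # pretend client-side cache usage

    def read(self):
        # reading the virtual file yields the current cache size
        return f"{self.cache_bytes}\n"

    def write(self, data):
        # a special write (here, "1") flushes the cache
        if data.strip() == "1":
            self.cache_bytes = 0
        else:
            raise ValueError("unsupported command")

f = MetaCacheFile()
print(f.read().strip())  # current cache size: 8192
f.write("1")             # i.e. a hypothetical: echo 1 > /mnt/vol/.meta/cache
print(f.read().strip())  # 0 after the flush
```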
> >>>     +1 for making use of the meta-xlator. We should be making more use of it.
> >>     Indeed, this would be nice. Maybe this can also expose the memory
> >>     allocations like /proc/slabinfo.
> >>
> >>     The io-stats xlator can dump some statistics to
> >>     /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That
> >>     seems to be acceptable too, and allows getting statistics from
> >>     server-side processes without involving any clients.
> >>
> >>     HTH,
> >>     Niels
> >>
> >>
> >>>>>     3) A CLI command to display and flush the data, taking an IP and
> >>>>>     port as arguments. GlusterD would need to send the op to the
> >>>>>     client from the connected client list. But this approach would be
> >>>>>     difficult to implement for libgfapi-based clients, so to me it
> >>>>>     doesn't seem to be a good option.
> >>>>>
> >>>>>     Your suggestions and comments are most welcome.
> >>>>>
> >>>>>     Thanks to Talur and Poornima for their suggestions.
> >>>>>
> >>>>>     Regards
> >>>>>
> >>>>>     Rafi KC
> >>>>>
> >>>>>     _______________________________________________
> >>>>>     Gluster-devel mailing list
> >>>>>     Gluster-devel@xxxxxxxxxxx
> >>>>>     http://www.gluster.org/mailman/listinfo/gluster-devel
> >>>>>
> >>>     _______________________________________________
> >>>     Gluster-users mailing list
> >>>     Gluster-users@xxxxxxxxxxx
> >>>     http://www.gluster.org/mailman/listinfo/gluster-users
> >>>
> >>>
> >
> >
> >
> >
> >
> >
> > -- 
> > Raghavendra G
> 


