Re: Ceph file change monitor

Yes, we need something similar to inotify/fanotify.

I came across this link: http://docs.ceph.com/docs/master/dev/osd_internals/watch_notify/?highlight=notify#watch-notify

I just want to know whether I can use this for that purpose.

If so, how should we use it?
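For reference, the watch/notify described on that page is a RADOS-level mechanism: a client registers a watch on a specific object and receives a callback only when another client explicitly sends a notify to that same object. A minimal librados sketch of that pattern (the pool name "mypool" and object name "events" are placeholders, and error handling is abbreviated) might look roughly like this:

/* Minimal librados watch/notify sketch (RADOS object level, not CephFS files).
 * Build: cc watch_demo.c -lrados -o watch_demo
 * Pool "mypool" and object "events" are placeholders. */
#include <rados/librados.h>
#include <stdio.h>
#include <unistd.h>

/* Called whenever another client sends a notify on the watched object. */
static void watch_cb(void *arg, uint64_t notify_id, uint64_t cookie,
                     uint64_t notifier_id, void *data, size_t data_len)
{
    rados_ioctx_t io = *(rados_ioctx_t *)arg;
    printf("notify received: %.*s\n", (int)data_len, (char *)data);
    /* Acknowledge so the notifier's rados_notify2() call can complete. */
    rados_notify_ack(io, "events", notify_id, cookie, NULL, 0);
}

static void watch_err_cb(void *arg, uint64_t cookie, int err)
{
    fprintf(stderr, "watch error on cookie %llu: %d\n",
            (unsigned long long)cookie, err);
}

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    uint64_t cookie;

    rados_create(&cluster, NULL);          /* default client.admin */
    rados_conf_read_file(cluster, NULL);   /* read /etc/ceph/ceph.conf */
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "mypool", &io) < 0) return 1;

    /* Register a watch on the object; notifies land in watch_cb. */
    if (rados_watch2(io, "events", &cookie, watch_cb, watch_err_cb, &io) < 0)
        return 1;

    sleep(60);                             /* wait for notifies */

    rados_unwatch2(io, cookie);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

Another client would then send rados_notify2() on the same "events" object to trigger the callback. Note that this only works between clients that explicitly watch and notify an agreed-upon RADOS object; it does not fire on ordinary CephFS file writes, so it is not by itself a file change monitor.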

Thanks,
Siva 

On Thu, Jun 9, 2016 at 6:06 PM, Anand Bhat <anand.bhat@xxxxxxxxx> wrote:
I think you are looking for inotify/fanotify-style events for Ceph. These are usually implemented for local file systems; since Ceph is a networked file system, this would not be easy to implement and would involve network traffic to generate the events.

Not sure it is in the plan though.
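For comparison, here is a minimal sketch of the local inotify interface being referred to (the mount point /mnt/cephfs below is just a placeholder). Because inotify hooks into the local kernel VFS, on a CephFS mount it would at best see changes made through that same mount, not changes made by other clients:

/* Minimal inotify sketch: print create/modify/delete events for one directory.
 * This is the local-filesystem mechanism the request is modeled on. */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    /* Watch a mount point (placeholder path) for the three event types. */
    int wd = inotify_add_watch(fd, "/mnt/cephfs",
                               IN_CREATE | IN_MODIFY | IN_DELETE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->mask & IN_CREATE) printf("ADDED    %s\n", ev->name);
            if (ev->mask & IN_MODIFY) printf("MODIFIED %s\n", ev->name);
            if (ev->mask & IN_DELETE) printf("DELETED  %s\n", ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}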

Regards,
Anand

On Wed, Jun 8, 2016 at 2:46 PM, John Spray <jspray@xxxxxxxxxx> wrote:
On Wed, Jun 8, 2016 at 8:40 AM, siva kumar <85siva@xxxxxxxxx> wrote:
> Dear Team,
>
> We are using Ceph storage, mounted via CephFS.
>
> Our configuration :
>
> 3 OSDs
> 3 monitors
> 4 clients
> ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
>
> We would like to get file change notifications that tell us the event type
> (ADDED, MODIFIED, DELETED) and which file the event occurred on. These
> notifications should be sent to our server.
> How can we get these notifications?

This isn't a feature that CephFS has right now.  Still, I would be
interested to know in what protocol/format your server would consume
these kinds of notifications.

John

> Ultimately we would like to add our own file-watch notification hooks to
> Ceph so that we can handle these notifications ourselves.
>
> Additional Info :
>
> [test@ceph-zclient1 ~]$ ceph -s
>
>      cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c
>
>      health HEALTH_WARN
>             mds0: ceph-client1 failing to respond to cache pressure
>             mds0: ceph-client2 failing to respond to cache pressure
>             mds0: ceph-client3 failing to respond to cache pressure
>             mds0: ceph-client4 failing to respond to cache pressure
>      monmap e1: 3 mons at
> {ceph-zadmin=xxx.xxx.xxx.xxx:6789/0,ceph-zmonitor=xxx.xxx.xxx.xxx:6789/0,ceph-zmonitor1=xxx.xxx.xxx.xxx:6789/0}
>             election epoch 16, quorum 0,1,2
> ceph-zadmin,ceph-zmonitor1,ceph-zmonitor
>      mdsmap e52184: 1/1/1 up {0=ceph-zstorage1=up:active}
>      osdmap e3278: 3 osds: 3 up, 3 in
>       pgmap v5068139: 384 pgs, 3 pools, 518 GB data, 7386 kobjects
>             1149 GB used, 5353 GB / 6503 GB avail
>                  384 active+clean
>
>   client io 1259 B/s rd, 179 kB/s wr, 11 op/s
>
>
>
> Thanks,
> S.Sivakumar
>
>
>
>



--
----------------------------------------------------------------------------
Never say never.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
