Hello Marcel, hello Samuel,

sorry for my late answer, but I was away for two months and could only continue my tests last week.

First of all, thank you for your patch to the Filesystem RA. It works like a charm, but I have a few small remarks.

What I found out is that the filesystem access check via OCF_CHECK_LEVEL does not work with glusterfs. If I set the nvpair OCF_CHECK_LEVEL to 10, I get an error message with the content:

    192.168.51.1:/gl_vol0 is not a block device, monitor 10 is noop

If I set the nvpair OCF_CHECK_LEVEL to 20, I get an error message with the content:

    ERROR: dd said: dd: opening `/virtfs0/.Filesystem_status/res_glusterfs_sp0:0_0_vmhost1': Invalid argument

After that, the resource keeps trying to restart over and over (see the dd sketch at the end of this mail for what I think is going on). Unfortunately I am not familiar enough with scripting to fix this myself and contribute a patch.

Another item I would like to discuss is a bit more general. As Samuel pointed out, the Filesystem RA (with the native client) needs the gluster node it connects to (named in the device attribute of the Filesystem RA) up and running only at startup of the client. After that, the native client detects by itself whether a gluster node has gone away. This is correct so far, but in my setup it could be a SPOF.

I would like to build a cluster of three machines (A, B, C) and run a Filesystem RA clone on all three cluster nodes. Each of these nodes is a glusterfs server offering a replicated glusterfs share, and at the same time a glusterfs client that mounts this share (from server A initially). As long as all three servers are up, there is no problem, and even if one of the servers goes down, everything keeps working. But if the node the clients connected to at startup (server A) crashes, and I afterwards need to reboot one of the remaining servers (B or C), that server cannot reconnect as a client because node A is still down.
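To make the SPOF concrete, this is roughly (the exact invocation may differ) what the Filesystem RA ends up running when a clone instance starts on a node:

    # illustrative; the RA builds this from its device/directory/fstype params
    mount -t glusterfs 192.168.51.1:/gl_vol0 /virtfs0

    # 192.168.51.1 is server A. The native client fetches the volume file
    # from that one host at mount time, so if A is down the mount fails,
    # even though the replicated volume is still served by B and C.

So the device attribute pins the initial volfile fetch to a single host, which is exactly the dependency I am worried about.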
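Coming back to the OCF_CHECK_LEVEL=20 error: my guess (I have not read the RA code closely, so please correct me) is that the depth-20 monitor writes its status file with dd using oflag=direct, and the glusterfs FUSE mount refuses to open files with O_DIRECT. At least I can reproduce the "Invalid argument" by hand on my mount:

    # hypothetical file name; the RA uses its own name under .Filesystem_status
    STATUSFILE=/virtfs0/.Filesystem_status/test_$(hostname)
    mkdir -p "$(dirname "$STATUSFILE")"

    # fails with "dd: opening ...: Invalid argument" on my glusterfs mount
    echo test | dd of="$STATUSFILE" oflag=direct

    # the same write without oflag=direct succeeds
    echo test | dd of="$STATUSFILE"

If that is right, the fix would presumably be to skip oflag=direct for filesystems that do not support O_DIRECT, but as I said, I do not trust my scripting enough to patch this myself.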
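For reference, this is roughly how the resource is configured on my cluster (the resource and device names are the real ones from the errors above; the clone name and the monitor interval are illustrative, written from memory):

    primitive res_glusterfs_sp0 ocf:heartbeat:Filesystem \
            params device="192.168.51.1:/gl_vol0" directory="/virtfs0" \
                   fstype="glusterfs" \
            op monitor interval="20s" OCF_CHECK_LEVEL="20"
    clone cl_glusterfs_sp0 res_glusterfs_sp0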