On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
Hi all,
I have a simple replicated volume with a replica count of
3. To ensure any file changes (create/delete/modify) are
replicated to all bricks, I have this setting in my client
configuration.
volume gv0-replicate-0
    type cluster/replicate
    subvolumes gv0-client-0 gv0-client-1 gv0-client-2
end-volume
And that works as expected. My question is how one could detect if this
was not happening, which would pose a severe problem for data
consistency and replication. For example, those settings could be
omitted from the client config, in which case the client would only
write data to one brick and all kinds of terrible things would start
happening. I have not found a way with the gluster volume CLI to detect
when that kind of problem is occurring. For example, gluster volume
heal <volname> info does not detect this problem.
Is there any programmatic way to detect
when this problem is occurring?
I couldn't understand how you would end up in this situation. There is
only one possibility (assuming there is no bug :) ), i.e. the client
graph was changed in a way that leaves the replicate xlator with only
one subvolume.
A simple way to check that: there is an xlator called meta, which
exposes metadata information through the mount point, similar to the
Linux proc file system. So you can inspect the active graph through
meta and see the number of subvolumes of the replicate xlator.
For example, the directory <mount
point>/.meta/graphs/active/<volname>-replicate-0/subvolumes
will have one entry for each replica client, so in your case you
should see 3 directories.
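If you want to check this programmatically, a minimal sketch in Python
could look like the one below. The mount point /mnt/gv0, the volume
name gv0 and the replica count of 3 are assumptions for this example;
adjust them to your setup. It simply counts the directory entries under
the meta subvolumes path and warns when fewer than the expected number
of replica clients are present in the active client graph:

    #!/usr/bin/env python3
    # Count the replica subvolumes exposed by the meta xlator and warn
    # when the active client graph has fewer than expected.
    import os
    import sys

    MOUNT_POINT = "/mnt/gv0"      # assumed mount point of the volume
    VOLNAME = "gv0"               # assumed volume name
    EXPECTED_REPLICAS = 3         # replica count the volume was created with

    subvol_dir = os.path.join(
        MOUNT_POINT, ".meta", "graphs", "active",
        "%s-replicate-0" % VOLNAME, "subvolumes")

    try:
        # each replica client appears as one entry in this directory
        found = len(os.listdir(subvol_dir))
    except OSError as err:
        sys.exit("cannot read %s: %s" % (subvol_dir, err))

    if found < EXPECTED_REPLICAS:
        sys.exit("WARNING: only %d of %d replica subvolumes in the active graph"
                 % (found, EXPECTED_REPLICAS))

    print("OK: %d replica subvolumes present" % found)

Something like that could be run from cron or a monitoring hook on each
client mount.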
Let me know if this helps.
Regards
Rafi KC
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users