Gluster volume start <volname> force + EC nfs mount

hi Xavi,
Any Gluster command that can restart the NFS process can leave inconsistent versions on a file/directory if the gluster-nfs process dies exactly while it is updating the versions. I don't see any way to prevent this, because the NFS process is killed with SIGKILL.

Directory and metadata heals can recover from mismatched versions. I think we need to add logic to the data self-heal code so that, even when the versions don't match, it goes ahead and checks whether the data itself matches across the bricks: read fragments from 'k' bricks (where n = k + m), reconstruct what the remaining 'm' redundancy bricks should hold, and compare. If it all matches, it should just set the versions to be the same. A rough sketch of the check is below.
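
To make the idea concrete, here is a minimal, hypothetical sketch in C; none of these names come from the ec xlator. It assumes a toy k = 2 / m = 1 configuration where a single XOR parity fragment stands in for the real erasure coding, and in-memory arrays stand in for reads from the bricks. The shape of the check is the point: recompute what the redundancy bricks should contain, compare with what they actually hold, and if everything agrees, only equalize the versions instead of doing a full data heal.

/*
 * Hypothetical sketch, not ec xlator code. A single XOR parity
 * fragment (m = 1) stands in for the real erasure coding, and
 * the arrays below stand in for fragments read from the bricks.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define K 2              /* data fragments */
#define M 1              /* redundancy fragments (XOR parity here) */
#define FRAG_SIZE 8      /* bytes per fragment, for the example */

/* Recompute the parity fragment from the k data fragments. */
static void compute_parity(unsigned char data[K][FRAG_SIZE],
                           unsigned char parity[FRAG_SIZE])
{
    memset(parity, 0, FRAG_SIZE);
    for (int i = 0; i < K; i++)
        for (int j = 0; j < FRAG_SIZE; j++)
            parity[j] ^= data[i][j];
}

/* True when the stored redundancy fragment agrees with what the
 * data fragments imply, i.e. the mismatch is only in versions. */
static bool data_consistent(unsigned char data[K][FRAG_SIZE],
                            unsigned char stored_parity[FRAG_SIZE])
{
    unsigned char expected[FRAG_SIZE];
    compute_parity(data, expected);
    return memcmp(expected, stored_parity, FRAG_SIZE) == 0;
}

int main(void)
{
    /* Fragments as read from the k data bricks. */
    unsigned char data[K][FRAG_SIZE] = { "fragAAA", "fragBBB" };

    /* Parity as read from the redundancy brick; here we simulate
     * a brick whose data is consistent with the others. */
    unsigned char parity[FRAG_SIZE];
    compute_parity(data, parity);

    /* Versions read from the bricks; they disagree because the NFS
     * process was killed between the per-brick version updates. */
    unsigned long version[K + M] = { 5, 5, 4 };

    if (data_consistent(data, parity)) {
        /* Data matches everywhere: just make the versions equal
         * instead of running a full data heal. */
        unsigned long max = 0;
        for (int i = 0; i < K + M; i++)
            if (version[i] > max)
                max = version[i];
        for (int i = 0; i < K + M; i++)
            version[i] = max;
        printf("data consistent; versions set to %lu on all bricks\n", max);
    } else {
        printf("data mismatch; full data self-heal required\n");
    }
    return 0;
}

The real implementation would of course encode/decode with the volume's actual k and m and read the fragments over the brick transports, but the decision logic would stay the same: verify the redundancy bricks against the data bricks before concluding that a data heal is needed.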

Any other ideas?

Also added gluster-devel to check whether any other component has had to deal with similar problems.

Pranith
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



