Thanks Pranith. We are waiting for a downtime window on our production setup. Will update you once we are able to apply this there.

Thanks and Regards,
Ram

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
Ram,
     I sent https://review.gluster.org/17765 to fix the possibility in bulk removexattr. But I am not sure if this is indeed the reason for this issue.

On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy@xxxxxxxxxxxxx> wrote:

Thanks for the swift turnaround. Will try this out and let you know.

Thanks and Regards,
Ram

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
Ram,
     If you see it again, you can use this. I am going to send out a patch for the code path which can lead to removal of gfid/volume-id tomorrow.

On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri@xxxxxxxxxx> wrote:

Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking removexattr calls. I have checked for these fops at the protocol/client and posix translators.
1) Install systemtap and its dependencies.
2) Save the script locally as fop_trace.stp (the name the run step below assumes).
3) Change the paths of the translators in the systemtap script to the appropriate values for your system (change "/usr/lib64/glusterfs/3.12dev/xlator/protocol/client.so" and "/usr/lib64/glusterfs/3.12dev/xlator/storage/posix.so").
4) Run the script as follows:

# stap -v fop_trace.stp
Regards,
Sanoj

On Mon, Jul 10, 2017 at 2:56 PM, Sanoj Unnikrishnan <sunnikri@xxxxxxxxxx> wrote:

@pranith, yes. We can get the pid on all removexattr calls and also print the backtrace of the glusterfsd process when triggering removexattr. I will write the script and reply back.

On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:

Ram,
     As per the code, self-heal was the only candidate which *can* do it. Could you check the logs of the self-heal daemon and the mount to check if there are any metadata heals on root?

+Sanoj

Sanoj,
     Is there any systemtap script we can use to detect which process is removing these xattrs?

On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy@xxxxxxxxxxxxx> wrote:

We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.

[root@glusterfs2 Log_Files]# gluster volume info

Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
Brick3: glusterfs3sds:/ws/disk1/ws_brick
Brick4: glusterfs1sds:/ws/disk2/ws_brick
Brick5: glusterfs2sds:/ws/disk2/ws_brick
Brick6: glusterfs3sds:/ws/disk2/ws_brick
Brick7: glusterfs1sds:/ws/disk3/ws_brick
Brick8: glusterfs2sds:/ws/disk3/ws_brick
Brick9: glusterfs3sds:/ws/disk3/ws_brick
Brick10: glusterfs1sds:/ws/disk4/ws_brick
Brick11: glusterfs2sds:/ws/disk4/ws_brick
Brick12: glusterfs3sds:/ws/disk4/ws_brick
Brick13: glusterfs1sds:/ws/disk5/ws_brick
Brick14: glusterfs2sds:/ws/disk5/ws_brick
Brick15: glusterfs3sds:/ws/disk5/ws_brick
Brick16: glusterfs1sds:/ws/disk6/ws_brick
Brick17: glusterfs2sds:/ws/disk6/ws_brick
Brick18: glusterfs3sds:/ws/disk6/ws_brick
Brick19: glusterfs1sds:/ws/disk7/ws_brick
Brick20: glusterfs2sds:/ws/disk7/ws_brick
Brick21: glusterfs3sds:/ws/disk7/ws_brick
Brick22: glusterfs1sds:/ws/disk8/ws_brick
Brick23: glusterfs2sds:/ws/disk8/ws_brick
Brick24: glusterfs3sds:/ws/disk8/ws_brick
Brick25: glusterfs4sds.commvault.com:/ws/disk1/ws_brick
Brick26: glusterfs5sds.commvault.com:/ws/disk1/ws_brick
Brick27: glusterfs6sds.commvault.com:/ws/disk1/ws_brick
Brick28: glusterfs4sds.commvault.com:/ws/disk10/ws_brick
Brick29: glusterfs5sds.commvault.com:/ws/disk10/ws_brick
Brick30: glusterfs6sds.commvault.com:/ws/disk10/ws_brick
Brick31: glusterfs4sds.commvault.com:/ws/disk11/ws_brick
Brick32: glusterfs5sds.commvault.com:/ws/disk11/ws_brick
Brick33: glusterfs6sds.commvault.com:/ws/disk11/ws_brick
Brick34: glusterfs4sds.commvault.com:/ws/disk12/ws_brick
Brick35: glusterfs5sds.commvault.com:/ws/disk12/ws_brick
Brick36: glusterfs6sds.commvault.com:/ws/disk12/ws_brick
Brick37: glusterfs4sds.commvault.com:/ws/disk2/ws_brick
Brick38: glusterfs5sds.commvault.com:/ws/disk2/ws_brick
Brick39: glusterfs6sds.commvault.com:/ws/disk2/ws_brick
Brick40: glusterfs4sds.commvault.com:/ws/disk3/ws_brick
Brick41: glusterfs5sds.commvault.com:/ws/disk3/ws_brick
Brick42: glusterfs6sds.commvault.com:/ws/disk3/ws_brick
Brick43: glusterfs4sds.commvault.com:/ws/disk4/ws_brick
Brick44: glusterfs5sds.commvault.com:/ws/disk4/ws_brick
Brick45: glusterfs6sds.commvault.com:/ws/disk4/ws_brick
Brick46: glusterfs4sds.commvault.com:/ws/disk5/ws_brick
Brick47: glusterfs5sds.commvault.com:/ws/disk5/ws_brick
Brick48: glusterfs6sds.commvault.com:/ws/disk5/ws_brick
Brick49: glusterfs4sds.commvault.com:/ws/disk6/ws_brick
Brick50: glusterfs5sds.commvault.com:/ws/disk6/ws_brick
Brick51: glusterfs6sds.commvault.com:/ws/disk6/ws_brick
Brick52: glusterfs4sds.commvault.com:/ws/disk7/ws_brick
Brick53: glusterfs5sds.commvault.com:/ws/disk7/ws_brick
Brick54: glusterfs6sds.commvault.com:/ws/disk7/ws_brick
Brick55: glusterfs4sds.commvault.com:/ws/disk8/ws_brick
Brick56: glusterfs5sds.commvault.com:/ws/disk8/ws_brick
Brick57: glusterfs6sds.commvault.com:/ws/disk8/ws_brick
Brick58: glusterfs4sds.commvault.com:/ws/disk9/ws_brick
Brick59: glusterfs5sds.commvault.com:/ws/disk9/ws_brick
Brick60: glusterfs6sds.commvault.com:/ws/disk9/ws_brick
Options Reconfigured:
performance.readdir-ahead: on
diagnostics.client-log-level: INFO
auth.allow: glusterfs1sds,glusterfs2sds,glusterfs3sds,glusterfs4sds.commvault.com,glusterfs5sds.commvault.com,glusterfs6sds.commvault.com

Thanks and Regards,
Ram

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy@xxxxxxxxxxxxx> wrote:

3.7.19

These are the only callers for removexattr, and only _posix_remove_xattr has the potential to do a removexattr, as posix_removexattr already makes sure that the key is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr happens only from the healing code of afr/ec. And that can only happen if the source brick doesn't have the gfid, which doesn't seem to match the situation you explained.

So there are only two possibilities:
1) The source directory in ec/afr doesn't have the gfid.
2) Something else removed these xattrs.

What is your volume info? Maybe that will give more clues.

PS: sys_fremovexattr is called only from posix_fremovexattr(), so that doesn't seem to be the culprit, as it also has checks to guard against gfid/volume-id removal.
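A sketch of a pair of probes that could separate these two cases (the translator path is the same assumed path as in Sanoj's steps; _posix_remove_xattr is an internal helper, so the first probe would likely need the glusterfs debuginfo package installed; the syscall probes are standard systemtap tapset probes and catch any process on the node, not just gluster daemons):

    # sketch: distinguish the two possibilities above
    # 1) the afr/ec heal-driven bulk path inside glusterfsd
    probe process("/usr/lib64/glusterfs/3.12dev/xlator/storage/posix.so").function("_posix_remove_xattr")
    {
        printf("heal path hit: %s (pid %d)\n", execname(), pid())
        print_ubacktrace()
    }
    # 2) something else removing xattrs directly on the brick filesystems
    probe syscall.removexattr, syscall.lremovexattr, syscall.fremovexattr
    {
        printf("%s (pid %d): %s(%s)\n", execname(), pid(), name, argstr)
    }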
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users