On 12/15/2016 01:41 PM, Nithya Balachandran wrote:
> On 15 December 2016 at 18:07, Xavier Hernandez <xhernandez@xxxxxxxxxx> wrote:
>> On 12/15/2016 12:48 PM, Raghavendra Gowdappa wrote:
>>> I need to step back a little to understand the RCA correctly.
>>>
>>> If I understand the code correctly, the callstack which resulted in the
>>> failed setattr is (in the rebalance process):
>>>
>>> dht_lookup -> dht_lookup_cbk -> dht_lookup_everywhere ->
>>> dht_lookup_everywhere_cbk -> dht_lookup_everywhere_done ->
>>> dht_linkfile_create -> dht_lookup_linkfile_create_cbk ->
>>> dht_linkfile_attr_heal -> setattr
>>>
>>> However, this setattr doesn't change the file type.
>>>
>>> <dht_linkfile_attr_heal>
>>>         STACK_WIND (copy, dht_linkfile_setattr_cbk, subvol,
>>>                     subvol->fops->setattr, &copy_local->loc,
>>>                     &stbuf, (GF_SET_ATTR_UID | GF_SET_ATTR_GID),
>>>                     xattr);
>>> </dht_linkfile_attr_heal>
>>>
>>> As can be seen above, the setattr call only changes UID/GID. So I am at a
>>> loss to explain why the file type changed. Does anyone have any other
>>> explanation?
>>
>> Does the inode passed to setattr represent the regular file just created,
>> or does it contain information about the previous file (the one being
>> replaced), which in this case is a symbolic link?
>
> Right, IIUC, the reason this fails is that the inode for the actual symlink
> has type LINK, which does not match the stbuf returned in the setattr on
> the linkto file. The file does _not_ change types.
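To make that failure mode concrete, here is a minimal standalone sketch (plain C, not actual GlusterFS code; the types and names are simplified stand-ins for ia_type_t, inode and iatt) of the kind of type check that trips when the reused inode still describes the symlink while the setattr reply describes the new linkto file:

<type_check_sketch.c>
/* Standalone model only - illustrates the check, not the real xlator code. */
#include <stdio.h>
#include <errno.h>

typedef enum { IA_IFREG, IA_IFLNK } ia_type_t;   /* simplified file types */

struct inode { ia_type_t ia_type; };              /* type cached on the inode */
struct iatt  { ia_type_t ia_type; };              /* type returned by the fop */

/* An xlator validating a setattr reply against the inode it was called on. */
static int
setattr_cbk_check (struct inode *inode, struct iatt *stbuf)
{
        if (inode->ia_type != stbuf->ia_type)
                return -EIO;                      /* inconsistent view -> fail */
        return 0;
}

int
main (void)
{
        struct inode cached   = { IA_IFLNK };     /* inode of the original symlink */
        struct iatt  returned = { IA_IFREG };     /* stbuf of the new linkto file  */

        printf ("check = %d\n", setattr_cbk_check (&cached, &returned));
        return 0;
}
</type_check_sketch.c>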
That seems like a big problem to me. All fops should receive consistent data; otherwise their behavior is undefined. Any xlator may rely on the received data to decide what to do. In this particular case, ec could check the data from the answer, but in the future another xlator may need to decide what to do before getting the answers. If we receive inconsistent data, that won't be possible.
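As a rough illustration of that second case (again plain C, purely hypothetical, not any existing xlator), an xlator that has to choose its strategy from the in-args alone is taking that decision before any reply exists, so stale data in the shared inode leads it down the wrong path:

<wind_path_sketch.c>
/* Standalone model only - decision taken from in-args, before any reply. */
#include <stdio.h>

typedef enum { IA_IFREG, IA_IFLNK } ia_type_t;

struct inode { ia_type_t ia_type; };

/* Decide, purely from the incoming inode, how to handle the fop. */
static const char *
decide_before_wind (struct inode *inode)
{
        /* If the inode still claims "symlink" while the file on disk is
         * already a regular linkto file, this choice is based on stale data. */
        return (inode->ia_type == IA_IFLNK) ? "symlink path" : "regular-file path";
}

int
main (void)
{
        struct inode stale = { IA_IFLNK };        /* what the upper layer handed down */

        printf ("chosen: %s\n", decide_before_wind (&stale));
        return 0;
}
</wind_path_sketch.c>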
It doesn't seem right to me to share the same inode to represent two distinct files, even if they are related to the same file from the top-level view. I think each DHT subvolume should have its own private inode representation, especially if they represent different files.
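A rough, purely illustrative model of that idea (plain C, not a patch against DHT; all names hypothetical) would keep a private per-subvolume record of what each subvolume actually holds, so whatever is wound to a given subvolume is consistent with that subvolume's own file:

<per_subvol_sketch.c>
/* Standalone model only - per-subvolume views instead of one shared inode. */
#include <stdio.h>

#define SUBVOL_COUNT 2

typedef enum { IA_IFREG, IA_IFLNK } ia_type_t;

/* One private view per subvolume instead of a single shared inode type. */
struct dht_subvol_view { ia_type_t ia_type; };

struct dht_object { struct dht_subvol_view view[SUBVOL_COUNT]; };

static const char *
type_name (ia_type_t t)
{
        return (t == IA_IFLNK) ? "symlink" : "regular";
}

int
main (void)
{
        struct dht_object obj = {
                .view = {
                        { IA_IFLNK },     /* subvol 0: the original symlink  */
                        { IA_IFREG },     /* subvol 1: the new linkto file   */
                },
        };

        /* A setattr wound to subvolume 1 carries the regular-file view while
         * subvolume 0 keeps its symlink view - no shared, conflicting state. */
        for (int i = 0; i < SUBVOL_COUNT; i++)
                printf ("subvol %d sees a %s file\n", i,
                        type_name (obj.view[i].ia_type));

        return 0;
}
</per_subvol_sketch.c>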
Xavi