Hello guys, I would like to get some advice on some problems we have
with our 3-host Gluster setup. Here is the setup used:
Please note that we also have the ACL option enabled on the volume mount.
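For reference, the mount on each host looks something like the line
below (the server name and mount point are placeholders; the volume
name "exp" is taken from the log prefixes further down):

    # mount the gluster volume with POSIX ACL support enabled
    mount -t glusterfs -o acl server1:/exp /mnt/exp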
Use case: a user submits jobs/tasks to a Spark cluster which has the
glusterfs volume mounted on each host. 13 tasks completed successfully
in ~30 min each (converting some logs to a JSON format and writing the
output to the gluster fs), but one had been blocked for more than 12
hours when we checked what was going wrong.

We found some log entries related to an inode lock in the brick log on
one host:

[2016-06-19 03:15:08.563397] E [inodelk.c:304:__inode_unlock_lock]
0-exp-locks: Matching lock not found for unlock 0-9223372036854775807,
by 10613ebc6c6a0000 on 0x6cee5c0f4730
[2016-06-19 03:15:08.563684] E [MSGID: 115053]
[server-rpc-fops.c:273:server_inodelk_cbk] 0-exp-server: 5375861:
INODELK /spark/user/20160328/_temporary/0/_temporary
(015bde3a-09d6-41a2-8e9f-7e7c5295d596) ==> (Invalid argument)
[Invalid argument]

Errors in the data log:

[2016-06-19 03:13:29.198676] I [MSGID: 109036]
[dht-common.c:8824:dht_log_new_layout_for_dir_selfheal] 0-exp-dht:
Setting layout of /spark/user/20160328/_temporary/0/_temporary/at
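For reference, the DHT layout xattrs that this self-heal message refers
to can be inspected directly on a brick with something like the command
below (the brick path here is illustrative, not our actual layout):

    # dump the trusted.* xattrs (incl. the DHT layout) of the directory on a brick
    getfattr -m . -d -e hex /data/brick1/exp/spark/user/20160328/_temporary/0/_temporary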
And these entries are also spamming the data log when an action is done
on the fs:

[2016-06-19 13:58:22.817308] I [dict.c:462:dict_get]
(-->/usr/lib64/glusterfs/3.8.0/xlator/debug/io-stats.so(+0x13628)
[0x6f0655cd1628]
-->/usr/lib64/glusterfs/3.8.0/xlator/system/posix-acl.s

We took a statedump and got confirmation that some processes were in a
blocked state. We then cleared the lock on the blocked inode and the
Spark job finally finished (with errors).
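Roughly, the commands we ran were the following (volume name "exp" is
taken from the log prefixes above; the clear-locks range argument here
is illustrative, the real one came from our statedump):

    # dump the brick state, including lock tables (output goes to /var/run/gluster)
    gluster volume statedump exp

    # clear the granted inode lock held on the blocked directory;
    # 0,0-0 is the generic full-range example from the admin guide
    gluster volume clear-locks exp /spark/user/20160328/_temporary/0/_temporary \
        kind granted inode 0,0-0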
What could be the root cause of these locks?

Thanks for your help!

Florian