Thanks Joe,

Just to clarify, I'm seeing 8 seconds to run ls -l in a dir containing 2 files. I mentioned that the _parent_ dir contains 123k items in case it was relevant, although it seems that hitting the dir with many requests is the key factor.

Aaron

From: Joe Julian [mailto:joe@xxxxxxxxxxxxxxxx]
The -l flag is causing a metadata lookup for every file in the directory; ls does that with individual fstat calls to each directory entry. That's a lot of tiny network round trips, with fops that don't even fill a standard frame, so each frame carries a high percentage of TCP overhead. Add to that the replica check, to ensure you're not getting stale data, and you have another round trip for each file. Your 123k directory entries require several frames of getdirent and over 492k frames for the individual fstat calls. That's roughly 16us per frame.

On November 27, 2017 5:18:56 AM PST, Aaron Roberts <aroberts@xxxxxxxxxxxxxx> wrote:
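For what it's worth, Joe's numbers work out as a simple back-of-envelope calculation. This sketch assumes 4 frames per entry (an fstat request/reply pair plus a replica-check request/reply pair) — that factor is my inference from the thread, not something measured:

```python
# Back-of-envelope estimate of the per-frame cost Joe describes.
# Assumption (mine): 4 frames per directory entry -- fstat request +
# reply, plus replica-check request + reply.

entries = 123_000          # directory entries in the parent dir
frames_per_entry = 4       # fstat + replica check, each a round trip
total_frames = entries * frames_per_entry   # ~492k frames

elapsed_s = 8.0            # observed wall-clock time for ls -l
per_frame_us = elapsed_s / total_frames * 1e6

print(f"{total_frames:,} frames, ~{per_frame_us:.0f} us per frame")
# prints: 492,000 frames, ~16 us per frame
```

So even at ~16 microseconds per round trip, half a million serialized tiny fops adds up to the 8 seconds observed.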
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users