Dear NFS fellows,

we have noticed a behavior of the NFS client when iterating over a big
directory: the client re-requests entries that it has already received.
For example, a client issues READDIR on a directory with 1k files. The
initial cookie is 0, maxcount is 32768:

  c -> s cookie 0
  s -> c last cookie 159
  c -> s cookie 105
  s -> c last cookie 259
  c -> s cookie 207
  ... and so on.

The interesting thing is that if I mount with rsize=8192 (maxcount
8192), the first couple of requests ask for the correct cookies - 0,
43, 81, 105. Again 105, just as with maxcount 32768.

To me it looks like there is some kind of internal page (actually
NFS_MAX_READDIR_PAGES) alignment, and entries which do not fit into the
initially allocated PAGE_SIZE * NFS_MAX_READDIR_PAGES memory simply get
dropped. As about 30% of each reply is thrown away, listing a large
directory may produce many more requests than necessary.

Is this expected behavior?

Thanks in advance,
   Tigran.
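
P.S. To make the suspected mechanism concrete, below is a small
userspace simulation of what I think is going on. This is not the real
fs/nfs/dir.c logic; the flat integer cookies and the per-entry sizes
(WIRE_ENTRY_SIZE, DECODED_ENTRY_SIZE) are made-up assumptions, chosen
only to roughly reproduce the trace above.

/* Hypothetical model: the server fills each READDIR reply up to
 * maxcount bytes of XDR, but the client decodes the entries into a
 * fixed chunk of PAGE_SIZE * NFS_MAX_READDIR_PAGES bytes and drops
 * whatever does not fit. The next request then resumes from the cookie
 * of the last entry that was actually stored, so the server re-sends
 * the dropped tail of the previous reply.
 */
#include <stdio.h>

#define PAGE_SIZE             4096
#define NFS_MAX_READDIR_PAGES 8
#define CACHE_CAPACITY        (PAGE_SIZE * NFS_MAX_READDIR_PAGES)

#define NENTRIES           1000  /* directory with 1k files */
#define MAXCOUNT           32768
#define WIRE_ENTRY_SIZE    200   /* assumed XDR size of one entry */
#define DECODED_ENTRY_SIZE 300   /* assumed in-cache size of one entry */

int main(void)
{
	unsigned long cookie = 0; /* simplified: cookie == entry index */
	int requests = 0;

	while (cookie < NENTRIES) {
		/* server side: as many entries as fit in maxcount */
		unsigned long sent = MAXCOUNT / WIRE_ENTRY_SIZE;
		if (cookie + sent > NENTRIES)
			sent = NENTRIES - cookie;

		/* client side: keep only what fits in the page chunk */
		unsigned long kept = CACHE_CAPACITY / DECODED_ENTRY_SIZE;
		if (kept > sent)
			kept = sent;

		printf("c -> s cookie %lu\n", cookie);
		printf("s -> c last cookie %lu (%lu entries dropped)\n",
		       cookie + sent, sent - kept);

		cookie += kept; /* resume from last stored entry */
		requests++;
	}

	printf("%d READDIR requests for %d entries\n", requests, NENTRIES);
	return 0;
}

With these made-up sizes the simulation drops about a third of every
reply and issues 10 READDIR requests for the 1k entries, where 7 would
suffice if nothing were thrown away - the same pattern we see on the
wire.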