Re: RFC: return d_type for non-plus READDIR

Hi Chuck,

On Wed, Mar 24, 2021 at 01:50:52PM +0000, Chuck Lever III wrote:
 
> "How much of a problem is it" -- I guess what I really want to
> see is some quantification of the problem, in numbers.
> 
> - Exactly which workloads benefit from having the DT information?
> - How much do they improve?
> - Which workloads are negatively impacted, and how much?
> - How are workloads impacted if the client requests DT
> information from servers that cannot support it efficiently?
> 
> Seems to me there will be some caching effects -- there are at
> least two caches between the server's persistent storage and the
> application. So I expect this will be a complex situation, at
> best.

Customer applications that would benefit are those that periodically need to
scan a tree with large directories, e.g. to find new files for document
exchange or messaging applications. Most of the apps that I've seen do this
were custom developed. Some standard CLI apps also fall in this category,
including "find" (with no predicates other than for type and name), and
"updatedb". 

How much do these improve? I think there are three cases. On EFS:

- Case 1: READDIR returns DT_UNKNOWN. The client needs to do a stat() for
every entry to get the file type. Throughput is approximately 2K
entries/s.
- Case 2: READDIR returns the actual d_type, but the server gets d_type
by reading the dirent inodes. Throughput is approximately 18K entries/s.
- Case 3: READDIR returns the actual d_type and does not need to read
inodes. Throughput is 200K entries/s.

(Caveat: EFS does not currently store d_type in our directories, so I did a
related test that should give the same results: for cases 2 and 3, I
measured a regular non-plus READDIR against two server configurations,
one where the server reads all dirent inodes and discards the results,
and one where it does not read any inodes.)

If the server stores d_type in its directories, then the only negative
impact that I can think of would be the extra 4 bytes for each dirent in the
NFS response. The exact overhead depends on the file name length, but
should typically be less than 5-7%. On the other hand,
if requesting d_type requires the server to read inodes, where previously it
did not, then there's an 11x throughput regression (case 2 vs. case 3).

Regarding caching, yes, great question. This was something we looked into as
well. In our tests, reading dirent inodes only when needed (i.e. for
READDIRPLUS) gave us a better overall cache hit rate, which we attribute to
lower pressure on the cache. That's a second reason why we want to only
request d_type if it's not going to force the server to read all inodes.

> So, alternatives might be:
> - Always requesting the DT information
> - Leveraging an existing mount option, like lookupcache=
> - A sysfs setting or a module parameter
> - A heuristic to guess when requesting the information is harmful
> - Enabling the request based on directory size or some other static
> feature of the directory
> - If this information is of truly great benefit, approaching server
> vendors to support it efficiently, and then have it always enabled
> on clients
> 
> Adding an administrative knob means we don't have a good understanding
> of how this setting is going to work. As an experimental feature, this
> is a great way to go, but for a permanent, long-term thing, let's keep
> in mind that client administration is a resource that has to scale
> well to cohorts of 100s of thousands of systems. The simpler and more
> automatic we can make it, the better off it will be for everyone.

Thanks for that! I'd be interested to hear if you think our data above is
compelling enough. Ideally we'd roll this out experimentally at first.
Whether we can make it a default, or whether we need a way to discover
the capability, would depend on how other server vendors handle this.

Geert


