On 2019/2/17 8:23 AM, Nix wrote:
> On 16 Feb 2019, Coly Li told this:
>
>> The reason we care about metadata here is, for some file systems, they
>> do metadata readahead as sequential requests, and we want to keep such
>> sequential metadata I/Os on the cache device.
>
> ... and something is still not quite right here. I just did a git status
> on the usual evil test case, a Chromium git repo on XFS-on-bcache-on-md.
> I've done a complete backup indexing run before, and 10GiB or so of
> metadata has hit the cache device, yet the git status still caused it to
> pound away at the disk for fifteen minutes or so, very seekily, with
> bypassed I/O going up and nothing much happening to the cache hits *or*
> cache misses.
>
> (I have boosted the sequential_cutoff to 6144K on the grounds that, with
> my RAID chunk size of 512K on the three data disks of a 5-disk RAID-6
> and a sequential read rate of 200MiB/s/disk, it's only once you pass
> about 6144K that the time taken to read exceeds the typical seek time of
> about 7--10ms. A bit more stuff is getting cached, but not... *whatever*
> git is doing here.)
>
> I'll do a drop_caches soon and try again, and examine what's going on
> with blktrace, because something strange is happening here, I think.
>
> Hm, actually it looks like "git status" reads the first line of every
> file as well, which obviously a backup index run is not going to do
> (that just stat()s everything). It's still not clear to me why *that*
> was being bypassed, though. Reading a few hundred bytes from each of
> tens of thousands of files seems like exactly the sort of thing bcache
> should be caching... more analysis needed, I think. Let's see, can I get
> someone to give me a research grant :P
>
>> For normal file readahead, if it is sequential I/O and exceeds the
>> sequential cutoff threshold, bcache won't cache it. But if it is
>> random, bcache may cache it. It is about I/O patterns, not priorities.
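The 6144K figure quoted above can be checked with a quick back-of-the-envelope calculation. This is just a sketch of the reasoning in the mail, assuming 3 data disks (5-disk RAID-6), 200 MiB/s sequential read per disk, and the upper 10 ms end of the quoted seek-time range:

```python
# Back-of-the-envelope check of the 6144K sequential_cutoff figure.
# Assumed inputs, taken from the mail above:
data_disks = 3                       # data disks in a 5-disk RAID-6
per_disk_rate = 200 * 1024 * 1024    # bytes/s (200 MiB/s per disk)
seek_time = 0.010                    # seconds (upper end of 7--10 ms)

# Break-even transfer size: the point where reading the data sequentially
# takes as long as a typical seek, so bypassing the cache stops being a win.
break_even_bytes = data_disks * per_disk_rate * seek_time
print(break_even_bytes / 1024, "KiB")   # 6144.0 KiB
```

On a running system this kind of value is written into the per-backing-device sysfs knob, e.g. `echo 6144k > /sys/block/bcache0/bcache/sequential_cutoff` (device name depends on your setup).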
> Unless you're using the ioprio patch, in which case that matters too ;)
> (different sort of priority though.)

Aha, you were waiting for me here :-) I am open to the ioprio patch, and I also plan to provide it to some partners and wait for their response (not done yet, but it is on my plan). What concerns me is the confusing configuration interface; we need to make it much simpler. We don't need the ioprio check to be very accurate if we can find a simpler way to configure it. We just need someone to work on it, that is it. Thanks.

-- 
Coly Li