Hi David,

FYI, I've updated this patch at [1].

[1] https://github.com/lostjeffle/linux/commit/589dd838dc539aee291d1032406653a8f6269e6f

This new version mainly adds cachefiles_ondemand_flush_reqs(), which
drains the pending read requests when cachefilesd is about to exit.

On 2/9/22 2:00 PM, Jeffle Xu wrote:
> This patch introduces a new devnode 'cachefiles_ondemand' to support the
> newly introduced on-demand read mode.
>
> The precondition for the on-demand reading semantics is that all blob
> files have been placed under the corresponding directory, with the
> correct file size (as sparse files), from the very beginning. When the
> upper fs starts to access a blob file, it will "cache miss" (hit the
> hole) and then turn to the user daemon to prepare the data.
>
> The interaction between the kernel and the user daemon is described
> below.
> 1. On a cache miss, the .ondemand_read() callback of the corresponding
> fscache backend is called to prepare the data. As for cachefiles, it
> just packages the related metadata (the file range to read, etc.) into
> a pending read request, and the process triggering the cache miss then
> sleeps until the corresponding data gets fetched later.
> 2. The user daemon needs to poll on the devnode ('cachefiles_ondemand'),
> waiting for pending read requests.
> 3. Once there is a pending read request, the user daemon will be
> notified and shall read the devnode ('cachefiles_ondemand') to fetch
> one pending read request to process.
> 4. For the fetched read request, the user daemon needs to somehow
> prepare the data (e.g. download it from a remote server over the
> network) and then write the fetched data into the backing file to fill
> the hole.
> 5. After that, the user daemon needs to notify the cachefiles backend by
> writing a 'done' command to the devnode ('cachefiles_ondemand'). This
> also wakes up the process that triggered the cache miss.
> 6. By the time the process wakes up, the data is ready in the backing
> file. The process can then re-initiate a read against the backing file.
>
> Signed-off-by: Jeffle Xu <jefflexu@xxxxxxxxxxxxxxxxx>
> ---

--
Thanks,
Jeffle
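
P.S. For anyone following along, below is a minimal sketch of what the
user-daemon loop for steps 2-5 could look like. The devnode path, the
request layout (struct ondemand_read_req), the fetch_from_remote()
helper, and the exact 'done' command syntax are placeholders I made up
for illustration; they are not taken from the patch itself.

/*
 * Sketch of a cachefilesd-style loop: poll the devnode, read one
 * pending read request, fill the hole in the backing file, then
 * acknowledge with a 'done' command.  All names below are assumptions.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical wire format of one pending read request. */
struct ondemand_read_req {
	uint64_t id;		/* request id echoed back in the 'done' command */
	uint64_t off;		/* start of the hole in the backing file */
	uint64_t len;		/* number of bytes to fill */
	char	 path[256];	/* backing file under the cache directory */
};

/* Placeholder: fetch [off, off + len) of the blob from the remote source. */
extern ssize_t fetch_from_remote(const char *path, uint64_t off,
				 uint64_t len, char *buf);

int main(void)
{
	static char buf[1 << 20];
	int dev = open("/dev/cachefiles_ondemand", O_RDWR);	/* assumed path */
	struct pollfd pfd = { .fd = dev, .events = POLLIN };

	if (dev < 0)
		return 1;

	for (;;) {
		struct ondemand_read_req req;
		char done[64];
		ssize_t n;
		int fd;

		/* Step 2: wait until the kernel queues a pending request. */
		if (poll(&pfd, 1, -1) <= 0)
			continue;

		/* Step 3: read one pending read request from the devnode. */
		if (read(dev, &req, sizeof(req)) != sizeof(req))
			continue;

		/* Step 4: prepare the data and fill the hole in the backing file. */
		n = fetch_from_remote(req.path, req.off, req.len, buf);
		fd = open(req.path, O_WRONLY);
		if (fd >= 0 && n > 0)
			pwrite(fd, buf, n, req.off);
		if (fd >= 0)
			close(fd);

		/* Step 5: acknowledge; this wakes the process that hit the miss. */
		snprintf(done, sizeof(done), "done %llu",
			 (unsigned long long)req.id);
		write(dev, done, strlen(done));
	}
	return 0;
}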