On Fri, Apr 08, 2022 at 07:50:55PM +0800, JeffleXu wrote:
> 
> 
> On 4/8/22 7:25 PM, Vivek Goyal wrote:
> > On Fri, Apr 08, 2022 at 10:36:40AM +0800, JeffleXu wrote:
> >>
> >>
> >> On 4/7/22 10:10 PM, Vivek Goyal wrote:
> >>> On Sat, Apr 02, 2022 at 06:32:50PM +0800, Jeffle Xu wrote:
> >>>> Move the dmap free worker kicker inside the critical region, so that
> >>>> an extra spinlock lock/unlock can be avoided.
> >>>>
> >>>> Suggested-by: Liu Jiang <gerry@xxxxxxxxxxxxxxxxx>
> >>>> Signed-off-by: Jeffle Xu <jefflexu@xxxxxxxxxxxxxxxxx>
> >>>
> >>> Looks good to me. Have you done any testing to make sure nothing is
> >>> broken?
> >>
> >> xfstests -g quick shows no regression. The tested virtiofs is mounted
> >> with "dax=always".
> >
> > I think xfstests might not trigger reclaim. You will probably have to
> > run something like blogbench with a small dax window, like 1G, so that
> > heavy reclaim happens.
> 
> Actually, I configured the DAX window to 8MB, i.e. 4 slots, when running
> xfstests. Thus I think the reclaim path was most likely triggered.
> 
> > For fun, I sometimes used to run it with a window of just, say, 16 dax
> > ranges, so that reclaim was so heavy that if there was a bug, it would
> > show up.
> 
> Yeah, my colleague once reported that a DAX window of 4KB would cause a
> hang in our internal OS (which is 4.19; we backported virtiofs to 4.19).
> But then I found that this issue doesn't exist in the latest upstream.
> The reason seems to be that in the upstream kernel,
> devm_memremap_pages() called in virtio_fs_setup_dax() will fail
> directly, since the dax window (4KB) is not aligned with the sparse
> memory section.

Given that our default chunk size is 2MB (FUSE_DAX_SHIFT), maybe it is
not a bad idea to enforce some minimum cache window size. IIRC, even one
range is not enough; a minimum of 2 is required for reclaim to not
deadlock. Hence, I guess it is not a bad idea to check the cache window
size and, if it is too small, reject it and disable dax.

Thanks
Vivek