On Fri, Apr 09, 2021 at 03:36:39PM +0800, Wang Yugui wrote:
> Hi,
>
> Some questions about the workqueue for percpu.
>
> > > > > And a question about this,
> > > > > upper caller:
> > > > >     nofs_flag = memalloc_nofs_save();
> > > > >     ret = btrfs_drew_lock_init(&root->snapshot_lock);
> > > > >     memalloc_nofs_restore(nofs_flag);
> > > >
> > > > The issue is here. nofs is set, which means percpu attempts an
> > > > atomic allocation. If it cannot find anything already allocated,
> > > > it isn't happy. This was done before memalloc_nofs_{save/restore}()
> > > > were pervasive.
> > > >
> > > > Percpu should probably try to allocate some pages if possible even
> > > > if nofs is set.
> > >
> > > Should we check and pre-alloc memory inside memalloc_nofs_restore()?
> > > Another memalloc_nofs_save() may come soon.
> > >
> > > Something like this in memalloc_nofs_save()?
> > >     if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_LOW)
> > >         pcpu_schedule_balance_work();
> >
> > Percpu does do this via a workqueue item. The issue is that in v5.9 we
> > introduced 2 types of chunks, but the count of free populated pages was
> > kept as a single total. So even if 1 chunk type dropped below the
> > threshold, the other chunk type might have had enough pages to mask it.
> > I'm queuing this for 5.12 and will send it out assuming it does fix
> > your problem.
>
> Is the workqueue for percpu maybe not strong enough (not scheduled?)
> under high CPU load?

Percpu is not really cheap memory to allocate, because it has an
amplification factor of NR_CPUS. As a result, percpu allocation on the
critical path is really not something that is expected to be high
throughput.

Ideally, things like btrfs snapshots should preallocate a number of
these structures rather than doing atomic allocations, because an atomic
allocation can in theory fail: even once percpu is taught to go to the
page allocator directly, it still cannot get enough pages if doing so
would require entering reclaim. The workqueue approach has been good
enough so far.
Technically there is a higher-priority workqueue that this work could be
scheduled on, but save for this miss on my part, the system workqueue
has worked out fine. In the future, as I mentioned above, it would be
good to support actually getting pages, but that is work that needs to
be tackled with a bit of care. I might target it for v5.14.

> This is our application pipeline:
>     file_pre_process |
>     bwa.nipt xx |
>     samtools.nipt sort xx |
>     file_post_process
>
> file_pre_process/file_post_process are fast, so they are often blocked
> by pipe input/output.
>
> 'bwa.nipt xx' is high-CPU-load, using almost all of the CPU cores.
>
> 'samtools.nipt sort xx' is high-memory-load; it keeps the input in
> memory. If memory is not enough, it saves the whole buffer to a temp
> file, so it is sometimes high-IO-load too (writing 60G or more to
> file).
>
> xfstests(generic/476) is just high-IO-load; its CPU/memory load is NOT
> high, so xfstests(generic/476) may be an easier case than our
> application pipeline.
>
> Although there is not yet a simple reproducer for the other problem
> that happened here, there is a fairly high chance that something is
> wrong in btrfs/mm/fs-buffer.
>
> The other problem (the OS froze without a call trace; a PANIC without
> an OOPS? The reason is still unknown) still happens.

I do not have an answer for this. I would recommend looking into kdump.

> Best Regards
> Wang Yugui (wangyugui@xxxxxxxxxxxx)
> 2021/04/09

Thanks,
Dennis
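[Editor's note on the kdump suggestion: a minimal manual setup looks
roughly like the sketch below. The reservation size and file paths are
illustrative assumptions and vary by distribution; most distros ship a
kdump service that automates these steps.]

```shell
# 1) Reserve memory for the crash kernel at boot, via the kernel
#    command line (size here is illustrative):
#        crashkernel=256M

# 2) Load a panic kernel with kexec (paths are distro-dependent):
kexec -p /boot/vmlinuz-"$(uname -r)" \
      --initrd=/boot/initramfs-"$(uname -r)".img \
      --reuse-cmdline

# 3) On a panic, the crash kernel boots; the old kernel's memory is
#    then available via /proc/vmcore for capture and later analysis,
#    e.g. with the "crash" utility.
```

This captures a usable dump even for a freeze that produces no call
trace on the console, which is the situation described above.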