On 01/27/25 at 05:19pm, Kairui Song wrote:
> On Mon, Jan 20, 2025 at 10:39 AM Baoquan He <bhe@xxxxxxxxxx> wrote:
> >
> > On 01/13/25 at 01:34pm, Kairui Song wrote:
> > > On Sat, Jan 4, 2025 at 1:46 PM Baoquan He <bhe@xxxxxxxxxx> wrote:
> > > >
> > > > On 12/31/24 at 01:46am, Kairui Song wrote:
> > > > > From: Kairui Song <kasong@xxxxxxxxxxx>
> > > > >
> > > > > The flag SWP_SCANNING was used as an indicator of whether a device
> > > > > is being scanned for allocation, and prevents swapoff. Combined with
> > > > > SWP_WRITEOK, they work as a set of barriers for a clean swapoff:
> > > > >
> > > > > 1. Swapoff clears SWP_WRITEOK, so allocation requests will see
> > > > > ~SWP_WRITEOK and abort, as this is serialized by si->lock.
> > > > > 2. Swapoff unuses all allocated entries.
> > > > > 3. Swapoff waits for the SWP_SCANNING flag to be cleared, so ongoing
> > > > > allocations will stop, preventing UAF.
> > > > > 4. Now swapoff can free everything safely.
> > > > >
> > > > > This makes the allocation path have a hard dependency on
> > > > > si->lock. Allocations always have to acquire si->lock first for
> > > > > setting SWP_SCANNING and checking SWP_WRITEOK.
> > > > >
> > > > > This commit removes this flag, and just uses the existing per-CPU
> > > > > refcount instead to prevent UAF in step 3, which serves well for
> > > > > such usage without a dependency on si->lock, and scales very well too.
> > > > > Just hold a reference during the whole scan and allocation process.
> > > > > Swapoff will kill and wait for the counter.
> > > > >
> > > > > And to prevent any allocation from happening after step 1, so that
> > > > > the unuse in step 2 can ensure all slots are free, swapoff will
> > > > > acquire the ci->lock of each cluster one by one to ensure all
> > > > > allocations see ~SWP_WRITEOK and abort.
> > > >
> > > > Changing to use si->users is great, but I wonder why we need to
> > > > acquire each ci->lock now. After step 1, we have cleared SWP_WRITEOK
> > > > and taken the si off the swap_avail_heads list. No matter what, we
> > > > just need to wait for p->comm's completion and continue, so why
> > > > bother looping to acquire each ci->lock?
> > > >
> > >
> > > Hi Baoquan,
> > >
> > > Waiting for p->comm's completion must be done after unuse is called
> > > (unuse will need to take the si->users refcount, so it can't be dead
> > > yet), but unuse must be called after no one will allocate any new
> > > entry. That is guaranteed by the loop of ci->lock acquiring.
> >
> > Sorry for the late response, Kairui. I went through the code flow of
> > swap allocation several times, but still haven't understood why the
> > loop of ci->lock acquiring is needed here. Once si->flags &=
> > ~SWP_WRITEOK is executed in del_from_avail_list() when swapping off,
> > even if an allocation is still ongoing, it will fail in
> > cluster_alloc_range() at the 'if (!(si->flags & SWP_WRITEOK))' check.
> > Then that allocation
>
> Hi Baoquan,
>
> Thanks for the careful review.
>
> > request will fail and return, meaning no new swap entry|slot
> > allocation will be done. Then unuse won't be impacted at all. In this
> > case, why do we care about it?
> >
> > Please forgive my stupidity, but could you elaborate on the case where
> > this kind of still-ongoing swap allocation can happen during its swap
> > device's swapoff? Could you give an example of the concurrent
> > execution flows?
>
> There is no barrier or lock between clearing the flag and try_to_unuse,
> so nothing guarantees the "if (!(si->flags & SWP_WRITEOK))" check in
> cluster_alloc_range will see the updated flag. The ci->lock loop acts
> like a full memory barrier: any allocation that takes the lock after
> the loop will definitely see the updated flags, and try_to_unuse will
> only go on after all allocations have either stopped or are guaranteed
> to see the updated flags. In practice this problem is almost impossible
> to hit, but it is possible in theory.

Got it now. swap_avail_lock is not taken during allocation, and we don't
take it when accessing si->flags in cluster_alloc_range() because that
could bring in new lock contention. Thanks a lot for the patient
explanation.
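
To make the ordering argument concrete, below is a minimal userspace C
sketch of the same pattern. It is not the kernel code: struct
fake_cluster, fake_alloc(), fake_swapoff_barrier() and FAKE_NR_CLUSTERS
are hypothetical stand-ins for the cluster array, cluster_alloc_range(),
the swapoff path and the cluster count. It only illustrates that the
flag store by itself carries no ordering, while taking and dropping each
per-cluster lock both waits out allocators already inside their critical
section and publishes the cleared flag to any allocator that locks a
cluster afterwards.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define FAKE_NR_CLUSTERS 4

    struct fake_cluster {
            pthread_mutex_t lock;   /* plays the role of ci->lock */
            int free_slots;
    };

    static struct fake_cluster clusters[FAKE_NR_CLUSTERS];

    /* Plays the role of SWP_WRITEOK; accesses are deliberately relaxed. */
    static atomic_bool writeok = true;

    /* Allocation side: the flag is only checked under the cluster lock. */
    static int fake_alloc(int ci)
    {
            int slot = -1;

            pthread_mutex_lock(&clusters[ci].lock);
            if (atomic_load_explicit(&writeok, memory_order_relaxed) &&
                clusters[ci].free_slots > 0)
                    slot = --clusters[ci].free_slots;
            pthread_mutex_unlock(&clusters[ci].lock);
            return slot;
    }

    /* Swapoff side: clear the flag, then take and drop every cluster lock. */
    static void fake_swapoff_barrier(void)
    {
            /* Step 1: clear the flag; this store alone orders nothing. */
            atomic_store_explicit(&writeok, false, memory_order_relaxed);

            /*
             * Step 2: the lock/unlock pair on every cluster guarantees that
             * (a) any allocator currently inside fake_alloc() for that
             *     cluster finishes before we move past it, and
             * (b) any allocator that takes the lock afterwards observes
             *     writeok == false and bails out.
             * Only after this loop would it be safe to reclaim all
             * allocated slots (the equivalent of try_to_unuse()).
             */
            for (int i = 0; i < FAKE_NR_CLUSTERS; i++) {
                    pthread_mutex_lock(&clusters[i].lock);
                    pthread_mutex_unlock(&clusters[i].lock);
            }
    }

    int main(void)
    {
            for (int i = 0; i < FAKE_NR_CLUSTERS; i++) {
                    pthread_mutex_init(&clusters[i].lock, NULL);
                    clusters[i].free_slots = 8;
            }

            printf("before barrier: slot %d\n", fake_alloc(0)); /* succeeds */
            fake_swapoff_barrier();
            printf("after barrier: slot %d\n", fake_alloc(0));  /* prints -1 */
            return 0;
    }

In the sketch the second fake_alloc() call is guaranteed to return -1:
it can only take clusters[0].lock after fake_swapoff_barrier() has
released it, and that release/acquire pair orders the relaxed flag store
before the flag check, which is the same reasoning as for the ci->lock
loop above.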