Re: [git pull] vfs.git pile 3 - dcache

On Wed, Aug 03 2022 at 16:42, Linus Torvalds wrote:
> On Wed, Aug 3, 2022 at 4:24 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>> On Wed, Aug 03, 2022 at 06:59:36PM -0400, Steven Rostedt wrote:
>> >
>> >       preempt_disable_inlock() ?
>>
>> preempt_disable_locked()?
>
> Heh. Shed painting in full glory.
>
> Let's try just "preempt_enable_under_spinlock()" and see.
>
> It's a bit long, but it's still shorter than the existing usage pattern.
>
> And we don't have "inlock" anywhere else, and while "locked" is a real
> pattern we have, it tends to be about other things (ie "I hold the
> lock that you need, so don't take it").
>
> And this is _explicitly_ only about spinning locks, because sleeping
> locks don't do the preemption disable even without RT.
>
> So let's make it verbose and clear and unambiguous. It's not like I
> expect to see a _lot_ of those. Knock wood.
>
> We had a handful of those things before (in mm/vmstat), and now added
> another case to the dentry code. So it has become a pattern, but I
> really, really hope it's not exactly a common pattern.
>
> And so because it's not common, typing a bit more is a good idea - and
> making it really clear is probably also a good idea.

Sebastian and I looked over it.

The use cases in mm/vmstat are not really all under spinlocks. Depending
on the stats item, that code is also called from plain local_irq-disabled
or even just preempt-disabled regions, which makes the proposed name less
accurate than you describe.
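
For reference, such a wrapper would essentially boil down to the
RT-conditional preempt_disable() which mm/vmstat open codes today. A
minimal sketch only (assuming preempt_disable_under_spinlock() as the
counterpart name, which is my shorthand), not an actual implementation:

/*
 * Sketch only: the extra preempt disable is needed solely on
 * PREEMPT_RT, because there spinlock_t does not disable preemption.
 */
static __always_inline void preempt_disable_under_spinlock(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_disable();
}

static __always_inline void preempt_enable_under_spinlock(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		preempt_enable();
}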

A worse case is the u64_stats code, which is an ifdef maze (only partially
due to RT). Those stats updates can also be called from various contexts
where no spinlock is involved. That code is extra convoluted due to the
irqsave variants and the "optimizations" for 32bit UP. Removing the latter
would make a cleanup with write_seqcount_...() wrappers pretty simple.
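
To illustrate, once the 32bit UP special casing is gone, those wrappers
could be about as simple as the sketch below. It follows the existing
u64_stats_sync naming, but it is only a sketch of the direction, not a
patch:

/* 64bit can update a u64 atomically, so only 32bit needs the seqcount */
struct u64_stats_sync {
#if BITS_PER_LONG == 32
	seqcount_t	seq;
#endif
};

static inline void u64_stats_update_begin(struct u64_stats_sync *syncp)
{
#if BITS_PER_LONG == 32
	write_seqcount_begin(&syncp->seq);
#endif
}

static inline void u64_stats_update_end(struct u64_stats_sync *syncp)
{
#if BITS_PER_LONG == 32
	write_seqcount_end(&syncp->seq);
#endif
}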

Aside from that, we have RT-conditional preemption code in page_alloc()
and kmap_atomic(). Both only care about the task staying pinned to a CPU.
In page_alloc(), using preempt_disable() on !RT is more lightweight than
migrate_disable(). So something like task_[un]pin_temporary() might work
and be descriptive enough.
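
As a sketch of that idea only (the names are the ones floated above;
none of this exists):

/* Keep the task on its CPU; !RT gets away with disabling preemption */
static inline void task_pin_temporary(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		migrate_disable();
	else
		preempt_disable();
}

static inline void task_unpin_temporary(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		migrate_enable();
	else
		preempt_enable();
}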

For kmap_atomic(), it was decided back when we introduced kmap_local()
not to do a wholesale conversion, but to leave it to the
maintainers/developers to look at it on a case-by-case basis, as that
has quite some cleanup potential at the call sites. 18 months later we
still have 435 of the original 527 call sites. Sadly enough, there are
21 new instances vs. 71 removed, and about 20 related cleanup patches
have been ignored.
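
For completeness, the per-call-site conversion itself is usually as
trivial as the made-up example below; the cleanup potential is in
dropping the implicit preemption and pagefault disabling which most
call sites never needed:

/* Before: kmap_atomic() implicitly disables preemption and pagefaults */
static void copy_from_page_old(struct page *page, void *dst, size_t len)
{
	void *src = kmap_atomic(page);

	memcpy(dst, src, len);
	kunmap_atomic(src);
}

/* After: kmap_local_page() only provides the mapping, no side effects */
static void copy_from_page_new(struct page *page, void *dst, size_t len)
{
	void *src = kmap_local_page(page);

	memcpy(dst, src, len);
	kunmap_local(src);
}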

So either we come up with something generic or we just resort to
different wrappers for those use cases. I'll have another look with
Sebastian tomorrow.

Thoughts?

Thanks,

        tglx