Re: [RFC PATCH v1 1/5] locking/atomic: Implement atomic_fetch_and_or

On Thu, Jul 29, 2021 at 06:18:28PM +0800, hev wrote:
> On Thu, Jul 29, 2021 at 5:39 PM Will Deacon <will@xxxxxxxxxx> wrote:
> >
> > On Wed, Jul 28, 2021 at 07:48:22PM +0800, Rui Wang wrote:
> > > From: wangrui <wangrui@xxxxxxxxxxx>
> > >
> > > This patch introduces a new atomic primitive, 'and_or'. It may have three
> > > types of implementations:
> > >
> > >  * The generic implementation is based on arch_cmpxchg.
> > >  * The hardware implementation uses a single atomic 'and_or' instruction.
> >
> > Do any architectures actually support this instruction?
> No, I'm not sure now.
> 
> >
> > On arm64, we can clear arbitrary bits and we can set arbitrary bits, but we
> > can't combine the two in a fashion which provides atomicity and
> > forward-progress guarantees.
> >
> > Please can you explain how this new primitive will be used, in case there's
> > an alternative way of doing it which maps better to what CPUs can actually
> > do?
> I think we can easily exchange arbitrary bits of a machine word with an
> atomic andnot_or/and_or. Otherwise, we can only use xchg8/16 to do it. That
> depends on hardware support, and the key point is that the bits to be
> exchanged must live in the same sub-word. qspinlock adjusted its memory
> layout for this reason, and wastes some bits (_Q_PENDING_BITS == 8).

No, it's not about wasting bits -- short xchg() is exactly what you want to
do here, it's just that when you get more than 13 bits of CPU number (which
is, err, unusual) then we need more space in the lockword to track the tail,
and so the other fields end up sharing bytes.
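[Editorial illustration: the two shapes of tail exchange being discussed can be sketched roughly as follows. This is a hedged user-space approximation with invented names, using C11 atomics rather than the kernel's qspinlock code; it assumes a little-endian layout and mixed-size atomic access through a union, which the kernel relies on but the C standard does not formally bless.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* A toy 32-bit lockword: the tail lives in its own 16-bit half
 * (names and layout are mine, not the kernel's). */
union lockword {
	_Atomic uint32_t val;
	struct {
		_Atomic uint16_t locked_pending;	/* low half */
		_Atomic uint16_t tail;			/* high half, little-endian assumed */
	};
};

/* When the tail occupies whole bytes of its own, a short xchg on just
 * that half suffices -- no retry loop needed. */
static uint32_t xchg_tail_short(union lockword *l, uint32_t tail)
{
	return (uint32_t)atomic_exchange(&l->tail, (uint16_t)(tail >> 16)) << 16;
}

/* When the tail shares bytes with other fields, the whole word must be
 * compare-and-swapped, retrying if another CPU changed it in between. */
static uint32_t xchg_tail_cmpxchg(union lockword *l, uint32_t tail)
{
	uint32_t old = atomic_load_explicit(&l->val, memory_order_relaxed);

	while (!atomic_compare_exchange_weak(&l->val, &old,
					     (old & 0xffffu) | tail))
		;
	return old & ~0xffffu;
}
```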

> In the case of qspinlock's xchg_tail, I think there is no change in the
> assembly code after switching to atomic andnot_or, for architectures that
> support CAS instructions. But for LL/SC style architectures, we can implement
> xchg on sub-words better with the new primitive and clear[1]. And in fact, it
> reduces the number of retries when the two loaded memory values are not
> equal.

The only system using LL/SC with this many CPUs is probably Power, and their
atomics are dirt slow anyway.
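[Editorial illustration: the sub-word exchange hev describes above could be sketched like this. Names are mine; since no ISA actually provides a single and_or instruction, the proposed primitive is emulated here with a C11 CAS loop, which is the very fallback under discussion.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for the proposed and_or primitive, emulated with a CAS loop. */
static uint32_t fetch_and_or(_Atomic uint32_t *v,
			     uint32_t mask_and, uint32_t mask_or)
{
	uint32_t old = atomic_load_explicit(v, memory_order_relaxed);

	while (!atomic_compare_exchange_weak(v, &old,
					     (old & mask_and) | mask_or))
		;
	return old;
}

/* Sub-word xchg built on and_or: byte 'n' of *v is atomically replaced
 * by 'newval' (clear that byte's lane, OR in the new value), and the
 * old byte is returned. */
static uint8_t xchg8(_Atomic uint32_t *v, unsigned int n, uint8_t newval)
{
	unsigned int shift = n * 8;
	uint32_t old = fetch_and_or(v, ~(0xffu << shift),
				    (uint32_t)newval << shift);

	return (uint8_t)(old >> shift);
}
```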

> If the hardware supports these atomic semantics, we will get better
> performance and flexibility. I think this is easy for hardware to support.

The issue I have is exposing these new functions as first-class members of
the atomics API. On architectures with AMO instructions, falling back to
cmpxchg() will have a radically different performance profile when compared
to many of the other atomics operations and so I don't think we should add
them without very good justification.
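[Editorial illustration: the cmpxchg() fallback in question would look something like the following. This is a minimal user-space sketch using C11 atomics in place of the kernel's atomic_t/arch_cmpxchg; the name and argument order are assumptions, not the patch's actual code.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Generic fallback sketch: loop on compare-and-swap until the combined
 * and/or update is applied without interference from another CPU. */
static inline uint32_t atomic_fetch_and_or(_Atomic uint32_t *v,
					   uint32_t mask_and, uint32_t mask_or)
{
	uint32_t old = atomic_load_explicit(v, memory_order_relaxed);

	/* On failure, atomic_compare_exchange_weak reloads 'old', so the
	 * new value is recomputed from fresh data each iteration. */
	while (!atomic_compare_exchange_weak(v, &old,
					     (old & mask_and) | mask_or))
		;
	return old;	/* fetch semantics: return the pre-update value */
}
```

Unlike a native AMO instruction, this loop can retry indefinitely under contention, which is exactly the performance-profile gap being objected to.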

At the very least, we could update the atomics documentation to call out
unconditional functions which are likely to loop around cmpxchg()
internally. We already have things like atomic_add_unless() and
atomic_dec_if_positive() but their conditional nature makes it much less
surprising than something like atomic_and_or() imo.

Will
