Re: [PATCH v10 05/10] iommu/amd: Introduce helper function to update 256-bit DTE

On Wed, Nov 13, 2024 at 5:34 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Wed, Nov 13, 2024 at 03:36:14PM +0100, Uros Bizjak wrote:
> > On Wed, Nov 13, 2024 at 3:28 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Nov 13, 2024 at 03:14:09PM +0100, Uros Bizjak wrote:
> > > > > > Even without atomicity guarantee, __READ_ONCE() still prevents the
> > > > > > compiler from performing unwanted optimizations (please see the first
> > > > > > comment in include/asm-generic/rwonce.h) and unwanted reordering of
> > > > > > reads and writes when this function is inlined. This macro does cast
> > > > > > the read to volatile, but IMO it is much more readable to use
> > > > > > __READ_ONCE() than volatile qualifier.
> > > > >
> > > > > Yes it does, but please explain to me what "unwanted reordering" is
> > > > > allowed here?
> > > >
> > > > It is a static function that will be inlined by the compiler
> > > > somewhere, so "unwanted reordering" depends on where it will be
> > > > inlined. *IF* it will be called from safe code, then this limitation
> > > > for the compiler can be lifted.
> > >
> > > As long as the values are read within the spinlock the order does not
> > > matter. READ_ONCE() is not required to contain reads within spinlocks.
> >
> > Indeed. But then why complicate things with cmpxchg, when we have
> > exclusive access to the shared memory? No other thread can access the
> > data, protected by spinlock; it won't change between invocations of
> > cmpxchg in the loop, and atomic access via cmpxchg is not needed.
>
> This is writing to memory shared by HW and HW is doing a 256 bit
> atomic load.
>
> It is important that the CPU do a 128 bit atomic write.
>
> cmpxchg is not required, but a 128 bit store is. cmpxchg128 is the
> only primitive Linux offers.

If we want to exercise only the atomic property of cmpxchg16b, we can
look at arch/x86/lib/atomic64_set_cx8.S to see how cmpxchg8b is used
to implement the core of arch_atomic64_set() for x86_32:

SYM_FUNC_START(atomic64_set_cx8)
1:
/* we don't need LOCK_PREFIX since aligned 64-bit writes
 * are atomic on 586 and newer */
    cmpxchg8b (%esi)
    jne 1b

    RET
SYM_FUNC_END(atomic64_set_cx8)
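
The same trick, expressed in C for the 64-bit case, would be roughly
(just a sketch to show the pattern; atomic64_set_via_cmpxchg() is a
made-up name, and it assumes arch_try_cmpxchg64_local(), the 64-bit
sibling of the helper used below, is available):

static __always_inline void atomic64_set_via_cmpxchg(u64 *ptr, u64 val)
{
    u64 old = *ptr;

    /*
     * The compare is irrelevant here: on failure 'old' is updated and
     * the next iteration succeeds, so the loop degenerates into a
     * single atomic 64-bit store implemented with cmpxchg8b.
     */
    do {
    } while (!arch_try_cmpxchg64_local(ptr, &old, val));
}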

We *do* have arch_try_cmpxchg128_local(), which emits cmpxchg16b
without the lock prefix, and perhaps we can use it to build a 128-bit
store, something like:

static __always_inline void iommu_atomic128_set(u128 *ptr, u128 val)
{
    u128 old = *ptr;

    /*
     * The initial read does not need to be atomic: if it tears, the
     * cmpxchg simply fails, updates 'old' and the next iteration
     * succeeds.  The loop is only used to get a single atomic
     * 128-bit store out of cmpxchg16b.
     */
    do {
    } while (!arch_try_cmpxchg128_local(ptr, &old, val));
}

Then write_dte_upper128() would look like:

static void write_dte_upper128(struct dev_table_entry *ptr,
                               struct dev_table_entry *new)
{
    struct dev_table_entry old = {}; <--- do we need to initialize struct here?

    old.data128[1] = ptr->data128[1];

    /*
     * Preserve DTE_DATA2_INTR_MASK. This needs to be done here
     * because it must happen inside the
     * spin_lock(&dev_data->dte_lock) critical section.
     */
    new->data[2] &= ~DTE_DATA2_INTR_MASK;
    new->data[2] |= old.data[2] & DTE_DATA2_INTR_MASK;

    iommu_atomic128_set(&ptr->data128[1], new->data128[1]);
}

and write_dte_lower128() could be implemented in a similar way, for
example as sketched below.
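
A minimal sketch of that (assuming the lower half lives in
data128[0] and, unlike the upper half, needs no bits preserved; if
some lower bits also have to be kept, the same masking dance as above
applies):

static void write_dte_lower128(struct dev_table_entry *ptr,
                               struct dev_table_entry *new)
{
    /* Plain atomic 128-bit store of the lower half, no masking assumed. */
    iommu_atomic128_set(&ptr->data128[0], new->data128[0]);
}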

(I am away from the keyboard ATM, so the above is not tested, but you
get the idea...)

Uros.
