Re: [PATCH v4 00/30] NT synchronization primitive driver

On Mon, Apr 15, 2024 at 08:08:10PM -0500, Elizabeth Figura wrote:
> This patch series implements a new char misc driver, /dev/ntsync, which is used
> to implement Windows NT synchronization primitives.

This patch series does not apply to anything I have at hand, nor does
it state explicitly what it should be applied on top of.


> Hence I would like to request review from someone familiar with locking to make
> sure that the usage of low-level kernel primitives is correct and that the wait
> queues work as intended, and to that end I've CC'd the locking maintainers.

I am sadly very limited atm, but I'll try and read through it. If only I
could apply...



> == Patches ==
> 
> The semantics of the patches are broadly intended to match those of the
> corresponding Windows functions. For those not already familiar with the Windows
> functions (or their undocumented behaviour), patch 27/27 provides a detailed
> specification, and individual patches also include a brief description of the
> API they are implementing.
> 
> The patches making use of this driver in Wine can be retrieved or browsed here:
> 
>     https://repo.or.cz/wine/zf.git/shortlog/refs/heads/ntsync5

I don't suppose GE has it in his builds? Last time I tried, building
Wine was a bit of a pain.


> Some aspects of the implementation may deserve particular comment:
> 
> * In the interest of performance, each object is governed only by a single
>   spinlock. However, NTSYNC_IOC_WAIT_ALL requires that the state of multiple
>   objects be changed as a single atomic operation. In order to achieve this, we
>   first take a device-wide lock ("wait_all_lock") any time we are going to lock
>   more than one object at a time.
> 
>   The maximum number of objects that can be used in a vectored wait, and
>   therefore the maximum that can be locked simultaneously, is 64. This number is
>   NT's own limit.
> 
>   The acquisition of multiple spinlocks will degrade performance. This is a
>   conscious choice, however. Wait-for-all is known to be a very rare operation
>   in practice, especially with counts that approach the maximum, and it is the
>   intent of the ntsync driver to optimize wait-for-any at the expense of
>   wait-for-all as much as possible.
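
For reference, a minimal sketch of the locking scheme described above;
the names here (ntsync_device, ntsync_obj, wait_all_lock, lock_objs)
are hypothetical and need not match the driver's actual identifiers:

	#include <linux/spinlock.h>

	struct ntsync_obj {
		spinlock_t lock;
		/* object state ... */
	};

	struct ntsync_device {
		spinlock_t wait_all_lock;  /* taken before nesting object locks */
		/* ... */
	};

	/* Lock several objects so their state can be changed atomically.
	 * wait_all_lock serializes all multi-object operations against
	 * each other, so the per-object spinlocks can then be taken in
	 * any order without risking ABBA deadlock. */
	static void lock_objs(struct ntsync_device *dev,
			      struct ntsync_obj **objs, unsigned int count)
	{
		unsigned int i;

		spin_lock(&dev->wait_all_lock);
		for (i = 0; i < count; i++)
			spin_lock_nest_lock(&objs[i]->lock, &dev->wait_all_lock);
	}

Single-object operations would take only the one obj->lock, which is
what keeps the wait-for-any fast path cheap.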

Per the example of percpu-rwsem, it would be possible to create a
mutex-spinlock hybrid scheme, where the single-object locks are
spinlocks while held but can block when the global lock is pending,
and the global lock itself is always mutex-like.
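
A minimal sketch of what that hybrid could look like, built directly
on percpu_rw_semaphore; the hybrid_dev/hybrid_obj names are invented
for illustration:

	#include <linux/percpu-rwsem.h>
	#include <linux/spinlock.h>

	struct hybrid_dev {
		struct percpu_rw_semaphore rwsem;  /* percpu_init_rwsem() at setup */
	};

	struct hybrid_obj {
		spinlock_t lock;
	};

	/* Single-object path: percpu_down_read() is nearly free while no
	 * global operation is pending, but sleeps while one is. */
	static void obj_lock(struct hybrid_dev *dev, struct hybrid_obj *obj)
	{
		percpu_down_read(&dev->rwsem);
		spin_lock(&obj->lock);
	}

	static void obj_unlock(struct hybrid_dev *dev, struct hybrid_obj *obj)
	{
		spin_unlock(&obj->lock);
		percpu_up_read(&dev->rwsem);
	}

	/* Multi-object path: the write side excludes every per-object
	 * locker, so the holder can touch all the objects without nesting
	 * a single spinlock; percpu_up_write() releases it. */
	static void global_lock(struct hybrid_dev *dev)
	{
		percpu_down_write(&dev->rwsem);
	}

The cost is that the single-object path becomes sleepable, so it could
not be taken from atomic context.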

Whether all that is worth it, I don't know. Nesting 64 spinlocks
doesn't give me warm and fuzzy feelings though.



