On 2018-12-06 05:04, Davidlohr Bueso wrote:
On 12/3/18 6:02 AM, Roman Penyaev wrote:
The main change is the replacement of the spinlock with a rwlock, which is
taken for read in ep_poll_callback(); poll items are then added to the
tail of the list using the xchg atomic instruction. The write lock is taken
everywhere else in order to stop list modifications and guarantee that list
updates are fully completed (I assume that the write side of a rwlock does
not starve; it seems the qrwlock implementation has these guarantees).
It's good then that Will recently ported qrwlocks to arm64, which IIRC had
a bad case of writer starvation. In general, qrwlock will maintain
reader-to-writer acquisition ratios fairly well, but will favor readers
over writers in scenarios where there are too many tasks (more than ncpus).
Thanks for noting that. Then that should not be a problem, since the number
of parallel ep_poll_callback() calls can't be greater than the number of
CPUs because of the wq.lock, which is taken by the caller of
ep_poll_callback().

BTW, did someone make any estimations of how much the latency on the write
side increases if the number of readers is greater than the number of CPUs?
--
Roman