Re: [RFC net-next 0/5] Suspend IRQs during preferred busy poll

> >>>> The value may not be obvious, but guidance (in the form of
> >>>> documentation) can be provided.
> >>>
> >>> Okay. Could you share a stab at what that would look like?
> >>
> >> The timeout needs to be large enough that an application can get a
> >> meaningful number of incoming requests processed without softirq
> >> interference. At the same time, the timeout value determines the
> >> worst-case delivery delay that a concurrent application using the same
> >> queue(s) might experience. Please also see my response to Samiullah
> >> quoted above. The specific circumstances and trade-offs might vary,
> >> that's why a simple constant likely won't do.
> > 
> > Thanks. I really do mean this as an exercise in what the documentation
> > in Documentation/networking/napi.rst will look like. That helps make
> > the case that the interface is reasonably easy to use (even if only
> > targeting advanced users).
> > 
> > How does a user measure how much time a process will spend
> > processing a meaningful number of incoming requests, for instance?
> > In practice, probably just a hunch?
> 
> As an example, we measure around 1M QPS in our experiments, fully 
> utilizing 8 cores, and knowing that memcached is quite scalable, we 
> can conclude that a single request takes about 8 us of processing time 
> on average. That led us to a small timeout (gro_flush_timeout) of 
> 20 us: large enough that a single request is likely not interfered 
> with, but otherwise as small as possible. If multiple requests arrive, 
> the system will quickly switch back to polling mode.
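
(Spelling out the arithmetic, since the derivation carries over to
other workloads:

\[
t_{\mathrm{req}} \approx \frac{8\ \mathrm{cores}}{10^{6}\ \mathrm{req/s}}
  = 8\ \mu\mathrm{s}, \qquad
\mathtt{gro\_flush\_timeout} = 20\ \mu\mathrm{s} \approx 2.5\,
  t_{\mathrm{req}}
\]

i.e. the small timeout is chosen as a small multiple of the measured
per-request service time.)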
> 
> At the other end, we have picked a very large irq_suspend_timeout of 
> 20,000 us to demonstrate that it does not negatively impact latency. 
> This would cover 2,500 requests, which is likely excessive, but was 
> chosen for demonstration purposes. One can easily measure the 
> distribution of epoll_wait batch sizes; batch sizes as low as 64 are 
> already very efficient, even in high-load situations.
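
For what it's worth, measuring that distribution takes nothing more
than counting epoll_wait return values. A minimal sketch, assuming an
already-populated epoll fd and eliding error handling:

#include <stdio.h>
#include <sys/epoll.h>

#define MAXEV 512

static unsigned long hist[MAXEV + 1];

/* Wrap the event loop's epoll_wait() call; each return value is one
 * batch size. */
static int wait_and_count(int epfd, struct epoll_event *evs)
{
	int n = epoll_wait(epfd, evs, MAXEV, -1);

	if (n >= 0)
		hist[n]++;
	return n;
}

/* Dump the histogram, e.g. at exit. */
static void dump_hist(void)
{
	for (int i = 0; i <= MAXEV; i++)
		if (hist[i])
			printf("batch=%d count=%lu\n", i, hist[i]);
}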

Overall Ack on both your and Joe's responses.

epoll_wait disables the suspend if no events are found and ep_poll
would go to sleep. As the paper also hints, the timeout is only there
for misbehaving applications that stop calling epoll_wait, correct?
If so, then picking a value is not that critical, as long as it is not
so low that no meaningful work gets done.
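
To check my reading of the mechanism, from the application's point of
view the lifecycle would be roughly the following. The comments
describe my understanding of the proposed kernel behavior; serve()
stands in for the application's request processing:

#include <sys/epoll.h>

void serve(struct epoll_event *evs, int n);	/* app-specific */

void event_loop(int epfd)
{
	struct epoll_event evs[64];

	for (;;) {
		/* While busy polling keeps finding events, IRQs stay
		 * suspended across successive epoll_wait() calls and
		 * irq_suspend_timeout is re-armed; it fires only if the
		 * application stops calling epoll_wait(). If busy
		 * polling finds nothing and ep_poll is about to sleep,
		 * the suspend is lifted and IRQs are re-enabled before
		 * blocking. */
		int n = epoll_wait(epfd, evs, 64, -1);

		if (n > 0)
			serve(evs, n);
	}
}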

> Also see next paragraph.
> 
> > Playing devil's advocate some more: given that ethtool usecs have to
> > be chosen with a similar trade-off between latency and efficiency,
> > could a multiplicative factor of this (or gro_flush_timeout, same
> > thing) be sufficient and easier to choose? The documentation does
> > state that the value chosen must be >= gro_flush_timeout.
> 
> I believe this would take away flexibility without gaining much. You'd 
> still want some sort of admin-controlled 'enable' flag, so you'd still 
> need some kind of parameter.
> 
> When using our scheme, the factor between gro_flush_timeout and 
> irq_suspend_timeout should *roughly* correspond to the maximum batch 
> size that an application would process in one go (orders of magnitude, 
> see above). This determines both the target application's worst-case 
> latency and the worst-case latency of concurrent applications, if 
> any, as mentioned previously.

Oh, is the impact on concurrent applications the argument against a
very high timeout?
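
For concreteness, my understanding of the resulting configuration,
assuming the new knob sits next to gro_flush_timeout and shares its
nanosecond units; the device name and factor are illustrative, and
this needs privileges to write sysfs:

#include <stdio.h>

/* Write one nanosecond-valued sysfs knob. */
static void set_knob(const char *path, unsigned long ns)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return;
	fprintf(f, "%lu\n", ns);
	fclose(f);
}

int main(void)
{
	unsigned long gro_ns = 20 * 1000UL;	/* 20 us: a few request times */
	unsigned long batch = 1000;		/* max expected batch size    */

	/* Safety-net timeout = per-request window scaled by batch size. */
	set_knob("/sys/class/net/eth0/gro_flush_timeout", gro_ns);
	set_knob("/sys/class/net/eth0/irq_suspend_timeout", gro_ns * batch);
	return 0;
}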

> I believe the optimal factor will vary 
> between different scenarios.
> 
> >>>>> If the only goal is to safely reenable interrupts when the application
> >>>>> stops calling epoll_wait, does this have to be user tunable?
> >>>>>
> >>>>> Can it be either a single good enough constant, or derived from
> >>>>> another tunable, like busypoll_read.
> >>>>
> >>>> I believe you meant busy_read here, is that right?
> >>>>
> >>>> At any rate:
> >>>>
> >>>>     - I don't think a single constant is appropriate, just as it
> >>>>       wasn't appropriate for the existing mechanism
> >>>>       (napi_defer_hard_irqs/gro_flush_timeout), and
> >>>>
> >>>>     - Deriving the value from a pre-existing parameter to preserve the
> >>>>       ABI, like busy_read, makes using this more confusing for users
> >>>>       and complicates the API significantly.
> >>>>
> >>>> I agree we should get the API right from the start; that's why we've
> >>>> submitted this as an RFC ;)
> >>>>
> >>>> We are happy to take suggestions from the community, but, IMHO,
> >>>> re-using an existing parameter for a different purpose only in
> >>>> certain circumstances (if I understand your suggestions) is a much
> >>>> worse choice than adding a new tunable that clearly states its
> >>>> intended singular purpose.
> >>>
> >>> Ack. I was wondering whether an epoll flag through your new epoll
> >>> ioctl interface to toggle the IRQ suspension (and timer start)
> >>> would be preferable, because it is more fine-grained.
> >>
> >> A value provided by an application through the epoll ioctl would not be
> >> subject to admin oversight, so a misbehaving application could set an
> >> arbitrary timeout value. A sysfs value needs to be set by an admin. The
> >> ideal timeout value depends both on the particular target application as
> >> well as concurrent applications using the same queue(s) - as sketched above.
> > 
> > I meant setting the value systemwide (or per-device), but opting in to
> > the feature as a binary epoll option. Really an epoll_wait flag, if we
> > had flags.
> > 
> > Any admin-privileged operation can also be protected at the epoll
> > level by requiring CAP_NET_ADMIN, of course. But fair point that
> > this might operate in a multi-process environment, so values should
> > not be hardcoded into the binaries.
> > 
> > Just asking questions to explore the option space, so as not to settle
> > on an API too soon, given that, as said, we cannot remove it later.
> 
> I agree, but I believe we are converging? Also taking into account Joe's 
> earlier response, given that the suspend mechanism dovetails so nicely 
> with gro_flush_timeout and napi_defer_hard_irqs, it just seems natural 
> to put irq_suspend_timeout at the same level and I haven't seen any 
> strong reason to put it elsewhere.

Yes, this sounds good.
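
For completeness: as I understand it, the per-context opt-in half
already exists, since suspension builds on prefer_busy_poll as set
through the EPIOCSPARAMS epoll ioctl. A sketch with illustrative
values (definitions come from <sys/epoll.h> with a recent glibc,
otherwise from <linux/eventpoll.h>):

#include <string.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>

int enable_prefer_busy_poll(int epfd)
{
	struct epoll_params p;

	memset(&p, 0, sizeof(p));
	p.busy_poll_usecs = 64;		/* illustrative busy-poll window */
	p.busy_poll_budget = 64;	/* illustrative packet budget    */
	p.prefer_busy_poll = 1;		/* prerequisite for suspension   */

	return ioctl(epfd, EPIOCSPARAMS, &p);
}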
 
> >>> Also, the value is likely dependent more on the expected duration
> >>> of userspace processing? If so, it would be the same for all
> >>> devices, so does a per-netdev value make sense?
> >>
> >> It is per-netdev in the current proposal to be at the same granularity
> >> as gro_flush_timeout and napi_defer_hard_irqs, because irq suspension
> >> operates at the same level/granularity. This allows for more control
> >> than a global setting and it can be migrated to per-napi settings along
> >> with gro_flush_timeout and napi_defer_hard_irqs when the time comes.
> > 
> > Ack, makes sense. Many of these design choices and their rationale are
> > good to explicitly capture in the commit message.
> 
> Agreed.
> 
> Thanks,
> Martin
