On Mon, Feb 24, 2020 at 11:32:58PM +0000, Saeed Mahameed wrote:
> On Sun, 2020-02-23 at 10:53 +0200, Yishai Hadas wrote:
> > On 2/21/2020 9:04 PM, Saeed Mahameed wrote:
> > > On Wed, 2020-02-19 at 21:05 +0200, Leon Romanovsky wrote:
> > > > From: Yishai Hadas <yishaih@xxxxxxxxxxxx>
> > > >
> > > > Expose raw packet pacing APIs to be used by DEVX based
> > > > applications. The existing code was refactored to have a single
> > > > flow with the new raw APIs.
> > > >
> > > > The new raw APIs consider the input of 'pp_rate_limit_context',
> > > > uid and 'dedicated' upon looking for an existing entry.
> > > >
> > > > This raw mode enables future device specification data in the
> > > > raw context without changing the existing logic and code.
> > > >
> > > > The ability to ask for a dedicated entry gives the application
> > > > control to allocate entries according to its needs.
> > > >
> > > > A dedicated entry may not be used by some other process, and it
> > > > also enables a process to spread its resources across different
> > > > entries, so that different hardware resources are used as part
> > > > of enforcing the rate.
> > > >
> > >
> > > It sounds like "dedicated" means "no sharing", which means you
> > > don't need to use the mlx5_core API and you can go directly to
> > > FW.. The problem is that the entry indices are managed by the
> > > driver, and I guess this is the reason why you had to expand the
> > > mlx5_core API..
> > >
> >
> > The main reason for introducing the new mlx5_core APIs was the need
> > to support the "shared mode" in a "raw data" format, to avoid
> > touching the kernel in the future once the PRM supports extra
> > fields. As the RL indices are managed by the driver (mlx5_core),
> > including the sharing, we couldn't go directly to FW; the legacy
> > API was refactored inside the core to have one flow with the new
> > raw APIs.
> > So we may need those APIs regardless of the dedicated mode.
>
> I'm not a fan of legacy APIs; all of these APIs are mlx5 internals
> and I would like to keep one API which is only PRM dependent as much
> as possible.
>
> Anyway, thanks for the clarification. I think the patch is good as
> is; we can improve and remove the legacy API in the future and keep
> the raw API.
>
> > > I would like to suggest some alternatives to simplify the
> > > approach and allow using raw PRM for DEVX properly.
> > >
> > > 1. Preserve RL entries for DEVX and let DEVX access FW directly
> > > with PRM commands.
> > > 2. Keep the mlx5_core API simple, and instead of adding this
> > > raw/non-raw API and complicating the RL API with this dedicated
> > > bit, just add mlx5_rl_{alloc/free}_index(). This will dedicate
> > > the RL index for you from the end of the RL indices database,
> > > and you are free to access the FW with this index the way you
> > > like via direct PRM commands.
> > >
> >
> > As mentioned above, we may still need the new mlx5_core raw APIs
> > for the shared mode, which is the main usage of the API; we found
> > it reasonable to have the dedicated flag in the new raw alloc API
> > instead of exposing two more new APIs only for that.
> >
> > Please note that even if we go with those 2 extra APIs for the
> > dedicated mode, we may still need to maintain this information in
> > the core to prevent returning such an entry for other cases.
> >
> > Also, the idea to preserve some entries at the end might be
> > wasteful, as there is no guarantee that DEVX will really be used,
> > and even so it may not ask for entries in dedicated mode.
> >
> > Preserving them for this optional use case might prevent using
> > them for all other cases.
> >
> > > > The counter per entry mas changed to be u64 to prevent any
> > > > option to
> > >
> > > typo ^^^ was
> >
> > Sure, thanks.
> >
>
> Leon, other than the typo I am good with this patch. You can fix up
> the patch prior to pulling into mlx5-next; no need for a v2.

Thanks Saeed, I'll apply it once Doug/Jason ack the RDMA part of the
series.

> Acked-by: Saeed Mahameed <saeedm@xxxxxxxxxxxx>
>
> Thanks,
> Saeed.
>