Hi,

I finally got a chance to make progress on redesigning the rdma cgroup controller for most of the use cases we discussed in this email chain. I am posting an RFC, and the code soon after, in a new email.

Parav

On Sun, Sep 20, 2015 at 4:05 PM, Haggai Eran <haggaie@xxxxxxxxxxxx> wrote:
> On 15/09/2015 06:45, Jason Gunthorpe wrote:
>> No, I'm saying the resource pool is *well defined* and *fixed* by each
>> hardware.
>>
>> The only question is how do we expose the N resource limits, the list
>> of which is totally vendor specific.
>
> I don't see why you say the limits are vendor specific. It is true that
> different RDMA devices have different implementations and capabilities,
> but they all expose the same set of RDMA objects with their
> limitations. Whether those limitations come from the hardware, from the
> driver, or just from a limited address space, they can still be
> exhausted.
>
>> Yes, using a % scheme fixes the ratios: 1% is going to be a certain
>> number of PDs, QPs, MRs, CQs, etc., at a ratio fixed by the driver
>> configuration. That is the trade-off for API simplicity.
>>
>> Yes, this results in some resources being over-provisioned.
>
> I agree that such a scheme would be easy to configure, but I don't
> think it can work well in all situations. Imagine you want to let one
> container use almost all the RC QPs because you want it to connect to
> the entire cluster through RC. Other containers can still reach the
> entire cluster with a single datagram QP, but they would then require
> many address handles. If you force a fixed ratio of resources on each
> container, it is hard to describe such a partitioning.
>
> I think it would be better to expose separate controls for the
> different RDMA resources.
>
> Regards,
> Haggai

--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
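To make the per-resource argument from the thread concrete, here is a minimal C sketch of the kind of accounting Haggai describes: one independent limit per RDMA object type, so one cgroup can be granted nearly all QPs while another gets a large address-handle budget. All names here are hypothetical illustrations, not the actual RFC or kernel interface.

```c
/* Hypothetical per-resource RDMA cgroup accounting sketch.
 * One limit and one usage counter per object type, charged
 * independently -- unlike a single percentage knob that fixes
 * the ratio between QPs, AHs, MRs, etc. */
#include <assert.h>

enum rdma_res_type {
	RDMA_RES_QP,	/* queue pairs */
	RDMA_RES_AH,	/* address handles */
	RDMA_RES_MR,	/* memory regions */
	RDMA_RES_TYPES,
};

struct rdma_cg {
	unsigned int max[RDMA_RES_TYPES];	/* per-type limit */
	unsigned int usage[RDMA_RES_TYPES];	/* current charge */
};

/* Charge one object of the given type; -1 means the object's
 * creation should fail because this type's limit is exhausted. */
static int rdma_cg_try_charge(struct rdma_cg *cg, enum rdma_res_type t)
{
	if (cg->usage[t] >= cg->max[t])
		return -1;
	cg->usage[t]++;
	return 0;
}

/* Release one object of the given type on destruction. */
static void rdma_cg_uncharge(struct rdma_cg *cg, enum rdma_res_type t)
{
	if (cg->usage[t] > 0)
		cg->usage[t]--;
}
```

With independent counters, exhausting the QP budget leaves the address-handle budget untouched, which is exactly the partitioning a fixed-ratio scheme cannot express.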