Re: [RFCv2 00/15] RFCv2: Consolidated userspace RDMA library repo

On Sun, Aug 28, 2016 at 07:14:31PM +0300, Yishai Hadas wrote:

> Today each vendor manages its own code and is not exposed to bugs/issues in
> other vendors' code. Moving to a consolidated rpm might mean that a bug fix
> in one component forces a new release of everything else without any real
> reason. Introducing such a dependency might even delay/block a release
> without any justification.

We already went over this with Steve, and the consensus was this isn't
a practical problem.

Please read the thread from here:

https://www.spinics.net/lists/linux-rdma/msg37813.html

There is some excellent information there on how the distribution
processes work, and general support from the community for this
integration.

The basic summary is that this consolidation allows the
already-existing per-distro coordination of all the libraries and
providers to happen in the open and be shared by all the downstream
consumers.

Things would work more like the kernel flow: the distros monitor one
place (rdma-plumbing) for bug fixes to backport, and vendors still
work with the distro to get their specific patches backported.

Remember, by the time we get to the provider level there are often
already dependencies in the kernel and libibverbs that need to be met
by the distro, so it has rarely been the case of 'just grab my
libprovider and you are good'.

The few cases like that are also trivially cherry-pickable patches.

This makes things easier for all the downstream users by providing a
single source reference, and makes it easier for developers because we
can update all providers in one pass with one pull request.

The other thing to bear in mind is that most of the code is
dead. Other than a few providers, there simply is no churn. All the
work is keeping everything up to date and still compiling on modern
distros/arches/etc.

I'm expecting that the whole thing will remain continuously releasable
exactly because it is largely all dead code. If a vendor pushes a bug
into their provider then they are going to get burned, not other vendors.

However, we face the ioctl UAPI conversion in our future, and tackling
that across 15 different dead git trees with MIA maintainers is simply
too hard. I view this work as a necessary precondition for the UAPI
fixup.

> I believe that we agree that each maintainer should be independent to
> review/accept code relevant to his component and to make sure that his
> code is fully tested and ready for a release at any given time.

The model is exactly the same as the kernel's, which is proven to work,
and it does expect that all patches are release-ready.

Sean has proven this model works with libfabric and has shown the
benefits of building community infrastructure.

> Putting all the code under one big umbrella, letting only one person
> accept code, is not a good idea and might slow the process of accepting
> new code and features.

This is already basically the case: you need to wait for Doug to take
any libibverbs changes before you can push any provider feature
change.

If this really concerns you, then propose a maintainer team scheme;
hosting this on GitHub, for example, would allow for that.

> In addition, we need to consider that there are a few distros which have
> releases at different times. The flexibility to take a specific component
> based on its readiness and importance is a vital point that must be
> preserved, as is done today.

Again, it doesn't matter. The distros will take from upstream whenever
they are ready, and upstream is 'continuously releasable'. This way the
distros can get everything and not have to hunt down 15 different git
trees randomly sprinkled across the internet.

Jason


