Re: Prepared RDMA Tree for 4.7

On 05/15/2016 02:00 AM, Leon Romanovsky wrote:
> On Sat, May 14, 2016 at 09:09:54AM -0400, Doug Ledford wrote:
>> On 05/14/2016 12:33 AM, Leon Romanovsky wrote:
>>> On Fri, May 13, 2016 at 12:31:55PM -0400, Doug Ledford wrote:
>>>
>>> <snip>
>>>
>>> I intended to help you,
>>
>> That's fine.  As I pointed out in my email, I need to know what I'm
>> sending to Linus.
>>
>>> So can you please share with us your plans on how to address the
>>> general lack of "communication, transparency and coordination",
>>> which is definitely happening here?
>>
>> I could always just artificially limit each release to no more than 100
>> patches or something like that.  Then I would have more time per patch
>> to be communicative.  But that wouldn't make people happy, and it
>> would take months or years to get things done.
> 
> Most of the patches are fixes, and a lot of them are one line only.
> They can be acknowledged and applied in a more timely manner.  This
> will definitely remove the tension.
> 
> For example, I have a build warning fix for the hfi1 driver, but I
> prefer not to send it until I see their fixes applied.

I don't know what you are talking about.  All of their approved fixes
*have* been applied, and commented as such on list.

>> You guys have a ton of features you want to
>> get into the kernel, each one being a different sort of hardware
>> offload, and while you may have worked with customers to do the initial
>> creation, you didn't work with the community on describing them or
>> anything else, and you bring them here fully done to your satisfaction,
>> and it takes other people time to wrap their head around what they are,
>> whether or not they might be usable in their own hardware, and then to
>> decide if the design as presented would work for them if they did decide
>> to implement it.  Without their input, I'm left in a rather untenable
>> position.  And frankly, I'm rather sick of being in that position.
> 
> What is wrong with SELinux patches?

They need time for people to think about them.  You call them the
SELinux patches, but that is somewhat misleading.  When I think of
SELinux, I think of things like limited context server applications
("Can httpd read home directories?", "Can dovecot write to home
directories?  Can dovecot listen on port 993? 995?", etc.)  It's very
general purpose and has a lot of policy that goes with it.  From what I
read, these patches are different.  They are mainly used to enforce the
subnet manager's P_Key policy.  There isn't anything else they do.  From
that standpoint, they look like the user space half of the namespace
equation.  But they're devoid of any of the other policy decisions that
SELinux often makes.  I haven't read them closely enough to see if they
could be easily extended to implement any of these other types of policy
or not, but that's certainly an issue.  If you are going to go monkeying
around in the SELinux subsystem (and 7 of the 12 patches do), then
making sure we do things in a manner that is not going to paint us into
a corner seems appropriate.  I haven't had the time to do that level of
looking at these patches.
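To make the distinction concrete: the general-purpose SELinux policy described above is expressed as type-enforcement rules against broad object classes, while the patches under discussion would reduce to something much narrower.  The httpd rule below uses real SELinux classes and permissions; the InfiniBand rule is a hypothetical sketch of how P_Key enforcement might be modeled (the type names, object class, and permission are assumptions, not a confirmed interface from the patch set):

# General-purpose SELinux policy: a type-enforcement rule granting
# httpd read access to files labeled as user home directory content.
allow httpd_t user_home_t:file { read getattr open };

# By contrast, the RDMA patches would amount to a single narrow check,
# gating a process domain's access to a subnet manager-assigned P_Key.
# Class and permission names here are illustrative assumptions:
allow rdma_app_t default_pkey_t:infiniband_pkey access;

The point of the comparison is that the second rule enforces only the subnet manager's partitioning decision; it carries none of the broader policy surface (file, network port, IPC) that SELinux normally covers.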

> There are three issues with this request:
> 1. It is not silence, but readiness to merge, since all feedback has
> been answered and everything is understandable.  The LSO patches are a
> great example of this: if people don't understand, they will ask and
> will request that the patches be adjusted accordingly.

No, you don't merge things before they are understood and then adjust
them.  You merge things you already understand.

> 2. It will limit the ability to move the IB stack forward and will
> eliminate a fair, competitive market.

Competition is not a valid reason to merge poorly understood or poorly
designed code.

-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: 0E572FDD



