Re: [PATCH AUTOSEL 4.9 09/26] net/mlx5e: Init ethtool steering for representors

On Fri, 2020-04-17 at 09:21 -0400, Sasha Levin wrote:
> On Thu, Apr 16, 2020 at 09:08:06PM +0000, Saeed Mahameed wrote:
> > On Thu, 2020-04-16 at 15:58 -0400, Sasha Levin wrote:
> > > Hrm, why? Pretend that the bot is a human sitting somewhere
> > > sending mails out, how does it change anything?
> > > 
> > 
> > If I know a bot might do something wrong, I fix it and make sure it
> > will never do it again. For humans I just can't do that, can I? :)
> > So this is the difference, and why we all have jobs...
> 
> It's tricky because there's no one true value here. Humans are
> constantly wrong about whether a patch is a fix or not, so how can I
> train my bot to be 100% right?
> 
> > > > > The solution here is to beef up your testing infrastructure
> > > > > rather than
> > > > 
> > > > So please let me opt in until I beef up my testing infra.
> > > 
> > > Already did :)
> > 
> > No you didn't :), I received more than 5 AUTOSEL emails just today
> > and yesterday.
> 
> Apologies, this is just a result of how my process goes: patch
> selection happened a few days ago (which is when blacklists are
> applied), it's been running through my tests since, and mails get sent
> out only after tests.
> 
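
If I understand that ordering right, a toy sketch of the flow
(hypothetical names, obviously not your actual scripts) would be
something like the below, which is also why an opt-out added today
can't stop mails already in flight:

    # Blacklist is consulted once, at selection time (day 0).
    BLACKLIST = set()  # e.g. {"mlx5"} once a maintainer opts out

    def select(candidates):
        # Day 0: the NN picks candidates; blacklist applied here, once.
        return [p for p in candidates if p["subsystem"] not in BLACKLIST]

    def test(selection):
        # Days 1..n: build/boot tests run on the already-frozen selection.
        return [p for p in selection if p.get("builds", True)]

    def mail(tested):
        # Day n+1: review mails for whatever survived testing, even if
        # the subsystem was blacklisted after day 0.
        return [f"[PATCH AUTOSEL] {p['subject']}" for p in tested]

    in_flight = select([{"subsystem": "mlx5",
                         "subject": "net/mlx5e: Init ethtool steering"}])
    BLACKLIST.add("mlx5")         # opt-out lands after selection...
    print(mail(test(in_flight)))  # ...so this mail still goes out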

No worries, as you can see I am not really against this AI... I am just
worried about it being an opt-out thing :)

> > Please don't opt mlx5 out just yet ;-), I need to do some more
> > research and make up my mind...
> 
> Alrighty. Keep in mind you can always reply with just a "no" to
> AUTOSEL mails; you don't have to explain why you don't want a patch
> included. That keeps it easy.
> 

Sure! Thanks.

> > > > > taking fewer patches; we still want to have *all* the fixes,
> > > > > right?
> > > > > 
> > > > 
> > > > If you can be 100% sure it is the right thing to do, then yes,
> > > > please don't hesitate to take that patch, even without asking
> > > > anyone!!
> > > > 
> > > > Again, humans are allowed to make mistakes... AI is not.
> > > 
> > > Again, why?
> > > 
> > 
> > Because AI is not there yet... and this is a very big philosophical
> > question.
> > 
> > Let me simplify: there is a bug in the AI where it can choose a wrong
> > patch; let's fix it.
> 
> But we don't know if it's wrong or not, so how can we teach it to be
> 100% right?
> 
> I keep retraining the NN based on previous results, which improves its
> accuracy, but it'll never be 100%.
> 
> The NN claims we're at ~95% with regard to past results.
> 

I didn't really mean for you to fix it...

I am just against using unaudited AI, because I know it can never
reach 100%.

Just out of curiosity:
What makes up that ~5% failure rate, and what types of failures are they?
How are they identified, and how are they fed back into the NN
retraining?




