Re: [PATCH 5/9] HWPoison: add memory_failure_queue()

* Huang Ying <ying.huang@xxxxxxxxx> wrote:

> > That's where 'active filters' come into the picture - see my other mail 
> > (that was in the context of unidentified NMI errors/events) where i 
> > outlined how they would work in this case and elsewhere. Via active filters 
> > we could share most of the code, gain access to the events and still have 
> > kernel driven policy action.
> 
> Is that something like the following?
> 
> - An NMI handler runs for the hardware error; the error information is
>   collected and put into the perf ring buffer as an 'event'.

Correct.

Note that for MCE errors we want the 'persistent event' framework Boris has 
posted: these events should be buffered up to a point even if there is no 
tool listening in on them:

 - this gives us boot-time MCE error coverage

 - this protects us against a logging daemon being restarted and events
   getting lost
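
Something like the following is the rough shape I have in mind. Note that 
perf_add_persistent_event() and its arguments are made-up names standing in 
for whatever interface Boris's posted patches actually define - the point is 
only that the event is created kernel-side at boot, with its own buffer, so 
records accumulate before any tool attaches:

	#include <linux/init.h>
	#include <linux/perf_event.h>

	/* Hypothetical sketch: perf_add_persistent_event() stands in
	 * for whatever the posted framework actually exports. */
	static struct perf_event_attr mce_event_attr = {
		.type		= PERF_TYPE_TRACEPOINT,
		.size		= sizeof(struct perf_event_attr),
		/* .config would be the mce_record tracepoint id */
		.sample_period	= 1,
	};

	static int __init mce_persistent_setup(void)
	{
		/* Buffer MCE records from boot onward, with no
		 * consumer needed yet. */
		return perf_add_persistent_event(&mce_event_attr,
						 4 /* buffer pages */);
	}
	arch_initcall(mce_persistent_setup);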

> - Some 'active filters' are run for each 'event' in NMI context.

Yeah. Whether it's a human-readable ASCII 'filter' expression or really just a 
callback you register with that event is secondary - both would work.
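
As a concrete example of the ASCII side: for tracepoint events perf already 
accepts a filter string via the PERF_EVENT_IOC_SET_FILTER ioctl. A sketch 
(the event fd setup is elided, and the field name is hypothetical - it 
depends on the tracepoint's format):

	#include <sys/ioctl.h>
	#include <linux/perf_event.h>

	/* Attach an ASCII filter expression to an already-opened
	 * tracepoint event fd; the kernel then delivers only
	 * matching samples to this consumer. */
	static int set_error_filter(int event_fd)
	{
		return ioctl(event_fd, PERF_EVENT_IOC_SET_FILTER,
			     "severity >= 2");
	}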

> - Some operations cannot be done in the NMI handler, so they are deferred
>   to an IRQ handler (this can be done with something like irq_work).

Yes.

> - Some other 'active filters' are run for each 'event' in IRQ context.
>   (For memory errors, we can call memory_failure_queue() here.)

Correct.
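
A minimal sketch of that NMI -> IRQ split, using the existing irq_work 
facility (the single pending slot is a simplification - a real implementation 
would queue error records - and memory_failure_queue()'s three-argument 
signature is the one this patch series introduces):

	#include <linux/init.h>
	#include <linux/irq_work.h>
	#include <linux/mm.h>

	/* Simplified to one pending slot. */
	static unsigned long hwpoison_pending_pfn;

	/* Runs in IRQ context, where memory_failure_queue() may be
	 * called. */
	static void hwpoison_irq_work_cb(struct irq_work *work)
	{
		memory_failure_queue(hwpoison_pending_pfn, 0, 0);
	}

	static struct irq_work hwpoison_irq_work;

	/* Called from the NMI handler: record the error, kick the
	 * irq_work, return - everything heavier is deferred. */
	static void hwpoison_report_from_nmi(unsigned long pfn)
	{
		hwpoison_pending_pfn = pfn;
		irq_work_queue(&hwpoison_irq_work);
	}

	static int __init hwpoison_nmi_setup(void)
	{
		init_irq_work(&hwpoison_irq_work, hwpoison_irq_work_cb);
		return 0;
	}
	subsys_initcall(hwpoison_nmi_setup);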

> Some 'active filters' are kernel built-in, while others can be customized
> via the kernel command line or by user space.

Yes.

> If my understanding as above is correct, I think this is a general but
> complex solution.  It is a little hard for users to understand which
> 'active filters' are in effect.  They may need some runtime assistance
> (maybe /sys/events/active_filters, listing all filters currently in
> effect), because that is hard to determine from the source code alone.
> Anyway, this is a design style choice.

I don't think it's complex: the built-in rules are in plain sight (they can 
be in the source code or can even be explicitly registered callbacks), and 
the rules installed via configuration or tooling will be as complex as the 
admin or tool wants them to be.

> There are still some issues that I don't know how to solve in the above
> framework.
> 
> - If two processes request the same type of hardware error events, one
>   hardware error event will be copied to two ring buffers (one for each
>   process), but the 'active filters' should run only once per hardware
>   error event.

With persistent events 'active filters' should only be attached to the central 
persistent event.

> - How do we deal with ring-buffer overflow?  For example, the ring buffer
>   is full of corrected memory errors when a recoverable memory error
>   occurs; the recoverable error cannot be put into the perf ring buffer
>   because of the overflow.  How do we handle it?

The solution is to make the buffer large enough. With *every* queueing 
solution there will be some sort of queue size limit.
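
For reference, the buffer size is whatever the consumer mmap()s: one 
metadata page plus 2^n data pages, per the perf mmap ABI. A sketch of 
sizing it generously (the perf_event_open() call is elided):

	#include <sys/mman.h>
	#include <unistd.h>

	/* Map the ring buffer for an already-opened perf event fd:
	 * 1 metadata page + 2^data_pages_shift data pages.  A bigger
	 * shift makes overflow under an error storm less likely. */
	static void *map_ring(int event_fd, int data_pages_shift)
	{
		size_t page = sysconf(_SC_PAGESIZE);
		size_t len  = ((size_t)1 << data_pages_shift) * page + page;

		return mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_SHARED, event_fd, 0);
	}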

Thanks,

	Ingo

