Re: [PATCH] pci-acpi: ignore bit0 of _OSC return code (was Re: OSC enablement issue)

On Sunday, November 9, 2008 8:54 pm Kenji Kaneshige wrote:
> Currently acpi_run_osc() checks all the bits in the _OSC result code
> (the first DWORD in the capabilities buffer) for error conditions. But
> bit 0, which doesn't indicate any error, must be ignored.
>
> Bit 0 is used as the query flag at _OSC invocation time. Some
> platforms clear it during _OSC evaluation, but others don't. On the
> latter platforms, the current acpi_run_osc() mis-detects an error when
> _OSC is evaluated with the query flag set, because it doesn't ignore
> bit 0. As a result, __acpi_query_osc() always fails on such platforms.
>
> This is why pci_osc_control_set() has not worked since commit
> 4e39432f4df544d3dfe4fc90a22d87de64d15815, which changed
> pci_osc_control_set() to use __acpi_query_osc().
>
> Signed-off-by: Kenji Kaneshige <kaneshige.kenji@xxxxxxxxxxxxxx>
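
For readers following along, here is a minimal, standalone sketch of the
check described above: mask off bit 0 (the query flag) before treating
the returned status DWORD as an error indicator. The constant and helper
names below are illustrative, not the ones used in drivers/pci/pci-acpi.c;
the bit layout follows the _OSC status definition referenced above.

#include <stdint.h>
#include <stdio.h>

#define OSC_QUERY_FLAG              0x01u  /* bit 0: query flag, not an error */
#define OSC_FAILURE                 0x02u  /* bit 1: _OSC failed              */
#define OSC_UNRECOGNIZED_UUID       0x04u  /* bit 2                           */
#define OSC_UNRECOGNIZED_REVISION   0x08u  /* bit 3                           */
#define OSC_CAPABILITIES_MASKED     0x10u  /* bit 4                           */

/* Return nonzero if the status DWORD reports a real error. */
static int osc_status_is_error(uint32_t status)
{
	/* Ignore bit 0: some firmware leaves the query flag set on return. */
	return (status & ~OSC_QUERY_FLAG) != 0;
}

int main(void)
{
	/* Firmware that echoes the query flag back: still a success. */
	printf("query flag echoed: %s\n",
	       osc_status_is_error(OSC_QUERY_FLAG) ? "error" : "ok");

	/* Capabilities masked by firmware: a genuine error condition. */
	printf("capabilities masked: %s\n",
	       osc_status_is_error(OSC_CAPABILITIES_MASKED) ? "error" : "ok");

	return 0;
}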

Applied to my for-linus branch, thanks Kenji-san.

Jesse