On Sunday, November 9, 2008 8:54 pm Kenji Kaneshige wrote:
> Currently acpi_run_osc() checks all the bits in the _OSC result code (the
> first DWORD in the capabilities buffer) to detect an error condition. But
> bit 0, which doesn't indicate any error, must be ignored.
>
> Bit 0 is used as the query flag at _OSC invocation time. Some
> platforms clear it during _OSC evaluation, but others don't. On the
> latter platforms, the current acpi_run_osc() mis-detects an error when
> _OSC is evaluated with the query flag set, because it doesn't ignore
> bit 0. Because of this, __acpi_query_osc() always fails on such
> platforms.
>
> This is the cause of pci_osc_control_set() not working since commit
> 4e39432f4df544d3dfe4fc90a22d87de64d15815, which changed
> pci_osc_control_set() to use __acpi_query_osc().
>
> Signed-off-by: Kenji Kaneshige <kaneshige.kenji@xxxxxxxxxxxxxx>

Applied to my for-linus branch, thanks Kenji-san.

Jesse