On Mon, Jun 4, 2012 at 1:01 PM, Martin Peres <martin.peres@xxxxxxx> wrote:
> On 04/06/2012 17:18, Jerome Glisse wrote:
>>
>> My experience is that things that are true today for GPUs are not
>> true tomorrow. Yes, there will still be clocks/voltages, but there
>> could be completely new things, like shutting down blocks.
>
> IMO, this isn't something the user should ever be aware of.
> NVIDIA GPUs have been clock gated for years and we can
> also cut the power to some parts of the card.
>
>> I am not even mentioning things like complex value-range dependencies
>> between things (for instance, if a domain has its clock/voltage in a
>> certain range, then some other domain can only have clocks in a
>> restricted range of values).
>
> Yeah, the power budget could get in our way. However, if no perflvl
> is defined in the vbios and we can't calculate a given preset, then
> we are in the completely dynamic scenario I talked about earlier. It still
> fits the proposed interface because the user is only setting a performance
> profile, not a performance level. That's the difference.
>
> Moreover, we can't ask the user for anything too complex anyway...
>
>>
>> While I agree that sysfs looks easy for users to play with, I believe
>> that a GUI is what you are really after, and AFAIK the closed-source
>> drivers all expose a GUI for their power management. Using an ioctl
>> allows better overall control, like atomic setting of several domains...
>
> Want to do power management from userspace? :o
>
> If you need that much control, you're doing something wrong. The kernel
> should be in charge of power management. The interface I'm talking
> about is just a way to report clocks to the user *and* to get some
> input from the user about what he really wants to achieve.
>
> In the case where a user would want to set clocks + voltage himself,
> the sysfs interface I proposed works perfectly and atomically:
>
> - The user sets both the voltage and clock domains of the custom
>   performance level.
> - Then the user is free to switch to the custom performance profile.
> - Here is your atomicity ;)
>
> While I agree that future GPUs will get more and more complex,
> I still think we need something that is broad enough to accommodate
> future architectures and precise enough to give users good power
> management.
>
> I would really like to get an interface like that in the foreseeable
> future. There is no rush, but we still need to find a way, and I would
> like the DRM community to think about this issue.
>
> Martin

My point is that there is no way for power management to find an API that
fits all GPUs. If I were to do it now, I would have one ioctl version for
r3xx, one for r5xx, one for r6xx/r7xx, one for r8xx, one for r9xx, ...
Yes, there would be some common fields across them.

That being said, I think one file might fit all GPUs: the power profile
one, accepting something like: performance, normal, energy. I am pretty
sure all GPUs have, and will continue to have and use, power profiles.

But when it comes to reporting information and building custom profiles, I
would use an ioctl, because on that side I see way too many differences
across GPUs from the same company but from different generations, so I
wouldn't even want to try to bolt something together across GPUs from
different companies.

Also think of IGPs, where a memory clock doesn't make sense and where even
voltage might not make sense, as the GPU might be so entangled with the
CPU that it would be tied to the CPU power states.

Also, when I was referring to shutting things down, I think for instance
that some custom profile/powersaving mode might want to disable a shader
engine (way more radical than clock gating).

Also think of the case of a single card with multiple GPUs: people might
want both GPUs working with the same profile, as when in performance mode,
or to power down one of the GPUs.
So as I said in a previous mail, my preferred solution is an ioctl, plus
letting driver developers write some kind of plugin for
gnome-control-center (similar, from a design point of view, to the compiz
effect plugins), where a driver developer can put a GUI that best reflects
what is available for each specific case.

Cheers,
Jerome
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel