Re: [PATCH 5/5] virtgpu: mark as a render gpu

On 10 September 2015 at 15:52, Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>   Hi,
>
>> > Dave?  Looking at the ioctls they are all fine for render nodes, there
>> > isn't anything modesetting related in the device-specific ioctls.
>> >
>> > Correct?
>> >
>> Unless I've overdone the coffee this time - modesetting is done via
>> the card# node, while rendering works via either card# or renderD#.
>
> Exactly, that's why anything modesetting-related must be disabled for
> renderD#.  Looking at the virtio-gpu device-specific ioctls I don't
> think there is anything doing modesetting (which we would have to leave
> out), so we can apply DRM_RENDER_ALLOW everywhere I think.  Or maybe
> there is a global switch to flip DRM_RENDER_ALLOW for the whole list ...
>
IMHO the idea of having a 'global' switch sounds quite good, yet there
isn't one atm :-( It would be quite useful as we get more render-only
devices.
DRIVER_RENDER unfortunately doesn't do that (which I think was the
original assumption); it only instructs the DRM core to create the
renderD# device/node.
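
For reference, a rough sketch of what the per-ioctl approach looks like
(not the actual patch; the ioctl/handler names are assumed from the
virtio-gpu driver):

    static struct drm_ioctl_desc virtio_gpu_ioctls[] = {
            /* Each device-specific ioctl carries DRM_RENDER_ALLOW
             * individually, since there is no single switch that
             * whitelists the whole table for renderD# nodes. */
            DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
                              DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
            DRM_IOCTL_DEF_DRV(VIRTGPU_EXECBUFFER, virtio_gpu_execbuffer_ioctl,
                              DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
            /* ... same flag on the remaining VIRTGPU ioctls ... */
    };

    /* DRIVER_RENDER only tells the DRM core to create the renderD#
     * node; it does not whitelist any ioctls by itself. */
    driver->driver_features |= DRIVER_RENDER;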

Hope this clears up any ambiguity from my earlier replies :-)
Emil


