Re: [Feature request] Multiple X servers on one graphics card?

On 08/01/2011 10:22 PM, Alan Cox wrote:
> On Mon, 1 Aug 2011 20:47:42 +0100
> Dave Airlie <airlied@xxxxxxxxx> wrote:
> 
>>>
>>> Hmm, what about the opposite approach?
>>> To me, it sounds simpler and more logical when the kernel always creates
>>> one device node per output (or maybe dynamically per connected output),
>>> without any need for configuration or device assignment.
>>
>> It just doesn't fit in with how the drm device nodes work; it might seem
>> simpler in the kernel, but I think it would just complicate userspace.
> 
> It also doesn't fit some cases of reality (eg the USB displaylink stuff)
> where the output and the GPU are effectively decoupled.
> 
> There are also some interesting security issues with a lot of GPUs where
> you'd be very very hard pushed to stop one task spying on the display of
> another as there isn't much in the way of MMU contexts on the GPU side.
> 

Actually, GeForce 8 and later GPUs have proper (and working) virtual memory,
i.e. a per-context page directory and page tables.
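For context, the "one node per GPU" model Dave refers to can be sketched with a
small, purely illustrative Python helper. The /dev/dri directory and the card*
naming are the standard Linux DRM conventions; the function itself is
hypothetical and not part of any real API:

```python
import glob
import os

def drm_card_nodes(dev_dir="/dev/dri"):
    """Illustrative sketch: list the per-GPU DRM device nodes (card0, card1, ...).

    In the current DRM model each node represents a whole GPU; individual
    outputs (connectors) are enumerated *through* that node via KMS ioctls,
    not exposed as separate device files -- which is the point being made
    against the one-node-per-output proposal above.
    """
    return sorted(glob.glob(os.path.join(dev_dir, "card*")))
```

On a machine with one GPU this typically returns `['/dev/dri/card0']`, however
many outputs are connected; on a directory with no DRM nodes it returns an
empty list.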

> Alan
> _______________________________________________
> dri-devel mailing list
> dri-devel@xxxxxxxxxxxxxxxxxxxxx
> http://lists.freedesktop.org/mailman/listinfo/dri-devel


