Re: Mesa shader compiling/optimizing process is too slow

Presumably there needs to be an API-level mechanism to wait for the
background optimization to finish, so that piglit etc. can validate the
behavior of the optimized shader?
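
For illustration, a test could spin on a completion-status query before
checking results. This is only a sketch under the assumption that such a
query exists; the enum borrowed here, GL_COMPLETION_STATUS_KHR, comes from
the GL_KHR_parallel_shader_compile extension (standardized well after this
thread) and stands in for whatever query Mesa would actually expose:

    /* Sketch only: GL_COMPLETION_STATUS_KHR is borrowed from
     * GL_KHR_parallel_shader_compile as a stand-in query. */
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <unistd.h>

    #ifndef GL_COMPLETION_STATUS_KHR
    #define GL_COMPLETION_STATUS_KHR 0x91B1
    #endif

    static void wait_for_optimized_program(GLuint prog)
    {
        GLint done = GL_FALSE;
        while (!done) {
            glGetProgramiv(prog, GL_COMPLETION_STATUS_KHR, &done);
            if (!done)
                usleep(1000);  /* yield so the optimizer thread can run */
        }
    }

A blocking variant could also live behind a driconf option or an
environment variable, so piglit runs would always exercise the optimized
code path.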

-- Chris

On Tue, Jul 10, 2012 at 5:17 AM, Eric Anholt <eric@xxxxxxxxxx> wrote:
> Tiziano Bacocco <tiziano@xxxxxxxxxxxxxxxxx> writes:
>
>> I've done benchmarks comparing the proprietary drivers and Mesa, and
>> Mesa seems to be up to 200x slower compiling the same shader. Since I
>> understand that optimizing that part of the code may take months or
>> even longer, I've thought of solving it this way:
>>
>> Upon calling glLinkProgram, an unoptimized version of the shader (which
>> compiles much, much faster) is uploaded to the GPU. Then a separate
>> thread is launched to optimize the shader; as soon as it is done, the
>> next call to glUseProgram uploads the optimized version in place of the
>> unoptimized one.
>>
>> This would solve many performance issues and temporary freezes in games
>> that load/unload content while running, without reducing performance
>> once the background optimization is done.
>
> Yeah, we've thought of this, and it would take some work.  Sounds like a
> fun project for someone.
>
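
For reference, a minimal sketch of the link-then-swap scheme Tiziano
describes above, assuming hypothetical driver-side hooks (do_fast_codegen,
do_full_optimize, and upload_to_gpu are placeholders, not actual Mesa
entry points):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    struct gl_program_sketch {
        void *fast_binary;               /* unoptimized code, built at link */
        void *_Atomic optimized_binary;  /* published by the worker thread */
    };

    /* Placeholder hooks; the real Mesa entry points would differ. */
    extern void *do_fast_codegen(struct gl_program_sketch *p);
    extern void *do_full_optimize(struct gl_program_sketch *p);
    extern void  upload_to_gpu(void *binary);

    static void *optimize_worker(void *arg)
    {
        struct gl_program_sketch *prog = arg;
        void *opt = do_full_optimize(prog);          /* the slow path */
        atomic_store(&prog->optimized_binary, opt);  /* publish atomically */
        return NULL;
    }

    /* glLinkProgram path: upload fast code, then optimize in background. */
    void link_program(struct gl_program_sketch *prog)
    {
        atomic_store(&prog->optimized_binary, NULL);
        prog->fast_binary = do_fast_codegen(prog);
        upload_to_gpu(prog->fast_binary);

        pthread_t tid;
        pthread_create(&tid, NULL, optimize_worker, prog);
        pthread_detach(tid);
    }

    /* glUseProgram path: swap in the optimized code once it exists. */
    void use_program(struct gl_program_sketch *prog)
    {
        void *opt = atomic_load(&prog->optimized_binary);
        upload_to_gpu(opt ? opt : prog->fast_binary);
    }

A real implementation would also want to swap only once, free the fast
binary afterwards, and handle programs being relinked or deleted while the
worker is still running; all of that bookkeeping is omitted here.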
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel

