Bug ID | 99780 |
---|---|
Summary | Flickering artifacts in radeonsi driver with divergent texture reads. |
Product | Mesa |
Version | unspecified |
Hardware | x86-64 (AMD64) |
OS | Linux (All) |
Status | NEW |
Severity | normal |
Priority | medium |
Component | Drivers/Gallium/radeonsi |
Assignee | dri-devel@lists.freedesktop.org |
Reporter | dark_sylinc@yahoo.com.ar |
QA Contact | dri-devel@lists.freedesktop.org |
Created attachment 129530 [details]
Sample repro

I'm an Ogre3D developer. We noticed after updating to LLVM 4.0 that there were visual glitches:

http://imgur.com/43tIuI6
http://imgur.com/iBhxoBj

(these glitches tend to flicker over time *even if everything is stationary*)

This bug is *NOT* present when using LLVM 3.9.1. It happens with the radeonsi open source driver; llvmpipe works as intended.

I tried to reproduce this bug in a clean environment but failed. So instead I stripped our code down as much as possible into a minimal repro sample, which I am attaching.

We noticed that the flicker happens along the boundary between the PSSM splits, but not always: http://imgur.com/130B4mE

We added a tint to differentiate the splits. As you can see, a few artifacts happen within a single tint, though the vast majority of the artifacts occur along the split boundary.

A PSSM split in shader code means this:

```glsl
uniform sampler2DShadow texShadowMap[2];

float fShadow = 1.0;
vec3 tint = vec3( 1, 1, 1 );
if( inPs.depth <= pass.pssmSplitPoints0 )
{
    fShadow = getShadow( texShadowMap[0], inPs.posL0,
                         vec4( 0.000488281, 0.000488281, 1, 1 ) );
    tint = vec3( 0.0, 0, 1 );
}
else if( inPs.depth <= pass.pssmSplitPoints1 )
{
    fShadow = getShadow( texShadowMap[1], inPs.posL1,
                         vec4( 0.000976562, 0.000976562, 1, 1 ) );
    tint = vec3( 0.0, 1.0, 0.0 );
}
```

Everything from inPs comes from the vertex shader. After stripping down as much as possible, our getShadow definition is this:

```glsl
float getShadow( sampler2DShadow shadowMap, vec4 psPosLN, vec4 invShadowMapSize )
{
    float fDepth = psPosLN.z;
    vec2 uv = psPosLN.xy / psPosLN.w;

    float retVal = 0;
    vec2 fW; // unused, leftovers from stripping down the original shader
    vec4 c;  // unused, leftovers from stripping down the original shader

    retVal += texture( shadowMap, vec3( uv, fDepth ) ).r;
    return retVal;
}
```

I've seen a similar artifact before in the Windows AMD drivers back when I was experimenting with bindless textures and purposely managed to mix textures from divergent branches into the same wavefront. Not sure if that's what's happening here, though.
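(Not part of the original report.) One hypothetical way to test whether the divergent sampler reads are the trigger is to hoist both getShadow calls into uniform control flow, so every lane in the wavefront performs the same texture reads, and then select the result per pixel. A minimal sketch, assuming the same uniforms and varyings as the snippet above:

```glsl
// Hypothetical diagnostic variant: both shadow maps are sampled
// unconditionally (uniform control flow); the per-pixel split choice
// only selects between the already-fetched results, so no texture read
// happens inside a divergent branch.
float shadow0 = getShadow( texShadowMap[0], inPs.posL0,
                           vec4( 0.000488281, 0.000488281, 1, 1 ) );
float shadow1 = getShadow( texShadowMap[1], inPs.posL1,
                           vec4( 0.000976562, 0.000976562, 1, 1 ) );

bool inSplit0 = inPs.depth <= pass.pssmSplitPoints0;
float fShadow = inSplit0 ? shadow0 : shadow1;
vec3 tint = inSplit0 ? vec3( 0.0, 0.0, 1.0 ) : vec3( 0.0, 1.0, 0.0 );
```

If the flicker disappears with this variant, that would support the divergent-read theory; if it persists, the cause is likely elsewhere in the compiler backend. This costs an extra shadow fetch per pixel, so it is a diagnostic, not a fix.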
Just run launchapp.sh, select the OpenGL renderer, and click accept. It will launch the sample.

NOTE: If when running the sample you encounter this problem:

```
Created GL 4.3 context
libGL error: failed to create drawable
libGL error: failed to create drawable
X Error of failed request: 0
  Major opcode of failed request: 156 (GLX)
  Minor opcode of failed request: 26 (X_GLXMakeContextCurrent)
  Serial number of failed request: 28
  Current serial number in output stream: 28
```

then just run it again. I do not know why this happens; it is rare (roughly 1 in 10 runs) and I haven't yet been able to determine the cause. It sounds like a race condition inside Mesa (the bug won't manifest while debugging, where startup is significantly slowed down by loading all the symbols), but I don't have enough evidence to back this up, and it's not the reason for this bug ticket.

If you have more issues running the sample, let me know; I can provide assistance. The sample needs SDL2 to be installed on your system.

I'm using:
- AMD Radeon HD 7770 1GB
- Ubuntu 16.10
- Mesa git 5bc222ebafddd14f2329f5096287b51d798a6431
- LLVM 4.0 SVN 293947
- Kernel 4.8.0-37-generic