Hello all,

I have recently been developing an application on Linux, specifically an image viewer. Since all it has to do is blit a 2D image to the screen, I programmed it entirely in Xlib using the DBE and MIT-SHM extensions, and it works very well.
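For reference, here is a minimal sketch of the viewer's drawing path, condensed into one function; dpy, win, gc, width and height are placeholders for the real display, window, GC and image size, and I am assuming the default depth/visual:

/* Minimal sketch of the drawing path, condensed into one function.
 * In the real viewer the back buffer and shared-memory image are of
 * course created once, not per frame. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xdbe.h>
#include <X11/extensions/XShm.h>

static void blit_frame(Display *dpy, Window win, GC gc, int width, int height)
{
    int major, minor;

    /* Both extensions have to be present on the display. */
    if (!XdbeQueryExtension(dpy, &major, &minor) || !XShmQueryExtension(dpy)) {
        fprintf(stderr, "DBE or MIT-SHM not available\n");
        return;
    }

    /* All drawing goes to the window's DBE back buffer. */
    XdbeBackBuffer back = XdbeAllocateBackBufferName(dpy, win, XdbeBackground);

    /* Shared-memory XImage that the renderers' pixels are assembled into. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                                  DefaultDepth(dpy, DefaultScreen(dpy)),
                                  ZPixmap, NULL, &shminfo, width, height);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);

    /* ... copy the assembled frame into img->data ... */

    /* Blit into the back buffer, then swap it to the front in one step. */
    XShmPutImage(dpy, back, gc, img, 0, 0, 0, 0, width, height, False);
    XdbeSwapInfo swap = { win, XdbeBackground };
    XdbeSwapBuffers(dpy, &swap, 1);
    XFlush(dpy);
}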
The reason for this application is a rendering cluster: several machines (including the display machine itself) each volume render a piece of the total image and send it on to the machine doing the display. The rendering code uses OpenGL and GLX. If I run the display on a different machine, one with no rendering code running, the image viewer works very well.
The problem arises when I have an instance of the rendering code up on the same machine: the image viewer is no longer double buffered, and each frame appears to be copied directly into the frame buffer. I have tried many ideas to correct this; my theory was that the rendering code was using too much card memory and pushing the image viewer's buffers out of video memory. I did not write the rendering code, and its developer insists that he has been reducing its memory footprint, but the problem persists.
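One check I am considering, to see whether the DBE back-buffer allocation itself fails (e.g. with BadAlloc) while the renderer is up, is roughly the following; the helper names here are just placeholders:

/* Rough sketch: trap protocol errors around the back-buffer
 * allocation so a failure shows up explicitly instead of the
 * viewer silently falling back to unbuffered drawing. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xdbe.h>

static int dbe_error = 0;

static int note_error(Display *dpy, XErrorEvent *ev)
{
    char text[128];
    XGetErrorText(dpy, ev->error_code, text, sizeof text);
    fprintf(stderr, "X error during DBE allocation: %s\n", text);
    dbe_error = 1;
    return 0;
}

static XdbeBackBuffer try_back_buffer(Display *dpy, Window win)
{
    XErrorHandler old = XSetErrorHandler(note_error);

    dbe_error = 0;
    XdbeBackBuffer back = XdbeAllocateBackBufferName(dpy, win, XdbeBackground);
    XSync(dpy, False);            /* force any pending error to arrive now */

    XSetErrorHandler(old);
    return dbe_error ? (XdbeBackBuffer)None : back;
}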
I would very much like to hear any suggestions or insight into this problem. As I said, I am only a budding X developer; I first got my feet wet just a month and a half ago. Thanks in advance.
Jim Greensky
University of Minnesota
Laboratory of Computational Science and Engineering