On 02.03.2005, at 20:22, GSR - FR wrote:
> IOW, supersampling is nice for the small set of cases in which it
> really matters, otherwise it is going to be slower always. Of course,
> it is going to be faster in many cases than full sampling and scaling
> down. If anybody figures a better method than user selectable adaptive
> (best case as fast as no oversampling, worst case as slow as adaptive),
> I guess POVRay Team will like to hear too. :]
It might well be that the "adaption" is the root of the speed problem. As it is, the code is a mumbo-jumbo of hardcoded computation that works differently (or at least seems to) than other region based code. It does not operate on tiles but on rows, does its own memory allocation and is thus hardly parallelizable and very likely much slower than it needs to be.
And hey, 3 times "adaptive" supersampling while blending a layer takes *much* longer than a manual 10x oversampling, i.e. blending a larger image and scaling it down to the original size with Lanczos; this is a UP machine, BTW.
My assumption here is that if the adaptive supersampling code takes magnitudes longer to render than rendering without supersampling, it could be beneficial to simply use the common code to render <depth>x<depth> times the amount of tiles and do some weighting on this data to fill the final tile. Very easy, reuses existing code, runs multithreaded and is likely quite a bit faster than the current code; a rough sketch follows below.
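
To make that a bit more concrete, here is the kind of thing I mean. None of these names are from the real blend code; render_pixel() and the Pixel type are placeholders for whatever the existing per-pixel renderer provides, and the weighting is a plain box average:

#include <glib.h>

/* Rough sketch only: render the tile at depth x depth resolution using
 * the ordinary per-pixel renderer, then box-average the samples down
 * into the destination pixel.  Each tile is independent, so this
 * parallelizes exactly like the normal tile-based rendering. */

typedef struct { guchar r, g, b, a; } Pixel;   /* hypothetical */

/* dummy stand-in for whatever the non-adaptive blend code computes */
static Pixel
render_pixel (gdouble x, gdouble y)
{
  guchar v = (((gint) x + (gint) y) & 1) ? 255 : 0;
  Pixel p = { v, v, v, 255 };
  return p;
}

static void
render_tile_supersampled (Pixel *dest, gint tile_x, gint tile_y,
                          gint width, gint height, gint depth)
{
  gdouble inv = 1.0 / (depth * depth);
  gint    x, y, sx, sy;

  for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
      {
        gdouble r = 0.0, g = 0.0, b = 0.0, a = 0.0;

        /* take depth x depth samples inside this destination pixel */
        for (sy = 0; sy < depth; sy++)
          for (sx = 0; sx < depth; sx++)
            {
              Pixel p = render_pixel (tile_x + x + (sx + 0.5) / depth,
                                      tile_y + y + (sy + 0.5) / depth);
              r += p.r;  g += p.g;  b += p.b;  a += p.a;
            }

        /* plain box weighting; a smarter kernel could be dropped in here */
        dest[y * width + x].r = (guchar) (r * inv + 0.5);
        dest[y * width + x].g = (guchar) (g * inv + 0.5);
        dest[y * width + x].b = (guchar) (b * inv + 0.5);
        dest[y * width + x].a = (guchar) (a * inv + 0.5);
      }
}
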
I would also look into the possibility of analyzing the inputs (gradient and repeat type) to find degenerate cases and recommend the use of supersampling to the user for those...
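
Just to illustrate the kind of check I am thinking of; the enum, the function and the thresholds are made up for the example, not taken from the current code:

#include <glib.h>

/* Made-up heuristic, purely illustrative: if each gradient segment maps
 * to less than ~2 output pixels along the blend vector, the blend is
 * bound to alias badly and supersampling could be suggested. */

typedef enum
{
  REPEAT_NONE,
  REPEAT_SAWTOOTH,
  REPEAT_TRIANGULAR
} RepeatMode;

static gboolean
blend_should_suggest_supersampling (RepeatMode repeat,
                                    gdouble    blend_length_px,
                                    gint       n_gradient_segments)
{
  /* pixels available per gradient segment along the blend vector */
  gdouble px_per_segment = blend_length_px / MAX (n_gradient_segments, 1);

  if (repeat == REPEAT_NONE)
    return px_per_segment < 2.0;

  /* with sawtooth/triangular repeat a very short blend vector means the
   * whole gradient repeats every few pixels across the image -- worst case */
  return px_per_segment < 2.0 || blend_length_px < 8.0;
}
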
Servus, Daniel