Hi all,

At $JOB we have a web app that generates XML for another web app to consume. Each complete XML document is a list of individual items, and each item is stored on disk in gzip format to save space - the format is overly verbose, compression is highly effective, and gzip is nicely transparent to lots of utilities (vim mainly).

Currently, a Django app assembles the document (it also generates the items if they are missing, but let's ignore that for now). It first reads each file off disk, decompresses it, assembles one large string (sometimes 100MB+ of XML), compresses it again (sigh), and then hands it off to Apache.

As a naive attempt, I modified the Django app to simply load each file from disk as-is, prepend and append a compressed header and footer, and hand that off to Apache with the appropriate content type. This "worked" in some respects - downloading the response to disk using fetch, then gzcat + md5, confirmed that the uncompressed response was bit-for-bit identical - but all "real" web clients I gave it to (Firefox, Chrome, libcurl) would only see the first chunk (the header), whereas gzcat sees all the chunks.

So, my questions are two-fold:

1) Is there something in the gzip file header that makes this approach a no-go?
2) Is there any approach in stock httpd that could assemble documents like this (if it is even possible), or would I be looking at a custom module?

I appreciate only the second one is really on topic here :)

Cheers,
Tom
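P.S. For concreteness, here's roughly the shape of what the app does today - a minimal sketch, not the real code; the path handling and the HEADER_XML/FOOTER_XML constants are illustrative stand-ins:

    import gzip

    HEADER_XML = b'<?xml version="1.0"?>\n<items>\n'   # illustrative
    FOOTER_XML = b'</items>\n'                          # illustrative

    def build_document(item_paths):
        # Decompress every stored item, glue the whole document together
        # in memory, then recompress the lot before handing it to Apache.
        parts = []
        for path in item_paths:
            with open(path, 'rb') as f:
                parts.append(gzip.decompress(f.read()))
        body = HEADER_XML + b''.join(parts) + FOOTER_XML   # sometimes 100MB+
        return gzip.compress(body)   # the recompression I'd like to avoid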
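And the naive attempt looks something like the sketch below: it streams the stored files untouched, relying on gzip's support for concatenated members (RFC 1952 allows a file to be a series of complete members). Again a sketch, with item_paths() standing in for the real lookup and the constants as above:

    import gzip
    from django.http import StreamingHttpResponse

    def document_view(request, doc_id):
        def members():
            # Each chunk yielded here is a complete gzip member; the
            # concatenation of members is itself a valid gzip file.
            yield gzip.compress(HEADER_XML)
            for path in item_paths(doc_id):   # items already gzipped on disk
                with open(path, 'rb') as f:
                    yield f.read()
            yield gzip.compress(FOOTER_XML)

        resp = StreamingHttpResponse(members(), content_type='application/xml')
        resp['Content-Encoding'] = 'gzip'
        return resp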
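Finally, a small repro of the client behaviour I'm seeing, assuming the difference really is multi-member handling: Python's gzip module, like gzcat, walks every member, while a one-shot single-member decoder stops at the end of the first one:

    import gzip, zlib

    data = (gzip.compress(b'<items>')
            + gzip.compress(b'<item/>')
            + gzip.compress(b'</items>'))

    # gzip.decompress keeps going across member boundaries, like gzcat:
    print(gzip.decompress(data))    # b'<items><item/></items>'

    # A decoder that treats the stream as a single member stops after the
    # first one - roughly what Firefox/Chrome/libcurl appear to be doing:
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
    print(d.decompress(data))       # b'<items>'
    print(len(d.unused_data))       # the remaining members, left undecoded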