Re: Why small > big?

Alex:

Apologies again for top-posting.

No problem with the "sloppy" language; it was more than good enough to get your idea across. Besides, I always hate being called out for using a term incorrectly when I actually knew better.

My claim that the quality-value term can be computed is not as involved as you describe. I simply meant that IF one used the --

imagejpeg($image_p, null, 100);

-- statement for the sole purpose of making a thumbnail from a larger image, then there should be a correlation between size before/after and the "quality-term".

For example, if I take an image and simply shrink it to 1/4 its size, then I should be able to show the image using 1/4 as many pixels without much loss in quality. Now, how to decide which pixels are representative of their local pixel groups when sampling is another matter, and I leave that to the JPEG algorithm.

Consider the example in this thread where I left the quality at 100% and reduced the image to less than 40 percent of the original: the end result was that I actually made a larger file. So, I believe that at least this example shows that 100% is not a good quality-value setting for reducing images -- thus we know the high end is less than 100.
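
(For what it's worth, the comparison that convinced me was nothing fancier than the following rough, untested sketch -- the file names are just placeholders:)

<?php
/* rough, untested sketch -- compare original vs. thumbnail file size;
   the file names are placeholders */
$original  = 'photo.jpg';
$thumbnail = 'thumb.jpg';

printf("original:  %d bytes\n", filesize($original));
printf("thumbnail: %d bytes\n", filesize($thumbnail));
printf("ratio:     %.2f\n", filesize($thumbnail) / filesize($original));
?>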

Clearly, as one reduces images, the quality-value required should also reduce. Now, my gut feeling is that this value in reduction is predictable, perhaps linear, perhaps polynomial, perhaps something that would lend itself well to fuzzy logic -- I don't know specifically what that may be, but I believe it can be programmed. That's what I was addressing. I have NO data to back up my gut feeling.
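
To be concrete, the sort of thing I have in mind is no more than the sketch below -- the linear mapping and its constants are pure guesswork on my part, not measured:

<?php
/* hypothetical heuristic -- NOT measured, just a guess at a linear
   mapping from the resize ratio to an imagejpeg() quality value */
function guess_quality($width_orig, $height_orig, $width_new, $height_new)
   {
   $ratio = ($width_new * $height_new) / ($width_orig * $height_orig);

   // guess: full size keeps ~90, a small thumbnail drops toward ~25
   $quality = 25 + 65 * $ratio;

   return (int) max(25, min(90, $quality));
   }

// e.g. a 2000x2000 image reduced to a 200x200 thumbnail
echo guess_quality(2000, 2000, 200, 200);   // prints 25
?>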

As for your other comments regarding blocks and edge effects (mosquito noise), they have direct counterparts in temporal signal processing -- trying to truncate any signal is going to introduce noise. That's the reason for so many filter taper solutions, such as cosine, Hamming, Hanning, and more than I wish to remember. I've done a considerable amount of that type of programming.
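
(For anyone following along, the Hamming taper I mean is just w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)); a quick, untested sketch:)

<?php
/* untested sketch of a Hamming taper, w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)),
   one of the window functions mentioned above */
function hamming_window($N)
   {
   $w = array();
   for ($n = 0; $n < $N; $n++)
      {
      $w[$n] = 0.54 - 0.46 * cos(2 * M_PI * $n / ($N - 1));
      }
   return $w;
   }

print_r(hamming_window(8));
?>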

Your comments about the visual aspects of the brain are right-on and I find that investigation fascinating. There is so much to learn and so much to gain by understanding how we process data (visual and otherwise).

Your comment about tonally rich areas of the face being more addressed by the brain is true, but the reason for this is twofold: one, it is more complicated; but two, it's a face! Our brains process images differently for different objects. Turn a picture of someone's face upside down and our brains don't process the image the same way. Interesting, huh?

In any event (back on topic, as you say), our brains process images much differently than a compression algorithm could address. On one hand, the algorithm has to reduce the image in areas where the brain is not interested, and on the other hand, it has to retain those visual aspects that the brain is built to recognize. That's not doable without error unless the algorithm mimics the brain.

But, in general terms (barring brain-specific variations), as an image is reduced, one should be able to reduce the value of the quality term in some predictable manner depending upon the ratio of change. True or not, that's my claim.

Thanks for the exchange -- it was interesting.

tedd

---

At 11:07 PM +0100 8/23/06, Alex Turner wrote:
Tedd,

Sorry for the sloppy language. You are quite correct, the name is discrete cosine. I get too relaxed sometimes.

As to the visual impact of a degree of compression, I don't think that you can automate this. The issue surrounds the way the brain processes information. When you see something, your brain processes the visual field and looks for patterns that it recognizes, and then your conscious mind becomes aware of the patterns, not actually the thing you are looking at. Optical illusions can illustrate this point; for example, where you see a bunch of blobs on a white background and then someone tells you it is a dog, and you see the dog. Once you see the dog you can no longer 'not see it'. This is because of the way the brain processes patterns.

The trick to DCT is that in most 'organic' images - people, trees etc - the patterns for which your brain is looking actually occupy low frequencies. However, the majority of the information which is encoded into the image is in high frequencies. Consequently, by selectively removing the high frequencies, the image appears to the conscious mind to be the same whilst in reality it is degraded.

The snag comes when the pattern your brain is trying to match requires high frequencies. The classic is an edge. If one has an infinitely large white background with a single infinitely sharp line on it, you require infinite frequencies to encode it correctly (ten years ago I knew the proof for this; time and good wine has put a stop to that). This is much like the side band problem in radio transmission. If you encode an image in dimensional space rather than in frequency space you don't get this problem (hence PNG permitting perfectly sharp lines).

So - back on topic. If you take an image with sharp lines in it, then pass it through DCT twice (the process is symmetrical) but lose some of the high-frequency data in the process (compression), the result is that the very high frequency components that encode the edge are stripped off. Rather than (as one might like) this making the edge fuzzy, it produces what is called mosquito noise around the edges.

Because mosquito noise is nothing like what you are 'expecting' to see, the brain is very sensitive to it.

Thus, the amount you notice the compression of JPEG depends on the nature of the image you compress.
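
If you want to see this for yourself, the quick GD sketch below (untested, file names made up) draws a single hard edge and saves it both ways - zoom in on the JPEG and the mosquito noise is obvious, while the PNG stays perfectly sharp.

<?php
/* untested sketch: one sharp black line on white, written as a heavily
   compressed JPEG and as a lossless PNG; the file names are made up */
$im    = imagecreatetruecolor(400, 400);
$white = imagecolorallocate($im, 255, 255, 255);
$black = imagecolorallocate($im, 0, 0, 0);

imagefilledrectangle($im, 0, 0, 399, 399, $white);
imageline($im, 200, 0, 200, 399, $black);

imagejpeg($im, 'line.jpg', 25);   // low quality: high frequencies discarded
imagepng($im, 'line.png');        // lossless: the edge survives intact

imagedestroy($im);
?>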

Now it gets nasty. DCT scales as a power of n (where n is the size of the image) - there is a fast DCT process, like the F in FFT, but it is still non-linear. This means that to make the encoding and decoding of JPEG reasonably quick, the image is split into blocks and each block is separately passed through the DCT process. This is fine except that it produces errors from one block to the next in where the values land in HSV space. Thus, as the compression is turned up, the edges of the blocks can become visible due to discontinuities in the color, hue and saturation at the borders. This again is sensitive to the sort of image you are compressing. For example, if it has a very flat (say black or white) background, then you will not notice. Alternatively, if the image is tonally rich, like someone's face, you will notice it a lot.

Again, this effect means that it is not really possible to automate the process of figuring out what compression setting is optimum.
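
The practical answer is to look at a handful of settings by eye, e.g. with something like this rough, untested sketch (the source file name is made up):

<?php
/* untested sketch: write the same image at several quality settings so
   the block artifacts can be compared by eye; 'source.jpg' is made up */
$im = imagecreatefromjpeg('source.jpg');

foreach (array(90, 75, 50, 25, 10) as $q)
   {
   imagejpeg($im, "sample_q$q.jpg", $q);
   printf("quality %d: %d bytes\n", $q, filesize("sample_q$q.jpg"));
   }

imagedestroy($im);
?>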

As for PNG: as far as I know, the only issue with any realistic browser (other than very old ones like IE2 or something) is that the alpha channel is not supported. Since there is no alpha channel in JPEG, there is no difference. Though I do not profess to be absolutely sure that all browsers you might encounter manage PNG OK.

Side Issues:
DCT is integer. This means that if you have zero compression in the DCT process, then you get out what you put in (except if you get overflow, which can be avoided as far as I know). This is not the case with FFT, where floating point errors mean you always lose something. Thus JPEG at 100% should be at or near perfect (lossless) but does not actually compress.
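
You can check that last point directly - a rough, untested sketch (the source file name is made up) that writes the same truecolor image as JPEG at 100 and as PNG and prints the two file sizes:

<?php
/* untested sketch: the same truecolor image as JPEG at quality 100 and
   as PNG; 'source.jpg' is made up */
$im = imagecreatefromjpeg('source.jpg');

imagejpeg($im, 'full.jpg', 100);
imagepng($im, 'full.png');

printf("jpeg at 100: %d bytes\n", filesize('full.jpg'));
printf("png:         %d bytes\n", filesize('full.png'));

imagedestroy($im);
?>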

Another area where FFT and DCT become very interesting is in moving picture processing. You can filter video using FFT or DCT in ways that are hard or impossible using spatial filters. This can be good for improving noisy or fuzzy 'avi' files etc.

Best wishes

AJ

PS - I'll stick the above on my nerd blog nerds-central.blogspot.com; if you have any good links to suggest to expand the subject, please let me know and I shall add them.


Alexander J Turner Ph.D.
www.project-network.com
www.deployview.com
www.funkifunctions.blogspot.com

-----Original Message-----
From: tedd [mailto:tedd@xxxxxxxxxxxx]
Sent: 23 August 2006 20:17
To: Alex Turner; php-general@xxxxxxxxxxxxx
Subject: TPN POSSIBLE SPAM: Re: Why small > big?

Alex:

Excuse me for top posting:

You said: Clear as mud?

Well actually, it's simpler than I thought. After your reply, I did
some reading on JPEG and found it's simply a transform, not unlike the
FFT, where two-dimensional temporal data is transformed from the time
domain to the frequency domain -- very interesting reading.

The reverse cosine matrix you mention is probably the discrete cosine
transform (DCT) matrix where the x, y pixels of an image file have a
z component representing color. From that you can translate the data
into the frequency domain, which actually generates more data than
the original.

However, the quality setting is where you make it back up in
compression ratios by trimming off higher frequencies which don't
add much to the data. Unlike the FFT, the algorithm does not address
phasing, which I found interesting.

However, the answer to my question deals with the quality statement.
In the statement:

imagejpeg($image_p, null, 100);

I should have used something less than 100.

I've changed the figure to 25 and don't see any noticeable difference
in quality of the thumbnail.

It seems to me there should be a table (or algorithm) somewhere that
would recommend what quality to use when reducing the size of an
image via this method. In this case, I reduced an image 62 percent
(38% of the original) with a quality setting of 25 and "see" no
difference. I think this (the quality factor) is programmable.

As for PNG images, I would probably agree (if I saw comparisons), but
not all browsers accept them. I believe that at least one version of
IE has problems with PNGs, right?

tedd

At 4:45 PM +0100 8/23/06, Alex Turner wrote:
M. Sokolewicz got it nearly correct.  However, the situation is a
little more complex than he has discussed.

The % compression figure for jpeg is translated into the amount of
information stored in the reverse cosine matrix.  The size of the
compressed file is not proportional to the % you set in the
compressor.  Thus 100% actually means store all the information in
the reverse cosine matrix.  This is like storing the image in a 24
bit png, but with the compressor turned off.  So at 100% jpeg is
quite inefficient.

The other issue is the amount of high frequency information in your
images.  If you have a 2000x2000 image with most of the image
dynamics at a 10 pixel frequency, and you reduce this to 200x200
then the JPEG compression algorithm will 'see' approximately the
same amount of information in the image :-(  The reality is not
quite as simple as this because of the way JPEG uses blocks etc, but
it is an easy way of thinking about it.

What all this means is that as you reduce the size of an image, if
you want it to retain some of the detail of the original but at a
smaller size, there will be a point at which 8 or 24 bit PNG will
become a better bet.
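
One way to handle that in practice (a rough, untested sketch - the file names and the quality figure of 75 are arbitrary) is simply to write the thumbnail both ways and keep whichever file comes out smaller:

<?php
/* untested sketch: make a 200x200 thumbnail and keep whichever of JPEG
   (arbitrary quality 75) or PNG comes out smaller; names are made up */
$src   = imagecreatefromjpeg('source.jpg');
$thumb = imagecreatetruecolor(200, 200);
imagecopyresampled($thumb, $src, 0, 0, 0, 0, 200, 200,
   imagesx($src), imagesy($src));

imagejpeg($thumb, 'thumb.jpg', 75);
imagepng($thumb, 'thumb.png');

if (filesize('thumb.png') < filesize('thumb.jpg'))
   {
   unlink('thumb.jpg');   // keep the PNG
   }
else
   {
   unlink('thumb.png');   // keep the JPEG
   }

imagedestroy($src);
imagedestroy($thumb);
?>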

Clear as mud?

AJ

M. Sokolewicz wrote:
I'm not quite sure, but consider the following:

Considering the fact that most JPEG images are stored with some
form of compression (usually ~75%), that would mean the original
image, at actual size, is about 1.33x bigger than it appears in
filesize. When you make a thumbnail, you limit the number of
pixels, but you are setting compression to 100% (besides that, you
also use a truecolor palette, which adds to its size). So, for
images which are scaled down less than 25% (actually this will
probably be more around 30-ish, due to palette differences) you'll
actually see the thumbnail being bigger in *filesize* than the
original (though smaller in memory-size).

- tul

P.S. isn't error_reporting( FATAL | ERROR | WARNING ); supposed to
be error_reporting( E_FATAL | E_ERROR | E_WARNING ); ??

tedd wrote:
Hi gang:

I have a thumbnail script, which does what it is supposed to do.
However, the thumbnail image generated is larger than the original
image, how can that be?

Here's the script working:

http://xn--ovg.com/thickbox

And, here's the script:

<?php /* thumb from file */

/* some settings */
ignore_user_abort();
set_time_limit( 0 );
error_reporting( FATAL | ERROR | WARNING );

/* security check */
ini_set( 'register_globals', '0' );
/* start buffered output */
ob_start();

/* some checks */
if ( ! isset( $_GET['s'] ) ) die( 'Source image not specified' );

$filename = $_GET['s'];

// Set a maximum height and width
$width = 200;
$height = 200;

// Get new dimensions
list($width_orig, $height_orig) = getimagesize($filename);

if ($width && ($width_orig < $height_orig))
     {
     $width = ($height / $height_orig) * $width_orig;
     }
else
     {
     $height = ($width / $width_orig) * $height_orig;
     }

// Resample
$image_p = imagecreatetruecolor($width, $height);
$image = imagecreatefromjpeg($filename);
imagecopyresampled($image_p, $image, 0, 0, 0, 0, $width, $height,
$width_orig, $height_orig);

//  Output & Content type
header('Content-type: image/jpeg');
imagejpeg($image_p, null, 100);

/* end buffered output */
ob_end_flush();
?>

---

Thanks in advance for any comments, suggestions or answers.

tedd








--
-------
http://sperling.com  http://ancientstones.com  http://earthstones.com

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php

