Chris writes:
You cannot put in detail that is not there. It is empty magnification; you
get it in optics also.
As you say, Chris, you cannot, but you can certainly make it *look* better
when you increase its size, and for pictorial pictures perception is pretty
much everything. (However, I will grumble if accuracy is the goal, and I can
point out plenty of cases where added data leads to misinterpretation of the
actual data.)
However, how you add this fictional material is pretty important to get the
look right. Cameras use some complex and advanced algorithms to add this
data. By way of illustration, think of it like this: you need to make an
image bigger, so you just duplicate the data you already have and stick it
between the existing pixels. This is the origin of that advice regarding
Photoshop - increasing the size in small increments adds it slowly. It's a
very, very basic method, and it's a fundamental flaw of Photoshop that it had
(still has?) such rudimentary methods of upsizing images.
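If you want to see what that "just duplicate the data" approach really is,
here's a quick sketch in Python. It assumes numpy and Pillow are installed,
and "photo.png" is just a placeholder file name.

# Rough sketch of dumb upsizing: duplicate each pixel rather than
# estimating new ones. Assumes numpy and Pillow; "photo.png" is a
# placeholder file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png"))

# Double the size by repeating every row and every column once.
# No new detail is invented; each pixel simply becomes a 2x2 block,
# which is where the blocky, checkerboard look comes from.
doubled = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

Image.fromarray(doubled).save("photo_2x_duplicated.png")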
By way of example, see here for a page I put together a long time back:
http://members.iinet.net.au/~shahjen/ebayimages/sharpening.html
If you look at the grey panels in my examples - the scroll bars on the right
of each image pane - you'll see a checkerboard pattern instead of a smooth
grey region. This is because naive upsizing is basically stupid. I used these
screenshots in place of any other pictures because they're recognizable and
it's clear as daylight what the upsizing is doing to the image.
As this* guy says though, some algorithms are designed to be more
intelligent than that: they will actually look at the data in the image, work
out what is important and where patterns may exist, and from this they can
predict what the missing data should be. Armed with that, they'll do a good
job of adding new stuff. Some algorithms are better at different tasks than
others, so it's worth not just using one for everything but instead
experimenting to find the best one for the task. ( *
http://web.archive.org/web/20060307093804/http://www.interpolatethis.com/interp.html -
his page is gone now - a shame. People need to understand that the internet
is a bit more ephemeral than we've been led to believe. Authors grow tired
or pass away, so their internet writings are a bit like books - it isn't as
though they'll be topping up their pages with new data all the time, and
eventually it won't be accessible any more.)
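To compare a few of the common methods yourself, here's another small sketch
(Pillow assumed installed, file names are placeholders) that upsizes one
image with several of Pillow's built-in resampling filters:

# Sketch: upscale one image with several of Pillow's resampling filters
# and save each result, so the differences can be compared by eye.
# Pillow is assumed installed; "small.png" is a placeholder file name.
from PIL import Image

src = Image.open("small.png")
target = (src.width * 4, src.height * 4)

filters = {
    "nearest": Image.NEAREST,    # plain duplication, blocky
    "bilinear": Image.BILINEAR,  # simple averaging, soft
    "bicubic": Image.BICUBIC,    # fits curves through neighbouring pixels
    "lanczos": Image.LANCZOS,    # windowed sinc, usually the cleanest
}

for name, resample in filters.items():
    src.resize(target, resample=resample).save(f"upsized_{name}.png")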
Downsizing is another case where an intelligent algorithm should be used.
Do it wrong or use a stupid one and you end up with those dreaded jaggies.
There is NO reason anyone should see jagged edges in a downsized image (even
though you see it all the time!) - it's just that a plain resize function has
been applied instead of resampling with a proper algorithm. As you can easily
imagine, take unintelligent bites at regular intervals out of a descending
slope and you end up with a staircase - that's how it happens.
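The staircase is easy to reproduce. Here's a sketch (numpy and Pillow
assumed, "big.png" is a placeholder) that shrinks the same image two ways -
by throwing rows and columns away, and by resampling properly:

# Sketch: two ways to shrink an image to a quarter of its size.
# Assumes numpy and Pillow; "big.png" is a placeholder file name.
import numpy as np
from PIL import Image

img = Image.open("big.png")
arr = np.asarray(img)

# "Stupid" downsizing: keep every 4th row and column, discard the rest.
# Diagonal edges turn into staircases because whole slices of the slope
# are simply bitten out.
decimated = np.ascontiguousarray(arr[::4, ::4])
Image.fromarray(decimated).save("down_decimated.png")

# Resampling: each output pixel is computed from the neighbourhood it
# replaces, so edges stay smooth.
small = img.resize((img.width // 4, img.height // 4), resample=Image.LANCZOS)
small.save("down_resampled.png")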
If you want detail do not use jpeg. Other forms of compression are
available, like TIFF and GEM. These are lossless.
Jpeg can be lossless too. Jpeg has drawn a lot of criticism because of the
way it gets used - it is NOT inherently lossy; it's that the staple, lauded
and revered image editing programs are lazy and resting on their laurels.
Photoshop doesn't handle jpeg losslessly, but that isn't the fault of the
jpeg! Other image programs can. Yes, if you rotate a jpeg it loses data in
Photoshop - not so in IrfanView if you use its lossless jpeg
transformations - so that is a clear example. I'd also add that jpeg is
actually a container for a bitmap, just as zip is a container for whatever
you stick inside it. How heavily a jpeg is compressed is at the discretion
of the user. What jpeg is really good at is intelligently eliminating
redundant information for the purpose of storing the image in a smaller
space.
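To make the rotation example above concrete, here's a sketch of the two
routes. It assumes Pillow plus the jpegtran command-line tool from libjpeg
(which is what the lossless transformations in tools like IrfanView are
built on), and "photo.jpg" is a placeholder:

# Sketch: lossy vs lossless 90-degree rotation of a jpeg.
# Assumes Pillow and the jpegtran utility (libjpeg) are installed;
# "photo.jpg" is a placeholder file name.
import subprocess
from PIL import Image

# Lossy route: decode the jpeg, rotate the pixels, re-encode. The second
# compression pass throws data away even at high quality settings.
img = Image.open("photo.jpg")
img.transpose(Image.ROTATE_90).save("rotated_lossy.jpg", quality=95)

# Lossless route: jpegtran rearranges the already-compressed blocks
# without decoding them, so nothing is recompressed or lost.
subprocess.run(
    ["jpegtran", "-rotate", "90", "-perfect",
     "-outfile", "rotated_lossless.jpg", "photo.jpg"],
    check=True,
)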
Picture a 6000x6000 pixel bitmap that is pure white. A bitmap, a TIFF and
others will store every single pixel. A jpeg will, in effect, record the
colour data of one pixel and a map of where they all are and store that
instead - obviously the image file will be a lot smaller (the bitmap will be
105,469 KB, the jpeg at maximum quality will be 202 KB, and interestingly,
an 80% jpeg will be the same size - a clear demonstration that there was no
data loss whatsoever).
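You can reproduce that comparison with a few lines (a sketch, assuming
Pillow; the exact byte counts depend on the encoder, so they won't match my
figures to the kilobyte):

# Sketch: generate the pure-white test image and save it in a few
# formats to compare file sizes. Assumes Pillow; exact sizes depend
# on the encoder used.
import os
from PIL import Image

white = Image.new("RGB", (6000, 6000), "white")

white.save("white.bmp")                     # every pixel stored explicitly
white.save("white_q100.jpg", quality=100)   # "maximum quality" jpeg
white.save("white_q80.jpg", quality=80)     # 80% jpeg

for name in ("white.bmp", "white_q100.jpg", "white_q80.jpg"):
    print(name, os.path.getsize(name) // 1024, "KB")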
A footnote to the matter of resizing and data interpolation: on the basic
math of camera digital image capture, there is utterly no way the cameras
can get the detail recorded in some images with optics and sensor alone -
some seriously intelligent interpolating is going on. The resolving power of
the lens, which everyone knows contributes to the resolving power of the
image capture system, should by itself ensure that most images are more
degraded than they actually are, but the sensor itself is also incapable of
the capture we think it is, because of the mathematical concept of a Nyquist
limit. On a single axis, the resolving power cannot exceed 1/2 the sample
rate, meaning a row of 2000 pixels cannot accurately resolve more than about
1000 line pairs (cycles). But on a two-axis system, X and Y, you have
roughly only 1/3 of the resolving capacity. Couple that with the
imperfections of a lens and your digital capture system and you'll see
there's no way we can be getting the images we do unless some clever
algorithms are adding data that is not visible to the system.
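Putting rough numbers on that (just the back-of-envelope arithmetic from
above, nothing more rigorous):

# Back-of-envelope version of the Nyquist argument above.
# The 1/2 factor is the standard Nyquist limit; the ~1/3 figure for a
# two-axis sensor is the rough estimate used in the post, not a derivation.
pixels_along_row = 2000

nyquist_cycles = pixels_along_row / 2     # finest detail resolvable on one axis
two_axis_estimate = pixels_along_row / 3  # rough two-axis (X, Y) figure from above

print(f"single axis: about {nyquist_cycles:.0f} line pairs")
print(f"two axes (rough): about {two_axis_estimate:.0f} line pairs")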