To illustrate how an image is prepared for processing by a compression algorithm, we developed an applet that demonstrates
each step of this preparation. Using it, the user discovers that the preparation itself is slightly lossy. The user can choose how many pixels each block should contain.
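The block partitioning the applet visualizes can be sketched in a few lines of Python; the function name and the use of plain nested lists are our own choices for illustration, and we assume the image dimensions are multiples of the block size (the applet would pad or crop beforehand):

```python
def split_into_blocks(image, block_size):
    """Split a 2D image (a list of pixel rows) into square tiles.

    Assumes height and width are exact multiples of block_size.
    """
    height = len(image)
    width = len(image[0])
    blocks = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            # Slice out one block_size x block_size tile.
            block = [row[left:left + block_size]
                     for row in image[top:top + block_size]]
            blocks.append(block)
    return blocks

# A 4x4 image split into 2x2 blocks yields four tiles.
image = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]
print(split_into_blocks(image, 2)[0])  # first tile: [[1, 2], [5, 6]]
```

Each tile is then handed to the rest of the pipeline independently.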
For color images, Red-Green-Blue (RGB) values are transformed into a luminance/chrominance color space (YCbCr, YUV, etc.). The luminance component is greyscale
and the other two axes are color information. The reason for doing this is that one can afford to lose a lot more information in the chrominance components than in the luminance component: the human eye is
not as sensitive to high-frequency chroma information as it is to high-frequency luminance. Strictly speaking, the color-space change is optional, since the remainder of the algorithm works on each color
component independently and does not care what the data represents. However, compression will suffer, because all components then have to be coded at luminance quality. Note that the color-space transformation is slightly
lossy due to round-off errors, but these errors are much smaller than the ones we typically introduce later on. Next, the user down-samples each component by averaging together groups of pixels. The luminance
component is left at full resolution, while the chroma components are often reduced 2:1 horizontally and either 2:1 or 1:1 (no change) vertically. In JPEG-speak these alternatives are usually called 2h2v and
2h1v sampling, but one may also see the terms "411" and "422" sampling. This step immediately reduces the data volume by one-half or one-third. In numerical terms it is highly lossy, but
for most images it has almost no impact on perceived quality, because of the eye's poorer resolution for chroma information. Note that down-sampling does not apply to greyscale data; this is one reason
why color images are more compressible than greyscale images.
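The two steps described above, the luminance/chrominance transform and 2:1 averaging of the chroma planes, can be sketched as follows. The BT.601 coefficients shown are the ones JPEG commonly uses; the helper names are our own, and real codecs would also clamp and quantize the results:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr using the BT.601 coefficients.

    The rounding at the end is where the small, recoverable-in-practice
    loss of this step comes from.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

def downsample_2to1(plane):
    """2h2v subsampling: replace each 2x2 group of a chroma plane
    with its average, quartering the number of samples."""
    out = []
    for i in range(0, len(plane), 2):
        row = []
        for j in range(0, len(plane[0]), 2):
            total = (plane[i][j] + plane[i][j + 1] +
                     plane[i + 1][j] + plane[i + 1][j + 1])
            row.append(total / 4)
        out.append(row)
    return out

# A pure grey pixel (r == g == b) maps to Y == value, Cb == Cr == 128.
print(rgb_to_ycbcr(200, 200, 200))                # (200, 128, 128)
# A 2x2 chroma group collapses to one averaged sample.
print(downsample_2to1([[100, 120], [140, 160]]))  # [[130.0]]
```

With 2h2v sampling each chroma plane shrinks to a quarter of its size, so the three full-resolution planes (3 units of data) become 1 + 1/4 + 1/4 = 1.5 units, the one-half reduction mentioned above.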