Re: agenda for today's QA meeting


 



On Tue, Jul 21, 2020 at 7:36 AM pmkellly@xxxxxxxxxxxx
<pmkellly@xxxxxxxxxxxx> wrote:
>
> I must say that all of my compression experience has been with the
> algorithms used to compress images. I won't bore you with the details,
> but we wrote software to build various image files with certain
> characteristics in pristine form. We did not use the standard test
> images that are sometimes used. The test files we used were structured
> to see how good a job the algorithms could do at preserving data. Then
> we saved and opened them using the various standardized algorithms for
> the associated file types and analyzed the results. The results were
> not impressive. We concluded that the results were fine for images: if
> some pixel values change, the average user will not notice, so it's
> not critical. However, there are many other kinds of data where such
> changes would be critical. Now, I know the algorithms used for images
> are different from those used for general file compression on disk,
> but still, I try to minimize risk.

Yeah, lossy algorithms are common in imaging. Many of them, by design,
do not reproduce an encoding identical to the original once
decompressed. The algorithms used by Btrfs are all lossless
compression, and in fact they are also commonly used in imaging: LZO
and ZLIB (ZIP, i.e. deflate). In that case you can compress and
decompress images an unlimited number of times and always get back RGB
encodings identical to the original, short of a memory or other
hardware error.
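
To make the lossless point concrete, here is a small Python sketch
(purely illustrative, not anything Btrfs actually runs) using the
standard-library zlib module, the same deflate family mentioned above:

  # Illustrative sketch: a lossless round trip with zlib (deflate).
  import zlib

  original = b"pixel data or any other bytes" * 1000

  compressed = zlib.compress(original, level=6)
  restored = zlib.decompress(compressed)

  # With a lossless codec the decompressed bytes are identical to the
  # input, no matter how many times the round trip is repeated.
  assert restored == original
  print(len(original), "bytes in,", len(compressed), "bytes compressed")

The only thing a lossless codec trades away is CPU time and, for
incompressible data, a little space overhead; the bytes themselves
come back exactly.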


> Since I'm never short of disk space, I prefer not to use compression.
> I was very excited and pleased when I found out that btrfs checksums
> files. However, now I understand that it is a patch to make up for the
> compression. It seems like a zero-sum gain to me.

I'm not sure what you mean.

Btrfs has had checksumming from day one. It was integral to the
design, before the compression algorithms even landed. It is there to
make up for the fact that hardware sometimes lies or gets confused,
anywhere in the storage stack. The default for both metadata (the fs
itself) and data (file contents) is crc32c; it is possible to disable
it for data but not for metadata. Compression only ever applies to
data; it is never applied to metadata. Checksumming has intrinsic
value regardless of compression.
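
As a rough illustration of why checksumming has value on its own,
here's a small Python sketch. It uses zlib.crc32 from the standard
library for convenience; Btrfs's default is crc32c (the Castagnoli
polynomial), a different variant, but the detection principle is the
same:

  # Illustrative sketch of checksum-based corruption detection.
  import zlib

  data = bytearray(b"file contents written to disk")
  stored_csum = zlib.crc32(data)   # checksum recorded at write time

  # Simulate the storage stack "lying": one flipped bit on the media.
  data[5] ^= 0x01

  # On read, the recomputed checksum no longer matches, so the
  # corruption is detected instead of being silently returned.
  if zlib.crc32(data) != stored_csum:
      print("checksum mismatch: corruption detected")

None of that depends on whether the data happened to be stored
compressed or not.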



-- 
Chris Murphy