Re: agenda for today's QA meeting

On 7/21/20 13:11, Chris Murphy wrote:

Yeah, lossy algorithms are common in imaging. There are many kinds
that unquestionably do not produce an encoding identical to the
original once decompressed. The algorithms used by Btrfs are all
lossless, and in fact those are also commonly used in imaging: LZO
and ZLIB (ZIP, i.e. deflate). With those you can compress and
decompress images an unlimited number of times and always get back
RGB encodings identical to the original, short of memory or other
hardware errors.
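
The claim above is easy to sanity-check empirically. Here is a minimal
round-trip sketch using Python's standard zlib module (the same deflate
named above), with arbitrary bytes standing in for image data:

    import zlib

    # Arbitrary payload; any byte sequence behaves the same way.
    original = bytes(range(256)) * 1000

    # deflate is lossless, so every compress/decompress cycle must
    # reproduce the input byte for byte.
    data = original
    for _ in range(100):
        data = zlib.decompress(zlib.compress(data))

    assert data == original  # identical after 100 round trips
    print("100 round trips, output identical to input")

Any mismatch, however it arose, would trip the assert.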


At the risk of sounding skeptical, I've heard the word "lossless" applied to lots of algorithms and devices where I didn't think it was appropriate. As an approximate example: when we were doing that testing, we were hoping to find something in the neighborhood of a 10^-6 probability of a single byte error in a file of a certain structure and size when exercised a certain number of times. Sorry for being so vague. Is there any statistical data on these algorithms that is publicly available? The only ones I've ever seen (not a large population, since I've been a compression avoider) that approach lossless don't compress much and only take out strings of the same byte value.
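
To put a number like that in context, here is a back-of-the-envelope
sketch in Python with purely hypothetical figures (none of them come
from the testing described above):

    # Hypothetical figures for illustration only.
    p = 1e-6      # assumed probability of a byte error per exercise
    k = 100_000   # assumed number of exercises of the file

    # Probability of at least one error across k exercises,
    # assuming independent trials.
    at_least_one = 1 - (1 - p) ** k
    print(f"P(at least one error in {k} exercises) = {at_least_one:.4f}")

That comes out to roughly 0.095, i.e. about a one-in-ten chance of
observing a single failure even at the hoped-for error rate.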


Since I'm never short of disk space, I prefer not to use compression.
I was very excited and pleased when I found out that btrfs checksums
files. However, now I understand that it is a patch to make up for
the compression. It seems like a zero-sum game to me.

I'm not sure what you mean.

Btrfs has always had checksumming, from day 0. It was integral to the
design before the compression algorithms landed. It makes up for the
fact that hardware sometimes lies or gets confused, anywhere in the
storage stack. The default checksum for metadata (the fs itself) and
data (file contents) is crc32c; it is possible to disable it for data
but not for metadata. Compression only ever applies to data, never to
metadata. Checksumming has intrinsic value regardless of compression.
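
For reference, crc32c is the Castagnoli variant of CRC-32, not the
plain CRC-32 that Python's zlib.crc32 computes. A minimal sketch of
computing it, assuming the third-party crc32c package (pip install
crc32c):

    # Assumes the third-party "crc32c" package; zlib.crc32 uses a
    # different polynomial and will not produce matching values.
    import crc32c

    data = b"file contents to protect"
    checksum = crc32c.crc32c(data)
    print(f"crc32c: {checksum:#010x}")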


Sorry, I have no knowledge of the history of btrfs, so please forgive me when I say or ask silly things.

I know about checksumming and use it manually on files that are important. Yeah, I know about the hardware too; I'm an electrical engineer. If reliability really matters for a design, one of the first things I look for when considering any new chip is whether the manufacturer has any credible reliability data.
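
The manual routine is roughly this kind of thing; a minimal sketch
with Python's standard hashlib, where "important.dat" is just a
placeholder path:

    import hashlib

    # Hash a file in chunks so large files need not fit in memory.
    def file_sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Record the digest once; re-run later and compare the two
    # values to catch silent changes.
    print(file_sha256("important.dat"))  # placeholder path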

The problem here is that anything to do with PCs or servers is largely driven by cost, and there always has to be a new, better, more exciting model tomorrow. That environment produces very little in the way of parts with long histories and proven reliability data. That's why I was originally so happy about checksumming being automatic in btrfs.

What's considered the metadata? The path to the file, the file name, file header, file footer, data layout?

Oh, I just noticed crc32c. That's acceptable.

Sorry for going on so much.


	Thanks and Have a Great Day!

	Pat		(tablepc)