understanding the bluestore blob, chunk and compression params

Hi all,

I'm trying to compress an rbd pool by backfilling the existing data,
but the allocated space doesn't match what I expect.

Here is the test: I marked osd.130 out and waited for it to drain all of its data.
Then I set compression_mode=force and compression_algorithm=zstd on the pool.
Then I marked osd.130 in again so that its PGs/objects would backfill
back onto it (this time getting compressed).

After a few tens of minutes we have:
        "bluestore_compressed": 989250439,
        "bluestore_compressed_allocated": 3859677184,
        "bluestore_compressed_original": 7719354368,

So allocated is exactly 50% of original, but we are wasting space,
because compressed is only 12.8% of original.
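
(Just to be explicit about where those two percentages come from: this is
nothing more than dividing the counters quoted above -- plain Python,
nothing ceph-specific:)

    compressed           = 989250439     # bluestore_compressed
    compressed_allocated = 3859677184    # bluestore_compressed_allocated
    compressed_original  = 7719354368    # bluestore_compressed_original

    print("allocated / original  = %.3f" % (compressed_allocated / compressed_original))  # 0.500
    print("compressed / original = %.3f" % (compressed / compressed_original))            # 0.128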

I don't understand why...

The rbd images all use 4MB objects, and we use the default chunk and
blob sizes (in v13.2.6):
   osd_recovery_max_chunk = 8MB
   bluestore_compression_max_blob_size_hdd = 512kB
   bluestore_compression_min_blob_size_hdd = 128kB
   bluestore_max_blob_size_hdd = 512kB
   bluestore_min_alloc_size_hdd = 64kB

From my understanding, backfilling should read a whole 4MB object from
the source osd and then write it to osd.130's bluestore, compressing it
in 512kB blobs. Those compress on average to 12.8%, i.e. each 512kB blob
should shrink to roughly one 64kB allocation unit, so I would expect
allocated to be closer to bluestore_min_alloc_size_hdd /
bluestore_compression_max_blob_size_hdd = 64kB / 512kB = 12.5%.
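
To spell out that back-of-the-envelope (plain Python, using only the
defaults listed above and the 12.8% ratio from the counters; the idea
that each compressed blob's allocation gets rounded up to a min_alloc
unit is my assumption about how bluestore allocates):

    KiB = 1024
    object_size = 4096 * KiB    # rbd object size (4MB)
    max_blob    = 512 * KiB     # bluestore_compression_max_blob_size_hdd
    min_alloc   = 64 * KiB      # bluestore_min_alloc_size_hdd
    ratio       = 0.128         # average compression ratio from the counters above

    blobs_per_object = object_size // max_blob    # 8 compression blobs per 4MB object
    compressed_blob  = ratio * max_blob / KiB     # ~65.5 KiB of compressed data per blob

    # Best case: each compressed blob lands in roughly one 64kB allocation unit,
    # so allocated/original should approach min_alloc / max_blob.
    print("blobs per object         : %d" % blobs_per_object)             # 8
    print("compressed data per blob : ~%.1f KiB" % compressed_blob)       # ~65.5
    print("expected allocated/original ~ %.3f" % (min_alloc / max_blob))  # 0.125
    print("observed allocated/original ~ 0.500")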

Does anyone understand where the 0.5 ratio is coming from?

Thanks!

Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


