Hi Dan,
bluestore_compression_max_blob_size is applied only to objects marked
with specific allocation hints:
  if ((alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_SEQUENTIAL_READ) &&
      (alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_RANDOM_READ) == 0 &&
      (alloc_hints & (CEPH_OSD_ALLOC_HINT_FLAG_IMMUTABLE |
                      CEPH_OSD_ALLOC_HINT_FLAG_APPEND_ONLY)) &&
      (alloc_hints & CEPH_OSD_ALLOC_HINT_FLAG_RANDOM_WRITE) == 0) {
    dout(20) << __func__ << " will prefer large blob and csum sizes" << dendl;
For regular objects "bluestore_compression_min_blob_size" is used instead,
which results in a minimum ratio of bluestore_min_alloc_size_hdd /
bluestore_compression_min_blob_size_hdd = 64kB / 128kB = 0.5.
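For reference, here is a minimal librados sketch (my own illustration, not
code from BlueStore or librbd) of how a client could set the hint
combination that branch looks for; the 4MB sizes are placeholders and
error handling is elided:

  #include <rados/librados.hpp>

  void write_with_large_blob_hints(librados::IoCtx& ioctx,
                                   const std::string& oid,
                                   const librados::bufferlist& data) {
    librados::ObjectWriteOperation op;
    // SEQUENTIAL_READ and IMMUTABLE set, RANDOM_READ and RANDOM_WRITE
    // left clear -- the combination the check above requires.
    op.set_alloc_hint2(4 << 20,  // expected object size (placeholder)
                       4 << 20,  // expected write size (placeholder)
                       LIBRADOS_ALLOC_HINT_FLAG_SEQUENTIAL_READ |
                       LIBRADOS_ALLOC_HINT_FLAG_IMMUTABLE);
    op.write_full(data);
    ioctx.operate(oid, &op);
  }

Whether an rbd workload takes that branch depends on the hints librbd
itself sends with each write, not on the pool's compression settings.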
Thanks,
Igor
On 6/20/2019 5:33 PM, Dan van der Ster wrote:
Hi all,
I'm trying to compress an rbd pool via backfilling the existing data,
and the allocated space doesn't match what I expect.
Here is the test: I marked osd.130 out and waited for it to erase all its data.
Then I set (on the pool) compression_mode=force and compression_algorithm=zstd.
Then I marked osd.130 to get its PGs/objects back (this time compressing them).
After a few tens of minutes we have:
"bluestore_compressed": 989250439,
"bluestore_compressed_allocated": 3859677184,
"bluestore_compressed_original": 7719354368,
So, allocated is exactly 50% of original (3859677184 / 7719354368 = 0.50),
but we are wasting space because compressed is only 12.8% of original
(989250439 / 7719354368 ≈ 0.128).
I don't understand why...
The rbd images all use 4MB objects, and we use the default chunk and
blob sizes (in v13.2.6):
osd_recovery_max_chunk = 8MB
bluestore_compression_max_blob_size_hdd = 512kB
bluestore_compression_min_blob_size_hdd = 128kB
bluestore_max_blob_size_hdd = 512kB
bluestore_min_alloc_size_hdd = 64kB
From my understanding, backfilling should read a whole 4MB object from
the source osd, then write it to osd.130's bluestore, compressing it in
512kB blobs. Those compress on average to 12.8% of their original size,
so I would expect allocated to be closer to bluestore_min_alloc_size_hdd /
bluestore_compression_max_blob_size_hdd = 64kB / 512kB = 12.5%.
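To make my expectation concrete, here's a back-of-the-envelope model (my
own sketch, not Ceph code) assuming each compressed blob's allocation is
rounded up to a multiple of min_alloc_size:

  #include <cstdint>
  #include <iostream>

  // BlueStore allocates disk space in min_alloc_size units, so a
  // compressed blob can never occupy less than one unit.
  static uint64_t round_up(uint64_t x, uint64_t align) {
    return (x + align - 1) / align * align;
  }

  int main() {
    const uint64_t min_alloc = 64ull << 10;  // bluestore_min_alloc_size_hdd
    const uint64_t object = 4ull << 20;      // one 4MB rbd object
    const uint64_t blob_sizes[] = {128ull << 10, 512ull << 10};

    for (uint64_t blob : blob_sizes) {
      uint64_t compressed = blob / 8;        // assume ~12.5% compression
      uint64_t allocated = (object / blob) * round_up(compressed, min_alloc);
      std::cout << "blob=" << (blob >> 10) << "kB allocated/original="
                << double(allocated) / object << "\n";
    }
    // prints 0.5 for 128kB blobs and 0.125 for 512kB blobs
  }

The 512kB case gives the 12.5% I expect; oddly, the model only reproduces
the observed 0.5 if the blobs were actually 128kB.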
Does someone understand where the 0.5 ratio is coming from?
Thanks!
Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com