bluestore compression enabled but no data compressed

I seem to have a problem getting bluestore compression to do anything. I followed the documentation and enabled bluestore compression on various pools by executing "ceph osd pool set <pool-name> compression_mode aggressive". Unfortunately, it seems like no data is compressed at all. As an example, below is some diagnostic output for a data pool used by a cephfs:

[root@ceph-01 ~]# ceph --version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)

All compression settings are at their defaults:

[root@ceph-01 ~]# ceph --show-config | grep compression
[...]
bluestore_compression_algorithm = snappy
bluestore_compression_max_blob_size = 0
bluestore_compression_max_blob_size_hdd = 524288
bluestore_compression_max_blob_size_ssd = 65536
bluestore_compression_min_blob_size = 0
bluestore_compression_min_blob_size_hdd = 131072
bluestore_compression_min_blob_size_ssd = 8192
bluestore_compression_mode = none
bluestore_compression_required_ratio = 0.875000
[...]
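
For reference, bluestore_compression_mode defaults to "none" on the OSD side; the per-pool setting used here is supposed to take precedence over it. If one wanted to force compression OSD-wide as well, something along these lines should work (a sketch, not part of the session above; it may need an OSD restart to take full effect):

[root@ceph-01 ~]# ceph tell osd.* injectargs '--bluestore_compression_mode=aggressive'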

Compression is reported as enabled:

[root@ceph-01 ~]# ceph osd pool ls detail
[...]
pool 24 'sr-fs-data-test' erasure size 8 min_size 7 crush_rule 10 object_hash rjenkins pg_num 50 pgp_num 50 last_change 7726 flags hashpspool,ec_overwrites stripe_width 24576 compression_algorithm snappy compression_mode aggressive application cephfs
[...]

[root@ceph-01 ~]# ceph osd pool get sr-fs-data-test compression_mode
compression_mode: aggressive
[root@ceph-01 ~]# ceph osd pool get sr-fs-data-test compression_algorithm
compression_algorithm: snappy
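
For completeness, the per-pool settings were applied with the usual pool commands, roughly like this:

[root@ceph-01 ~]# ceph osd pool set sr-fs-data-test compression_algorithm snappy
[root@ceph-01 ~]# ceph osd pool set sr-fs-data-test compression_mode aggressive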

We dumped a 4 GiB file with dd from /dev/zero onto the cephfs; that should be trivial to compress with an excellent ratio.
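The command was along these lines (the mount point and exact count are illustrative, not copied from the session):

[root@client ~]# dd if=/dev/zero of=/mnt/cephfs/zeros.bin bs=4M count=1024

Then search for a PG in the test pool: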

[root@ceph-01 ~]# ceph pg ls-by-pool sr-fs-data-test
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES     LOG DISK_LOG STATE        STATE_STAMP                VERSION  REPORTED UP                       UP_PRIMARY ACTING                   ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP                LAST_DEEP_SCRUB DEEP_SCRUB_STAMP           
24.0         15                  0        0         0       0  62914560  77       77 active+clean 2018-09-14 01:07:14.593007  7698'77 7735:142 [53,47,36,30,14,55,57,5]         53 [53,47,36,30,14,55,57,5]             53    7698'77 2018-09-14 01:07:14.592966             0'0 2018-09-11 08:06:29.309010 
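
The acting primary for that PG can be cross-checked with pg map, e.g. (output not shown here):

[root@ceph-01 ~]# ceph pg map 24.0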

There is about 250 MB of data on the primary OSD (osd.53), but nothing seems to be compressed:

[root@ceph-07 ~]# ceph daemon osd.53 perf dump | grep blue
[...]
        "bluestore_allocated": 313917440,
        "bluestore_stored": 264362803,
        "bluestore_compressed": 0,
        "bluestore_compressed_allocated": 0,
        "bluestore_compressed_original": 0,
[...]
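
The settings the OSD actually runs with can also be checked over the admin socket, e.g. (output not included here):

[root@ceph-07 ~]# ceph daemon osd.53 config show | grep compression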

Just to make sure, I checked one of the objects' contents:

[root@ceph-01 ~]# rados ls -p sr-fs-data-test
10000000004.0000039c
[...]
10000000004.0000039f
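
For a 4 GiB file one would expect 1024 objects here; assuming nothing else was written to the test pool, a quick count can confirm that:

[root@ceph-01 ~]# rados ls -p sr-fs-data-test | wc -l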

The objects are 4M chunks ...
[root@ceph-01 ~]# rados -p sr-fs-data-test stat 10000000004.0000039f
sr-fs-data-test/10000000004.0000039f mtime 2018-09-11 14:39:38.000000, size 4194304

... with all zeros:

[root@ceph-01 ~]# rados -p sr-fs-data-test get 10000000004.0000039f obj

[root@ceph-01 ~]# hexdump -C obj
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00400000

Everything looks as it should, except for the compression. Am I overlooking something?

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



