Re: bluestore compression enabled but no data compressed

Hi David,

thanks for your quick answer. When I look at both references, I see exactly the same command:

ceph osd pool set {pool-name} {key} {value}

where one page describes only the keys specific to compression. This is the command I found and used. However, I can't see any compression happening. If you know of anything other than the "ceph osd pool set" commands, please let me know.
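
For reference, these are the compression-related keys that page lists, applied to the test pool from further down in this thread (compression_mode is the one I actually set; the other values are only examples of the syntax, taken from the OSD defaults shown below):

ceph osd pool set sr-fs-data-test compression_mode aggressive
ceph osd pool set sr-fs-data-test compression_algorithm snappy
ceph osd pool set sr-fs-data-test compression_required_ratio 0.875
ceph osd pool set sr-fs-data-test compression_min_blob_size 131072
ceph osd pool set sr-fs-data-test compression_max_blob_size 524288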

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: David Turner <drakonstein@xxxxxxxxx>
Sent: 12 October 2018 15:47:20
To: Frank Schilder
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  bluestore compression enabled but no data compressed

It's all of the settings that you found in your first email when you dumped the configurations and such.  http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
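
Concretely, that's the bluestore_* options, e.g. something along these lines in ceph.conf under [osd] (a sketch; running OSDs would need a restart or injectargs to pick it up):

[osd]
bluestore_compression_mode = aggressive
bluestore_compression_algorithm = snappy

or, roughly, at runtime:

ceph tell osd.* injectargs '--bluestore_compression_mode=aggressive'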

On Fri, Oct 12, 2018 at 7:36 AM Frank Schilder <frans@xxxxxx> wrote:
Hi David,

thanks for your answer. I did enable compression on the pools as described in the link you sent below (ceph osd pool set sr-fs-data-test compression_mode aggressive; I also tried force, to no avail). However, I could not find anything on enabling compression per OSD. Could you possibly provide a source or sample commands?

Thanks and best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: David Turner <drakonstein@xxxxxxxxx>
Sent: 09 October 2018 17:42
To: Frank Schilder
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  bluestore compression enabled but no data compressed

When I've tested compression before, there are two places you need to configure it: on the OSDs, via the configuration settings you mentioned, but also on the [1] pools themselves.  If the compression mode on a pool is set to none, then it doesn't matter what the OSD configuration is, and vice versa, unless you are using the force setting.  If you want to compress everything by default, set the pools to passive and the OSDs to aggressive.  If you want to compress only specific pools, set the OSDs to passive and the specific pools to aggressive.  Good luck.
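
As a sketch of those two setups (<pool> is a placeholder):

# default-compress everything
#   ceph.conf [osd]:  bluestore_compression_mode = aggressive
ceph osd pool set <pool> compression_mode passive

# compress only selected pools
#   ceph.conf [osd]:  bluestore_compression_mode = passive
ceph osd pool set <pool> compression_mode aggressive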


[1] http://docs.ceph.com/docs/mimic/rados/operations/pools/#set-pool-values

On Tue, Sep 18, 2018 at 7:11 AM Frank Schilder <frans@xxxxxx> wrote:
I seem to have a problem getting bluestore compression to do anything. I followed the documentation and enabled bluestore compression on various pools by executing "ceph osd pool set <pool-name> compression_mode aggressive". Unfortunately, it seems like no data is compressed at all. As an example, below is some diagnostic output for a data pool used by a cephfs:

[root@ceph-01 ~]# ceph --version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)

All defaults are OK:

[root@ceph-01 ~]# ceph --show-config | grep compression
[...]
bluestore_compression_algorithm = snappy
bluestore_compression_max_blob_size = 0
bluestore_compression_max_blob_size_hdd = 524288
bluestore_compression_max_blob_size_ssd = 65536
bluestore_compression_min_blob_size = 0
bluestore_compression_min_blob_size_hdd = 131072
bluestore_compression_min_blob_size_ssd = 8192
bluestore_compression_mode = none
bluestore_compression_required_ratio = 0.875000
[...]

Compression is reported as enabled:

[root@ceph-01 ~]# ceph osd pool ls detail
[...]
pool 24 'sr-fs-data-test' erasure size 8 min_size 7 crush_rule 10 object_hash rjenkins pg_num 50 pgp_num 50 last_change 7726 flags hashpspool,ec_overwrites stripe_width 24576 compression_algorithm snappy compression_mode aggressive application cephfs
[...]

[root@ceph-01 ~]# ceph osd pool get sr-fs-data-test compression_mode
compression_mode: aggressive
[root@ceph-01 ~]# ceph osd pool get sr-fs-data-test compression_algorithm
compression_algorithm: snappy

We dumped a 4 GiB file with dd from /dev/zero; all-zero data should compress with an excellent ratio.
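
The file was written through the cephfs mount with something along these lines (the mount point and file name are placeholders):

dd if=/dev/zero of=/mnt/cephfs/zeros.bin bs=4M count=1024

Then search for a PG of the pool: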

[root@ceph-01 ~]# ceph pg ls-by-pool sr-fs-data-test
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES     LOG DISK_LOG STATE        STATE_STAMP                VERSION  REPORTED UP                       UP_PRIMARY ACTING                   ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP                LAST_DEEP_SCRUB DEEP_SCRUB_STAMP
24.0         15                  0        0         0       0  62914560  77       77 active+clean 2018-09-14 01:07:14.593007  7698'77 7735:142 [53,47,36,30,14,55,57,5]         53 [53,47,36,30,14,55,57,5]             53    7698'77 2018-09-14 01:07:14.592966             0'0 2018-09-11 08:06:29.309010

There is about 250 MB of data on the primary OSD, but nothing seems to be compressed:

[root@ceph-07 ~]# ceph daemon osd.53 perf dump | grep blue
[...]
        "bluestore_allocated": 313917440,
        "bluestore_stored": 264362803,
        "bluestore_compressed": 0,
        "bluestore_compressed_allocated": 0,
        "bluestore_compressed_original": 0,
[...]

Just to make sure, I checked one of the objects' contents:

[root@ceph-01 ~]# rados ls -p sr-fs-data-test
10000000004.0000039c
[...]
10000000004.0000039f

The objects are 4 MiB chunks ...
[root@ceph-01 ~]# rados -p sr-fs-data-test stat 10000000004.0000039f
sr-fs-data-test/10000000004.0000039f mtime 2018-09-11 14:39:38.000000, size 4194304

... with all zeros:

[root@ceph-01 ~]# rados -p sr-fs-data-test get 10000000004.0000039f obj

[root@ceph-01 ~]# hexdump -C obj
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00400000

Everything is as it should be, except for compression. Am I overlooking something?

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



