Oh, for some reason I thought you'd mentioned the OSD config earlier here. Glad you figured it out anyway!
Are you doing any comparison benchmarks with/without compression? There is precious little (no?) info out there about performance impact...
Cheers,
Blair
On 3 Jul. 2018 03:18, "David Turner" <drakonstein@xxxxxxxxx> wrote:
I got back around to testing this more today and I believe I figured it out. Originally I set compression_mode to aggressive for the pool. The OSDs themselves, however, had their compression mode set to the default of none. That means that while the pool was flagging the writes so that they should be compressed, the OSD was ignoring that. Setting the OSDs to bluestore_compression_mode = passive tells the OSD not to compress anything unless the write is flagged for compression. So now I'm seeing compression for writes into the pool set to aggressive compression, and no compression for the other pools.

Alternatively, if I had set compression for the OSD to aggressive, it would have attempted compression for all writes to all pools. We have different pools in our deployment, some that I want compression on and others that I don't. If you just want compression on everything, set the OSD config to aggressive. However, if you're like me, setting the OSDs to passive and the specific pools to aggressive will give you the desired result.

On Wed, Jun 27, 2018 at 11:23 AM David Turner <drakonstein@xxxxxxxxx> wrote:

Default OSD settings for compression, and the pool has compression_mode: aggressive. From my understanding, that should compress writes to that specific pool and nothing else. First I tested this with an EC data pool, but it didn't compress anything. I thought this might be a quirk of EC sharding, so I tested with a replica pool holding 100 copies of the same 10GB file and 1TB of zeros. Neither test seemed to show any compression.

On Wed, Jun 27, 2018 at 6:38 AM Igor Fedotov <ifedotov@xxxxxxx> wrote:

And yes - the first 3 parameters from this list are the right and the only way to inspect compression effectiveness so far.
Corresponding updates to show that with "ceph df" are on the way and are targeted for Nautilus.
Thanks,
Igor
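For reference, a minimal sketch of the setup David describes above (the pool name "mypool" is hypothetical; adjust to your own pools):

    # Flag writes to this one pool for compression
    ceph osd pool set mypool compression_mode aggressive

    # Verify the pool setting
    ceph osd pool get mypool compression_mode

    # In ceph.conf on the OSD hosts, so OSDs only compress writes
    # that a pool has flagged (the default of "none" ignores the flag):
    [osd]
    bluestore_compression_mode = passive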
On 6/26/2018 4:53 PM, David Turner wrote:
ceph daemon osd.1 perf dump | grep bluestore | grep compress
    "bluestore_compressed": 0,
    "bluestore_compressed_allocated": 0,
    "bluestore_compressed_original": 0,
    "bluestore_extent_compress": 35372,
I filled up an RBD in a compressed pool (aggressive) in my test cluster with 1TB of zeros. The bluestore OSDs in the cluster all show output similar to this. Am I missing something here? Is there any other method for determining compression ratios?
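For what it's worth, once those counters are non-zero the effective ratio can be derived from them. A minimal sketch, assuming the admin socket for osd.1 is reachable on the local host:

    # compute the effective compression ratio from the perf counters (osd.1)
    ceph daemon osd.1 perf dump | python -c 'import json,sys; bs=json.load(sys.stdin)["bluestore"]; a=bs["bluestore_compressed_allocated"]; o=bs["bluestore_compressed_original"]; print("ratio: %.2fx" % (o/float(a)) if a else "nothing compressed yet")'

Here bluestore_compressed_allocated is the space actually allocated on disk for compressed data, and bluestore_compressed_original is the uncompressed size of that same data.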
On Tue, Dec 5, 2017 at 1:25 AM Rafał Wądołowski <rwadolowski@xxxxxxxxxxxxxx> wrote:
Finally, I've found the command:
ceph daemon osd.1 perf dump | grep bluestore
And there you have the compressed data counters.
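If you want to check every OSD on a host at once, a hypothetical loop over the default admin-socket path would look something like this:

    # assumes the stock admin socket location under /var/run/ceph/
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "== $sock =="
        ceph daemon "$sock" perf dump | grep bluestore_compressed
    done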
On 04.12.2017 14:17, Rafał Wądołowski wrote:
Hi,
Is there any command or tool to show the effectiveness of bluestore compression?
I can see the difference (in ceph osd df tree) while uploading an object to Ceph, but maybe there is a friendlier method to do it.
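A rough sketch of that before/after check, for illustration (the file names are arbitrary):

    ceph osd df tree > usage_before.txt
    # ... upload the object ...
    ceph osd df tree > usage_after.txt
    diff usage_before.txt usage_after.txt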
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com