In the same pool with compression enabled, I also have a 1TB RBD filled with a 10GB /dev/urandom file repeated through the entire RBD. Deleting both of these RBDs didn't change the bluestore_extent_compress counter. I'm also fairly certain that's the same number I saw there before starting these tests, but I have no proof. There are 6 nodes with 5 OSDs each; 5 of the nodes have bluestore OSDs and 1 has filestore (part of the test was to see how a pool with compression enabled behaves with both filestore and bluestore OSDs in it).
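(For anyone reproducing this: the random fill can be done roughly as in the sketch below; the pool name, image name, and /dev/rbd0 device path are illustrative, and it assumes the image is kernel-mapped.)

# create a 10 GiB file of incompressible data
dd if=/dev/urandom of=/tmp/random10g bs=1M count=10240
# map the 1TB image; assume it appears as /dev/rbd0
rbd map compressed_pool/test-image
# write the 10 GiB file back to back (~100 times) to cover the whole device
for i in $(seq 0 101); do
    dd if=/tmp/random10g of=/dev/rbd0 bs=1M seek=$((i * 10240)) conv=notrunc
done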
On Tue, Jun 26, 2018 at 10:07 AM Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:
Hi,

Zeros are not a great choice of data for testing a storage system unless you are specifically testing what it does with zeros. Ceph knows that higher layers in the storage stack use zero-fill for certain things and will probably optimise for it. E.g., it's common for thin-provisioning systems to not actually store runs of 0's (they'll still return them when read). I don't know what Ceph does in this specific case, but I could imagine it allocating the relevant space within the RBD object-map but never bothering to send anything to the actual objects.

Also, note that the description of bluestore_extent_compress is "Sum for extents that have been removed due to compression", so it sounds to me like something is working. Perhaps you can provide more detail of the overall stats, the cluster config, and whether anything else is stored in it?

On Tue, 26 Jun 2018 at 23:53, David Turner <drakonstein@xxxxxxxxx> wrote:

ceph daemon osd.1 perf dump | grep bluestore | grep compress
    "bluestore_compressed": 0,
    "bluestore_compressed_allocated": 0,
    "bluestore_compressed_original": 0,
    "bluestore_extent_compress": 35372,

I filled up an RBD in a compressed pool (aggressive) in my test cluster with 1TB of zeros. The bluestore OSDs in the cluster all show similar to this. Am I missing something here? Is there any other method for determining compression ratios?

On Tue, Dec 5, 2017 at 1:25 AM Rafał Wądołowski <rwadolowski@xxxxxxxxxxxxxx> wrote:

Finally, I've found the command:
ceph daemon osd.1 perf dump | grep bluestore
There you can see the compressed data counters.
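(To turn those counters into an actual ratio, something like the following works; it assumes the counters sit under the "bluestore" section of the perf dump JSON, as the grep output above suggests, and that osd.1 runs on the local host:)

ceph daemon osd.1 perf dump | python -c '
import json, sys
b = json.load(sys.stdin)["bluestore"]
alloc = b["bluestore_compressed_allocated"]  # bytes allocated on disk for compressed blobs
orig = b["bluestore_compressed_original"]    # original (uncompressed) bytes
print("allocated: %d  original: %d" % (alloc, orig))
if alloc:
    print("compression ratio: %.2f" % (float(orig) / alloc))
'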
On 04.12.2017 14:17, Rafał Wądołowski wrote:
Hi,
Is there any command or tool to show effectiveness of bluestore compression?
I see the difference (in ceph osd df tree) while uploading an object to Ceph, but maybe there is a friendlier method to do it.
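(For context, per-pool bluestore compression is controlled with pool properties along these lines; the pool name here is illustrative:)

ceph osd pool set test_pool compression_algorithm snappy
ceph osd pool set test_pool compression_mode aggressive
# optional: only store blobs that compress to <= 87.5% of their original size
ceph osd pool set test_pool compression_required_ratio .875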
--
Cheers,
~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com