Re: cephfs compression?

I'm using compression on a cephfs-data pool in luminous. I didn't do
anything special:

$ sudo ceph osd pool get cephfs-data all | grep ^compression
compression_mode: aggressive
compression_algorithm: zlib
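
For reference, a sketch of how those pool-level settings get applied
(substitute your own pool name):

$ sudo ceph osd pool set cephfs-data compression_mode aggressive
$ sudo ceph osd pool set cephfs-data compression_algorithm zlib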

You can check how much compression you're getting on the OSDs:
$ for osd in `seq 0 11`; do echo osd.$osd; sudo ceph daemon osd.$osd \
    perf dump | grep 'bluestore_compressed'; done
osd.0
        "bluestore_compressed": 686487948225,
        "bluestore_compressed_allocated": 788659830784,
        "bluestore_compressed_original": 1660064620544,
<snip>
osd.11
        "bluestore_compressed": 700999601387,
        "bluestore_compressed_allocated": 808854355968,
        "bluestore_compressed_original": 1752045551616,

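A rough ratio is compressed_original divided by compressed_allocated; for
osd.11 above that works out to 1752045551616 / 808854355968 ~= 2.17x. A
minimal sketch that computes it from the same grep output shown above (it
just parses the two counter lines):

$ sudo ceph daemon osd.11 perf dump | grep 'bluestore_compressed' | \
    awk -F'[:,]' '/_original/{o=$2} /_allocated/{a=$2} END{printf "%.2fx\n", o/a}'
2.17x
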
I can't speak for mimic, but on luminous v12.2.5 compression is
definitely working well with mostly default options.
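
If you want to see which compression defaults the OSDs themselves are
configured with (the pool-level settings above override these), you can
dump the runtime config; a sketch, the relevant options all start with
bluestore_compression:

$ sudo ceph daemon osd.0 config show | grep bluestore_compression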

-Rich

> For RGW, compression works very well. We use RGW to store crash dumps;
> in most cases the compression ratio is about 2.0 ~ 4.0.

> I tried to enable compression for the cephfs data pool:

> # ceph osd pool get cephfs_data all | grep ^compression
> compression_mode: force
> compression_algorithm: lz4
> compression_required_ratio: 0.95
> compression_max_blob_size: 4194304
> compression_min_blob_size: 4096

> (We built the ceph packages ourselves and enabled lz4.)

> It doesn't seem to work. I copied an 8.7 GB folder to cephfs, and ceph df
> says it used 8.7 GB:

> root@ceph-admin:~# ceph df
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     16 TiB     16 TiB      111 GiB          0.69
> POOLS:
>     NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
>     cephfs_data         1      8.7 GiB      0.17       5.0 TiB      360545
>     cephfs_metadata     2      221 MiB         0       5.0 TiB       77707

> I know this folder can be compressed to ~4.0 GB under ZFS lz4 compression.

> Am I missing anything? How do I make cephfs compression work? Is there
> any trick?

> By the way, I am evaluating ceph mimic v13.2.0.

> Thanks in advance,
> --Youzhong
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


