Re: Raw space usage in Ceph with Bluestore

Hi Jody,

yes, this is a known issue.

Indeed, currently 'ceph df detail' reports raw space usage in the GLOBAL section and 'logical' usage in the POOLS one, and the logical numbers have some flaws.

There is a pending PR targeting Nautilus to fix that:

https://github.com/ceph/ceph/pull/19454

If you want to do the analysis at exactly the per-pool level, that PR is the only means I'm aware of.


If per-cluster stats are enough, you can also inspect the corresponding OSD performance counters and sum them over all OSDs to get per-cluster numbers.

This is the most precise, though quite inconvenient, method, and it also gives you low-level per-OSD space analysis.
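The counters below can be read from each OSD's admin socket, e.g. with 'ceph daemon osd.<id> perf dump' (assuming you have admin-socket access on the OSD hosts):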

 "bluestore": {
...

       "bluestore_allocated": 655360, # space allocated at BlueStore for the specific OSD
        "bluestore_stored": 34768,  # amount of data stored at BlueStore for the specific OSD
...

Please note that aggregate numbers built from these counters include all the replication/EC overhead. The difference between bluestore_stored and bluestore_allocated is due to allocation overhead and/or applied compression.
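A minimal sketch of the summation, assuming the default admin-socket paths under /var/run/ceph and that you run it on every OSD host and add up the per-host totals (the socket glob and output format here are my assumptions, not something from the original post):

#!/usr/bin/env python3
# Sketch: sum BlueStore space counters over all OSDs running on this host.
# Assumes default admin socket paths (/var/run/ceph/ceph-osd.<id>.asok);
# repeat per host and add up the results for cluster-wide totals.
import glob
import json
import subprocess

allocated = 0
stored = 0

for sock in glob.glob("/var/run/ceph/ceph-osd.*.asok"):
    # 'ceph daemon <socket> perf dump' returns the per-OSD performance counters as JSON
    dump = json.loads(subprocess.check_output(["ceph", "daemon", sock, "perf", "dump"]))
    bs = dump.get("bluestore", {})
    allocated += bs.get("bluestore_allocated", 0)
    stored += bs.get("bluestore_stored", 0)

print("bluestore_allocated: %d bytes" % allocated)
print("bluestore_stored:    %d bytes" % stored)
if stored:
    # ratio > 1 indicates allocation overhead, < 1 indicates compression wins
    print("allocated/stored ratio: %.2f" % (allocated / stored))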


Thanks,

Igor


On 11/29/2018 12:27 AM, Glider, Jody wrote:

 

Hello,

 

I’m trying to find a way to determine real/physical/raw storage capacity usage when storing a similar set of objects in different pools, for example a 3-way replicated pool vs. a 4+2 erasure-coded pool, and in particular how this ratio changes from small object sizes (where the BlueStore block size matters more) to large ones.

 

I find that ceph df detail and rados df don’t report truly raw storage, I guess because each perceives ‘raw’ storage only from its own perspective. If I write a set of objects to each pool, rados df shows the space used as the sum of the logical sizes of the objects, while ceph df detail shows the raw used storage as the object size * the redundancy factor (e.g. 3 for 3-way replication and 1.5 for 4+2 erasure code).

 

Any suggestions?

 

Jody Glider, Principal Storage Architect

Cloud Architecture and Engineering, SAP Labs LLC

3412 Hillview Ave (PAL 02 23.357), Palo Alto, CA 94304

E   j.glider@xxxxxxx, T   +1 650-320-3306, M   +1 650-441-0241

 

 

 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

