Ceph Erasure Coding - Stored vs used

Hi all,

I have an issue on my Ceph cluster.
For one of my pools I have 107 TiB STORED and 298 TiB USED.
This is strange, since I've configured erasure coding with 6 data chunks
and 3 coding chunks, so in an ideal world this should result in
approximately 160.5 TiB USED (107 × 9/6).
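For reference, here is the arithmetic behind that expectation as a quick sketch (nothing cluster-specific assumed, just the k/m profile):

```python
# Ideal space usage for an erasure-coded pool: each object is split into
# k data chunks, and m coding chunks of the same size are added, so
# USED = STORED * (k + m) / k.
def ideal_used_tib(stored_tib, k, m):
    return stored_tib * (k + m) / k

print(ideal_used_tib(107, 6, 3))  # 160.5 -- far below the observed 298 TiB
```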

The question now is why this is the case.
There are 473M+ objects stored, and many of these files are pretty small
(around 150 kB), though not all of them.
I am running Nautilus version 14.2.4.
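One explanation I wonder about (an assumption on my part, not verified on this cluster): with BlueStore, every EC chunk is rounded up to the allocation unit, and in Nautilus the default bluestore_min_alloc_size_hdd is 64 KiB. For small objects the rounding would dominate, as this sketch shows:

```python
import math

# Rough on-disk usage for a small object in an EC pool, ASSUMING each of
# the k+m chunks is rounded up to BlueStore's min_alloc_size
# (64 KiB is the Nautilus default for HDDs; verify on your own OSDs).
def ec_used_bytes(object_bytes, k, m, min_alloc=64 * 1024):
    chunk = math.ceil(object_bytes / k)               # one data chunk
    allocated = math.ceil(chunk / min_alloc) * min_alloc
    return (k + m) * allocated                        # data + coding chunks

obj = 150 * 1000                                      # a ~150 kB object
used = ec_used_bytes(obj, 6, 3)
print(used, used / obj)  # 589824 bytes allocated, ~3.93x amplification
```

Each 25 kB chunk occupies a full 64 KiB allocation unit, so a 150 kB object consumes ~576 KiB on disk. Averaged over a mix of small and large objects, that could plausibly produce the observed 2.8× ratio instead of the ideal 1.5×.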

I suspect that the stripe size is related to this issue. It is still at
the default (4 MB), but I am not sure.
Before BlueStore it was easy to check the size of the chunks on the
disk... With BlueStore this is another story.

I have the following questions:
1. How can I check whether this is actually the case? I would like to
drill down starting from an object I've sent to the Ceph cluster through
the RGW, and see where its chunks are stored and how much space is
allocated for them on the disks.
2. If it is related to the stripe size, can I safely change this
parameter? Will it apply only to objects written after the change, or
will it also be applied retroactively to existing objects?
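In case it helps, this is the kind of drill-down I have in mind for question 1 (a sketch; the bucket name, object name, pool name, and OSD id are placeholders, not real values from my cluster):

```shell
# 1. Find the underlying RADOS object(s) backing an RGW object.
radosgw-admin object stat --bucket=mybucket --object=myfile.bin

# 2. Map a RADOS object name to its placement group and acting OSDs.
ceph osd map my-ec-pool <rados-object-name>

# 3. On one of those OSDs, check the allocation unit BlueStore is using.
ceph daemon osd.12 config get bluestore_min_alloc_size_hdd
```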

Many thanks,

Kristof
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


