I just found an interesting thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html

I assume this is the case I'm dealing with. The question is: can I safely
adapt the parameter bluestore_min_alloc_size_hdd, and how will the system
react? Is this backwards compatible? The current setting is 64 KB.

On Wed, 12 Feb 2020 at 12:57, Kristof Coucke <kristof.coucke@xxxxxxxxx> wrote:

> Hi all,
>
> I have an issue on my Ceph cluster.
> For one of my pools I have 107 TiB STORED and 298 TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks,
> 3 coding chunks), so in an ideal world this should result in approx.
> 160.5 TiB USED.
>
> The question now is why this is the case...
> There are 473+ million objects stored. Lots of these files are pretty
> small (read: ~150 KB files), but not all of them.
> I am running Nautilus version 14.2.4.
>
> I suspect that the stripe size is related to this issue. It is still
> the default (4 MB), but I am not sure.
> Before BlueStore it was easy to check the size of the chunks on the
> disk... With BlueStore this is another story.
>
> I have the following questions:
> 1. How can I check this and be sure that this is the cause? I want to
> drill down starting from an object I've sent to the Ceph cluster
> through the RGW: I would like to see where the chunks are stored and
> which size is allocated for them on the disks.
> 2. If it is related to the stripe size, can I safely adapt this
> parameter? Will it only take effect for newly written data, or will it
> also apply retroactively to existing data?
>
> Many thanks,
>
> Kristof
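
PS: to illustrate why I suspect the 64 KB min_alloc_size, here is a quick
back-of-the-envelope estimate in Python. It is only a sketch: it assumes
every erasure-coded chunk (data and coding) is allocated in multiples of
bluestore_min_alloc_size_hdd, and it ignores EC stripe_unit padding and
BlueStore metadata.

    # Rough estimate of on-disk usage for an erasure-coded object,
    # assuming each of the k+m chunks is rounded up to a multiple of
    # min_alloc_size. Numbers (k=6, m=3, 64 KiB min_alloc_size, ~150 KiB
    # objects, 4 MiB stripe) are taken from this thread; the rounding
    # model itself is an assumption on my side.
    import math

    def estimated_usage(object_size, k=6, m=3, min_alloc_size=64 * 1024,
                        stripe_width=4 * 1024 * 1024):
        """Return the estimated on-disk bytes for one object."""
        used = 0
        remaining = object_size
        # The object is written in stripes of `stripe_width` bytes,
        # each split into k data chunks plus m coding chunks.
        while remaining > 0:
            stripe = min(remaining, stripe_width)
            chunk = math.ceil(stripe / k)               # payload per data chunk
            alloc = math.ceil(chunk / min_alloc_size) * min_alloc_size
            used += alloc * (k + m)                     # data + coding chunks
            remaining -= stripe
        return used

    size = 150 * 1024                                   # ~150 KiB object
    used = estimated_usage(size)
    print(f"stored: {size} B, used: {used} B, "
          f"amplification: {used / size:.2f}x "
          f"(ideal would be {(6 + 3) / 6:.2f}x)")

With those numbers a ~150 KB object would occupy 9 x 64 KB = 576 KB on
disk, i.e. roughly 3.8x instead of the ideal 1.5x. Mixed with larger
objects that sit close to 1.5x, that would be consistent with the ~2.8x
(298/107) I am currently seeing.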