On 2/20/2018 11:57 AM, Flemming Frandsen wrote:
I have set up a little ceph installation and added about 80k files of
various sizes, then I added 1M files of 1 byte each totalling 1 MB, to
see what kind of overhead is incurred per object.
The overhead for adding 1M objects seems to be 12252M/1000000 =
0.012252M or 122 kB per file, which is a bit high, but in line with a
min allocation size of 64 kB.
0.012M = 12 kB, not 122 kB.
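Spelling the arithmetic out (a quick sanity check, just re-using the figures quoted above):

    raw_used_delta_mb = 12252      # RAW USED grew by 12252M after the 1M-object run
    objects = 1_000_000

    per_object_kb = raw_used_delta_mb * 1000 / objects   # MB -> kB
    print(per_object_kb)           # 12.252 -> about 12 kB per object, not 122 kB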
My ceph.conf file contained this line from when I initially deployed
the cluster:
bluestore min alloc size = 4096
So the min alloc size setting for the cluster in question is 4K, not
64K, right?
And the pool replication factor is 3, isn't it?
Then one can probably explain the additional ~12 GB of raw space as:
1M objects * min_alloc_size * replication_factor = 1E6 * 4096 * 3 ≈ 12 GB
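As a minimal sketch of that estimate (assuming 4K allocation units, 3x replication and ignoring any per-object metadata cost):

    min_alloc_size = 4096          # bytes, from "bluestore min alloc size = 4096"
    replication_factor = 3         # assumed pool size
    objects = 1_000_000

    expected_raw = objects * min_alloc_size * replication_factor
    print(expected_raw / 1e9)      # 12.288 -> roughly the observed +12252M of RAW USED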
How do I set the min alloc size if not in the ceph.conf file?
Is it possible to change bluestore min alloc size for an existing
cluster? How?
This is a per-OSD setting that can't be altered after OSD deployment. So
you should either redeploy the whole cluster, or redeploy the OSDs one by
one if you want to preserve your data.
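For example, assuming the same ceph.conf mechanism you used originally, you would put the desired value in place first (64K shown here purely as an illustration) and then redeploy each OSD so it picks the value up at creation time:

    [osd]
    bluestore min alloc size = 65536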
Even at this level of overhead I'm nowhere near the 1129 kB per
file that was lost with the real data.
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED   OBJECTS
    273G   253G    19906M     7.12        81059
POOLS:
    NAME                  ID   QUOTA OBJECTS   QUOTA BYTES   USED     %USED   MAX AVAIL   OBJECTS   DIRTY   READ    WRITE   RAW USED
    .rgw.root             1    N/A             N/A           1113     0       120G        4         4       108     4       2226
    default.rgw.control   2    N/A             N/A           0        0       120G        8         8       0       0       0
    default.rgw.meta      3    N/A             N/A           0        0       120G        0         0       0       0       0
    default.rgw.log       4    N/A             N/A           0        0       120G        207       207     54085   36014   0
    fs1_data              5    N/A             N/A           7890M    3.11    120G        80001     80001   0       715k    15781M
    fs1_metadata          6    N/A             N/A           40951k   0.02    120G        839       839     682     103k    81902k
Overhead per object: (19586M-15781M) / 81059 = 0.046M = 46 kB per object
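(The same calculation as a quick sketch, re-using the numbers above:)

    raw_used_mb = 19586         # raw usage figure used in the calculation above
    fs1_data_raw_mb = 15781     # RAW USED of the fs1_data pool
    objects = 81059

    overhead_kb = (raw_used_mb - fs1_data_raw_mb) * 1000 / objects
    print(overhead_kb)          # ~46.9 -> about 46 kB of overhead per object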
Added 1M files of 1 byte each totalling 1 MB:
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED   OBJECTS
    273G   241G    32158M     11.50       1056k
POOLS:
    NAME                  ID   QUOTA OBJECTS   QUOTA BYTES   USED     %USED   MAX AVAIL   OBJECTS   DIRTY   READ    WRITE   RAW USED
    .rgw.root             1    N/A             N/A           1113     0       114G        4         4       108     4       2226
    default.rgw.control   2    N/A             N/A           0        0       114G        8         8       0       0       0
    default.rgw.meta      3    N/A             N/A           0        0       114G        0         0       0       0       0
    default.rgw.log       4    N/A             N/A           0        0       114G        207       207     56374   37540   0
    fs1_data              5    N/A             N/A           7891M    3.27    114G        1080001   1054k   287k    3645k   15783M
    fs1_metadata          6    N/A             N/A           29854k   0.01    114G        1837      1837    5739    118k    59708k
Delta:
fs1_data: +2M raw space as expected
fs1_metadata: -22M raw space, because who the fuck knows?
RAW USED: +12252M
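For completeness, those deltas follow directly from the two snapshots above (RAW USED figures, with fs1_metadata converted from kB to MB):

    print(32158 - 19906)                    # +12252M overall
    print(15783 - 15781)                    # +2M for fs1_data
    print(round((59708 - 81902) / 1024))    # about -22M for fs1_metadata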