bluestore min alloc size vs. wasted space

I have set up a small ceph installation and added about 80k files of various sizes. Then, to see what kind of overhead is incurred per object, I added 1M files of 1 byte each, totalling 1 MB of actual data.

The overhead for adding 1M objects seems to be 12252M/1000000 = 0.012252M, or roughly 12 kB of raw space per file, which is a bit high: with the 2x replication that the fs1_data numbers below suggest and the 4 kB min allocation size I thought I had configured, I would expect closer to 8 kB of raw space per tiny object.
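To spell the arithmetic out (the 2x replication factor is my own inference from the fs1_data USED vs RAW USED ratio below):

    raw used delta:    32158M - 19906M = 12252M
    per object (raw):  12252M / 1000000 objects ≈ 12.25 kB
    per copy (2x):     12.25 kB / 2 ≈ 6 kB
    expected:          2 copies x 4 kB min alloc = 8 kB raw per object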


My ceph.conf file contained this line from when I initially deployed the cluster:
    bluestore min alloc size = 4096
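
(As an aside: I assume the value a running OSD is actually using can be queried through its admin socket with something like the commands below, where osd.0 just stands in for whichever OSD id lives on that host, though I don't know whether that shows what the OSD was formatted with or merely the current config value.)

    ceph daemon osd.0 config get bluestore_min_alloc_size
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    ceph daemon osd.0 config get bluestore_min_alloc_size_ssd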

How do I set the min alloc size if not in the ceph.conf file?

Is it possible to change bluestore min alloc size for an existing cluster? How?


Even at this level of overhead, I'm nowhere near the 1129 kB per file that was lost with the real data.

For reference, here is the cluster usage before I added the 1M files:


GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS
    273G      253G       19906M          7.12       81059
POOLS:
    NAME                    ID     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ      WRITE     RAW USED
    .rgw.root               1      N/A               N/A             1113           0          120G           4         4      108           4         2226
    default.rgw.control     2      N/A               N/A                0           0          120G           8         8        0           0            0
    default.rgw.meta        3      N/A               N/A                0           0          120G           0         0        0           0            0
    default.rgw.log         4      N/A               N/A                0           0          120G         207       207    54085       36014            0
    fs1_data                5      N/A               N/A            7890M        3.11          120G       80001     80001        0        715k       15781M
    fs1_metadata            6      N/A               N/A           40951k        0.02          120G         839       839      682        103k       81902k

Overhead per object: (19586M-15781M) / 81059 = 0.046M = 46 kB per object



After adding the 1M files of 1 byte each (totalling 1 MB):


GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS
    273G      241G       32158M         11.50       1056k
POOLS:
    NAME                    ID     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ      WRITE     RAW USED
    .rgw.root               1      N/A               N/A             1113           0          114G           4         4      108           4         2226
    default.rgw.control     2      N/A               N/A                0           0          114G           8         8        0           0            0
    default.rgw.meta        3      N/A               N/A                0           0          114G           0         0        0           0            0
    default.rgw.log         4      N/A               N/A                0           0          114G         207       207    56374       37540            0
    fs1_data                5      N/A               N/A            7891M        3.27          114G     1080001     1054k     287k       3645k       15783M
    fs1_metadata            6      N/A               N/A           29854k        0.01          114G        1837      1837     5739        118k       59708k

Delta:
   fs1_data: +2M raw space, as expected (1 MB of data x 2 replicas)
   fs1_metadata: -22M raw space, for reasons I don't understand
   RAW USED: +12252M (tallied up below)
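
In other words, almost none of the extra raw usage shows up in any pool; my own rough tally of the two outputs above:

   pool raw delta:    +2M (fs1_data) - 22M (fs1_metadata) ≈ -20M
   global raw delta:  32158M - 19906M = +12252M
   unaccounted:       ≈ 12270M, i.e. roughly 12 kB of raw space per new object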

--
 Regards Flemming Frandsen - Stibo Systems - DK - STEP Release Manager
 Please use release@xxxxxxxxx for all Release Management requests



