On 02/12/2018 03:16 PM, Behnam Loghmani wrote:
So you mean that rocksdb and the osdmap filled about 40 GB of the disk
for only 800k files?
I think that's not reasonable; it's far too high.
Could you check the output of 'perf dump' on the admin socket of each OSD?
The 'bluestore' and 'bluefs' sections should tell you:
- db_used_bytes
- onodes
Using those values you can figure out how much space the DB is using
and how many objects you have in each OSD.
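A quick way to pull those two values (a rough, untested sketch; 'osd.0'
is just a placeholder and the exact counter names can differ a bit
between releases):

    import json, subprocess

    osd = "osd.0"  # placeholder: run on the host of each OSD in turn
    # 'ceph daemon <osd> perf dump' prints all perf counters as JSON
    dump = json.loads(subprocess.check_output(
        ["ceph", "daemon", osd, "perf", "dump"]).decode())

    print("db_used_bytes:", dump["bluefs"]["db_used_bytes"])
    # the onode counter is called 'bluestore_onodes' on my systems
    print("onodes:", dump["bluestore"]["bluestore_onodes"])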
Wido
On Mon, Feb 12, 2018 at 5:06 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
Some of your overhead is the WAL and rocksdb that are on the OSDs.
The WAL is pretty static in size, but rocksdb grows with the number
of objects you have. You also have copies of the osdmap on each OSD.
There's just overhead that adds up. The biggest contributor is going
to be rocksdb, given how many objects you have.
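If you want to see how much of the used space is object data allocation
versus the DB, you can compare a couple of counters from 'perf dump'
(a rough, untested sketch; counter names are from memory and may vary
slightly by release):

    import json, subprocess

    # compare logical bytes stored vs bytes actually allocated on disk
    dump = json.loads(subprocess.check_output(
        ["ceph", "daemon", "osd.0", "perf", "dump"]).decode())
    bs = dump["bluestore"]

    stored = bs["bluestore_stored"]        # logical object data in bytes
    allocated = bs["bluestore_allocated"]  # bytes allocated for that data
    db_used = dump["bluefs"]["db_used_bytes"]  # space rocksdb uses in bluefs

    gib = float(2 ** 30)
    print("stored:    %.1f GiB" % (stored / gib))
    print("allocated: %.1f GiB" % (allocated / gib))
    print("db_used:   %.1f GiB" % (db_used / gib))

The gap between stored and allocated is the per-object allocation
rounding; db_used_bytes is what rocksdb itself takes.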
On Mon, Feb 12, 2018, 8:06 AM Behnam Loghmani
<behnam.loghmani@xxxxxxxxx> wrote:
Hi there,
I am using Ceph Luminous 12.2.2 with:
3 OSDs (each OSD is 100 GB) - no WAL/DB separation
3 mons
1 rgw
replication size 3
I stored lots of very small thumbnails on Ceph through radosgw.
The actual size of the files is about 32 GB, but they fill 70 GB on
each OSD.
What is the reason for this high disk usage?
Should I change "bluestore_min_alloc_size_hdd"? If I set it to a
smaller value, does that impact performance?
What is the best practice for storing small files on BlueStore?
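My rough back-of-the-envelope estimate (assuming the Luminous default
bluestore_min_alloc_size_hdd of 64 KiB and roughly 800k objects per
OSD; please correct me if this is wrong):

    # every object allocates at least min_alloc_size on a BlueStore HDD OSD
    objects = 800000         # approximate object count per OSD
    min_alloc = 64 * 1024    # Luminous default bluestore_min_alloc_size_hdd
    gib = float(2 ** 30)
    print("minimum allocation: %.1f GiB" % (objects * min_alloc / gib))
    # -> roughly 49 GiB allocated, even though the data itself is ~32 GiB

If that reasoning is right, a large part of the gap between 32 GB and
70 GB could be allocation rounding rather than rocksdb alone.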
Best regards,
Behnam Loghmani
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com